std::reverse not supported?

Hi all, I'm new to C++ and Qt. I have been a C# VS dev since, well, before it was released, and was a Java guy before that. Now that you know who I am, here's the issue I'm having: I have a solution that was developed in VS. Within that solution is a vectorUtil.h file. In that file is a method called Reverse which uses void std::reverse( BidirIt first, BidirIt last ). I know that it may be frowned upon to have implementation in a .h file, but I didn't write it. However, when I integrated the code into a Qt project (Qt Creator 4.3.1 Community), it issues a compile-time error that states: 'reverse' was not declared in this scope. It doesn't appear in intellisense (or whatever it's called in Qt). It's almost as if it's not in the std library. Anybody have any insight? I've bold'd the line that the compiler is puking on. Here is the relevant code and includes:

#pragma once

#ifdef DLL_EXPORT_WIN32
    #ifdef UTILITIES_EXPORTS
        #define UTILITIES_API __declspec(dllexport)
    #else
        #define UTILITIES_API __declspec(dllimport)
    #endif
#else
    #define UTILITIES_API
#endif

#include <vector>
#include <string>

using namespace std;

namespace Utilities
{
    /// <summary>
    /// Class Property.
    /// Helps with creating abstractions
    /// </summary>
    class UTILITIES_API VectorUtil
    {
    public:
        /// <summary>
        /// Reverses the specified data.
        /// </summary>
        /// <param name="data" type="vector<BYTE>&">The data.</param>
        /// <returns>std.vector<_Ty, _Alloc>.</returns>
        template<class Type>
        static vector<Type> Reverse(vector<Type>& data)
        {
            vector<Type> reverseData(data.size());
            Copy(data, 0, reverseData, 0, reverseData.size());
            **reverse(begin(reverseData), end(reverseData));**
            return reverseData;
        }
    };
}

You're missing the #include <algorithm>, see for instance cplusplus.com.

Thank you for your response Johan. I had thought that too.
I added the #include <algorithm> and then, upon compiling, got the following error: [copydata] Error 4. It's really descriptive and lets me know exactly what is wrong. In searching for it online, I found troves of information on the error... ok, not really. It's not descriptive and there is nothing online about that error that I can find :) However, I did notice that I'm on 5.8 and need to be on 5.9.1. I'll update and report back. We can put this one to rest. I upgraded to 5.9.1 and that did the trick. Thanks!
https://forum.qt.io/topic/82801/std-reverse-not-supported
Introduction

A bar plot is a graphical representation of the relationship between a categorical and a numerical variable. In general, two popular types are used for data visualization: the dodged and the stacked bar plot. A stacked bar plot is used to represent a grouping variable, where group counts or relative proportions are plotted in a stacked manner. Occasionally, it is used to display relative proportions that sum to 100%.

Article Outline

The article comprises the following segments:

The very first step is to load the required libraries: numpy, pandas and matplotlib.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

Basic knowledge of matplotlib's subplots

If you have basic knowledge of matplotlib's subplots( ) method, you can proceed with this article; otherwise, I highly recommend reading the first blog of this visualisation guide series. Link: Introduction to Line Plot — Matplotlib, Pandas and Seaborn Visualization Guide (Part 1)

Dataset Description

For the current article,

Reading dataset

The first step is to read the tips dataset using pandas' read_csv( ) method and to print the first 4 rows.

tips = pd.read_csv("datasets/tips.csv")
tips.head(4)

Objective

The goal of this article is to generate the following stacked bar plot, which represents the gender-wise smoker proportion, where the smoker categories for each gender group sum to 100%.

Data Preparation

To generate the stacked bar plot we need to compute the sex-wise smoker proportion. To achieve this, you need to go through the following steps.

Step 1: Group the data by sex and select the smoker column from each group.
Step 2: Apply the value_counts( ) method and supply normalize = True to convert counts to proportions.
Step 3: Multiply by 100 using the mul( ) method.
Step 4: Round to 2 decimal places using the round( ) method.
Step 5: Use the unstack( ) method so that the sex labels are presented in the index, the smoker statuses in the columns, and the percentage values in the cells. We will save the final output to the df object and use it for the final plot generation.

df = (tips
      .groupby("sex")["smoker"]
      .value_counts(normalize=True)
      .mul(100)
      .round(2)
      .unstack())
df

Stacked bar plot using Matplotlib style

In the first step, we are going to use raw matplotlib syntax to achieve the final plot. Follow these steps:

Step 1: Instantiate the subplots( ) method with a 12 inch width and 6 inch height and save the figure and axes objects to fig and ax.
Step 2: Use the ax.bar( ) method: on the x-axis we supply the index values using df.index and on the y-axis we supply the "No" column values. Further, we label them as No and supply a bar width of 0.3.
Step 3: Again use the ax.bar( ) method, but this time supply the "Yes" column that we want to stack over the "No" bars. Note that since we would like to stack it over the "No" bars, the "Yes" bars must start wherever the "No" bars end. Thus, we need to inform the bar( ) method that the current starting position (here bottom) is the df.No column values (the heights of the "No" bars). We label this as Yes and supply a width of 0.3.

fig, ax = plt.subplots(figsize = (12,6))
ax.bar(df.index, df["No"], label = "No", width = 0.3)
ax.bar(df.index, df["Yes"], bottom = df.No, label = "Yes", width = 0.3)

Stacked bar plot customisation

In the customisation part, the first thing is to add the data labels inside the bars. To do so, similar to the dodged bar plot, we need to get familiar with the plot internals. Let's understand the container object of the bar plot that will help us achieve our desired plot. The axes (ax) object contains an attribute called containers. If we run ax.containers, it will display a list of 2 objects, where each object is a bar container of 2 artists.
In simple language, each container object holds the bars from one input: No (the blue bars) or Yes (the orange bars).

# Check the containers
ax.containers

[<BarContainer object of 2 artists>, <BarContainer object of 2 artists>]

We can take out the first and second object from containers by index and print them separately, which displays the same output.

# Print what containers 0 and 1 hold
print(ax.containers[0])
print(ax.containers[1])

<BarContainer object of 2 artists>
<BarContainer object of 2 artists>

Now, let's say we want to get further inside each container and print each bar. We index first by container (0 or 1) and then by bar (0 or 1). The output shows that the first container contains two blue bars with heights 62.07 and 61.78. Similarly, the second container contains two orange bars with heights 37.93 and 38.22.

# Accessing what each container holds
print(ax.containers[0][0])
print(ax.containers[0][1])
print(ax.containers[1][0])
print(ax.containers[1][1])

We can use two for loops to achieve the same output. Here, we first loop through each container and then loop through each bar.

# Access what each of the containers contains using for loops
for c in ax.containers:
    for v in c:
        print(v)

Getting the bar height

To get the height of each bar while looping through the bars, we can use the get_height( ) method and round the values to 2 decimal places. The first two outputs show the heights of the blue bars (from left to right) and the remaining two relate to the orange bars.

# Accessing the heights of each rectangle
for c in ax.containers:
    for v in c:
        print(v.get_height().round(2))

62.07
61.78
37.93
38.22

For labelling the stacked bars, we need the bar heights from each container in the form of a list. To achieve this, we can use a list comprehension.
We need to go through the following steps:

Step 1: Loop through each container and save it to a temporary variable c.
Step 2: Use a list comprehension with another loop that loops through the bars under each container (c).
Step 3: Use a conditional if statement which returns the height of the bar if the height is greater than 0, else returns an empty string. This ensures a bar label is added only when the bar height is above 0.

# Looping through and printing each container object's heights
for c in ax.containers:
    # Optional: if the segment is small or 0, customize the labels
    print([round(v.get_height(), 2) if v.get_height() > 0 else '' for v in c])

[62.07, 61.78]
[37.93, 38.22]

Adding labels, removing spines, modifying axes labels and legend

Adding labels: To add the labels, we go through the approach discussed above and save the list of bar heights in labels. Note: here we have added a % symbol by converting the height values to strings using the str( ) method. Then we use the dedicated bar labelling method ax.bar_label( ), where we supply the container object [c] and the labels [labels = labels]. Further, we specify the position of the labels and their size (14).

Removing spines: To achieve this, we use a for loop that loops through the position list ["top", "right"], supplies these positions to ax.spines[position] and sets the visibility to False using the set_visible( ) method.

Tick and axis labels: The next step is to alter the tick parameters [using tick_params( )] and the axis labels [using set_xlabel( ) and set_ylabel( )] to make the plot informative and aesthetically pleasing.

Adding a legend: The final step is to customise the legend. Here, using the ax.legend( ) method,
- I have modified the existing labels to "no" and "yes",
- set the legend and title font sizes to 12 and 18 respectively,
- added a legend title called "Smoker",
- positioned the legend using bbox_to_anchor by supplying the x and y positions.
# Add labels
for c in ax.containers:
    labels = [str(round(v.get_height(), 2)) + "%" if v.get_height() > 0 else '' for v in c]
    ax.bar_label(c, label_type='center', labels = labels, size = 14)  # supply the container object "c" as first argument

# Remove spines
for s in ["top", "right"]:
    ax.spines[s].set_visible(False)

# Add tick and axis labels
ax.tick_params(labelsize = 14, labelrotation = 0)
ax.set_ylabel("Percentage", size = 14)
ax.set_xlabel("Sex", size = 14)

# Add legend
ax.legend(labels = ["no", "yes"], fontsize = 12, title = "Smoker", title_fontsize = 18, bbox_to_anchor = [0.55, 0.7])

# Fix legend position
# ax.legend_.set_bbox_to_anchor([0.55, 0.7])

fig

Saving the plot

To save the stacked bar plot, we can use the figure object (fig) and apply the savefig( ) method, where we supply the path and file name (images/stackedbarplot.png) and the resolution (dpi = 300).

# Save figure
fig.savefig("images/stackedbarplot.png", dpi = 300)

Stacked bar plot with a pandas DataFrame [pandas plot( ) method]

Next we will generate the same plot, but using the pandas DataFrame based approach. Let's print the df.

df

Generating the stacked bar plot

The next step is to generate the same stacked bar plot, but now using the pandas DataFrame based plot( ) method. To generate it, we go through the following steps:

Step 1: Instantiate the subplots( ) method with a 12 inch width and 6 inch height and save the figure and axes objects to fig and ax.
Step 2: Use the df.plot( ) method. We need to tell the plot method that the kind of plot is bar, and that it should be a stacked bar plot, thus we enable stacked = True. Then supply the axes object to ax, a bar width of 0.3 and the edge color "black". This generates the basic framework for the stacked bar plot.

fig, ax = plt.subplots(figsize = (12, 6))

# Plot
df.plot(kind = "bar", stacked = True, ax = ax, width = 0.3, edgecolor = "black")

OR

Another approach is to use the df.plot.bar( ) method to generate the above plot.
# OR
fig, ax = plt.subplots(figsize = (12, 6))

# Plot
df.plot.bar(stacked = True, ax = ax, width = 0.3, edgecolor = "black")

Customising the bar plot

The stacked bar plot customisation part is exactly the same.

# Adding bar labels
for c in ax.containers:
    labels = [str(round(v.get_height(), 2)) + "%" if v.get_height() > 0 else '' for v in c]
    ax.bar_label(c, label_type='center', labels = labels, size = 14)  # supply the container object "c" as first argument

# Removing spines
for s in ["top", "right"]:
    ax.spines[s].set_visible(False)

# Adding tick and axes labels
ax.tick_params(labelsize = 14, labelrotation = 0)
ax.set_ylabel("Percentage", size = 14)
ax.set_xlabel("Sex", size = 14)

# Customising the legend
ax.legend(labels = ["no", "yes"], fontsize = 12, title = "Smoker", title_fontsize = 18)

# Fixing the legend position
ax.legend_.set_bbox_to_anchor([0.55, 0.7])

fig

Once you learn base Matplotlib, you can customize the plots in various ways. I hope you now know various ways to generate a stacked bar plot. Apply the learned concepts to your own datasets.

References:
[1] J. D. Hunter, "Matplotlib: A 2D Graphics Environment", Computing in Science & Engineering, vol. 9, no. 3, pp. 90–95, 2007.

I hope you learned something new!
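As a closing aside: the groupby chain from the Data Preparation section can be sanity-checked without the tips CSV. Below is a quick sketch on a tiny hand-made frame; the rows are made-up stand-ins, and only the column names match the article.

```python
import pandas as pd

# Miniature, hypothetical stand-in for the tips dataset
mini = pd.DataFrame({
    "sex":    ["Male", "Male", "Male", "Female", "Female"],
    "smoker": ["No",   "No",   "Yes",  "No",     "Yes"],
})

out = (mini
       .groupby("sex")["smoker"]
       .value_counts(normalize=True)  # per-group proportions
       .mul(100)                      # convert to percentages
       .round(2)
       .unstack())                    # smoker values become columns

print(out)
# Each row sums to 100: Female -> 50.0 / 50.0, Male -> 66.67 / 33.33
```

Because value_counts(normalize=True) is applied per group, each sex row always sums to 100%, which is exactly the property the stacked bars rely on.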
https://onezero.blog/introduction-to-stacked-bar-plot-matplotlib-pandas-and-seaborn-visualization-guide-part-2-2/
FLOCK(2) Linux Programmer's Manual FLOCK(2)

flock - apply or remove an advisory lock on an open file

#include <sys/file.h>

int flock(int fd, int operation);

Apply or remove an advisory lock on the open file specified by fd. The argument operation is one of LOCK_SH (place a shared lock), LOCK_EX (place an exclusive lock) or LOCK_UN (remove an existing lock held by this process). To make a nonblocking request, include LOCK_NB (by ORing) with any of the above operations.

A single file may not simultaneously have both shared and exclusive locks.

Locks created by flock() are associated with an open file description. A process may hold only one type of lock (shared or exclusive) on a file; a subsequent flock() call on an already locked file will convert the existing lock to the new lock mode.

ERRORS
EBADF  fd is not an open file descriptor.

flock() places advisory locks only; given suitable permissions on a file, a process is free to ignore the use of flock() and perform I/O on the file.

flock() and fcntl(2) locks have different semantics with respect to forked processes and dup(2). On systems that implement flock() using fcntl(2), the semantics of flock() will be different from those described in this manual page.

Converting a lock (shared to exclusive, or vice versa) is not guaranteed to be atomic: the existing lock is first removed, and then a new lock is established. Between these two steps, a pending lock request by another process may be granted, with the result that the conversion either blocks, or fails if LOCK_NB was specified. (This is the original BSD behavior, and occurs on many other implementations.)

NFS details
Since Linux 2.6.12, NFS clients support flock() locks by emulating them as fcntl(2) byte-range locks on the entire file. This means that fcntl(2) and flock() locks do interact with one another over NFS. It also means that in order to place an exclusive lock, the file must be opened for writing.

SEE ALSO
flock(1), lockf(3), lslocks(8)

Documentation/filesystems/locks.txt in the Linux kernel source tree (Documentation/locks.txt in older kernels)

This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
Linux 2017-09-15 FLOCK(2) Pages that refer to this page: flock(1), chown(2), fcntl(2), fork(2), getrlimit(2), syscalls(2), dbopen(3), flockfile(3), lockf(3), nfs(5), proc(5), tmpfiles.d(5), signal(7), cryptsetup(8), fsck(8), lslocks(8), vipw(8@@util-linux)
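The advisory-lock semantics described above are easy to observe from a scripting language. Here is a minimal sketch using Python's fcntl.flock, which wraps this system call; the temporary-file path is incidental. Note that two separate open() calls in the same process create two independent open file descriptions, which flock() treats as separate lock owners.

```python
import fcntl
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

# Two independent open file descriptions for the same file.
f1 = open(path, "w")
f2 = open(path, "w")

fcntl.flock(f1, fcntl.LOCK_EX)  # exclusive lock, blocking form

try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)  # nonblocking request
    conflict = False
except BlockingIOError:  # EWOULDBLOCK: the file is already locked
    conflict = True

fcntl.flock(f1, fcntl.LOCK_UN)  # remove the lock...
fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)  # ...now this succeeds

f1.close()
f2.close()
os.remove(path)
print(conflict)  # -> True
```

Because the locks are advisory, nothing stops a third file descriptor from reading or writing the file while the lock is held; only cooperating flock() callers are serialized.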
https://man7.org/linux/man-pages/man2/flock.2.html
Minimum Candy Distribution – Interview Algorithm Problem

Recently, in an online test, I faced this question, i.e. how to minimize the number of candies/toffees to be distributed by a teacher, and found that a lot of similar questions are frequently asked in interviews.

Read More – Possible beautiful arrangements

So the problem statement is: A teacher has some students in class. She has to distribute some candies to these students. Every student has a grade/rank that he/she has acquired over a series of tests. The students are sitting in a line in a random order (which will be provided in the input) and there are a few rules that the teacher has to follow while distributing the candies. The rules are:

- Among any two students who sit adjacent to each other and have different grades, the student with the higher grade must get more candies.
- At least one candy is given to every student.
- If students sitting adjacent to each other have the same grade, then there is no condition on the number of candies they get, i.e. if one gets 5 candies the other can get 15, or even 1, or 5, or any other positive number (but we have to minimize this).

There might be other conditions on the input, but for now we will focus on the logic of how to do this. The approach that I actually used in the test was different from the solution that I will be sharing today, as it was more complicated and required some extra operations, but it worked for me. In brief, what I did was keep track of the last location up to which I need to increase the candies distributed, in case there is a long chain of students sitting in decreasing order of grades.
For example, if there are 7 students and their grades are 2, 3, 4, 4, 3, 2, 1, then the distribution goes like this:

After steps 1, 2, 3 the quantities distributed are – 1, 2, 3, 0, 0, 0, 0 (at this point the 3rd student is marked as the last location up to which candy increments are needed, because if we have to increase the candies of the 3rd student we need not worry about the 2nd student: the higher-candy condition does not break).

After the 4th step – 1, 2, 3, 1, 0, 0, 0 (because the 3rd and 4th students have the same grade and we have to minimize the candies; now the 4th student is marked).

5th step – 1, 2, 3, 2, 1, 0, 0 (increase the count up to the marked position, because we have to give at least one candy to every student and also maintain the condition for students with different grades; the 4th is still marked).

6th step – 1, 2, 3, 3, 2, 1, 0 (the 4th is still marked).

7th step – 1, 2, 3, 4, 3, 2, 1.

So this was the solution that I tried. You can also try it by yourself, but now let's jump to a better solution; it solves the problem in two passes (O(n) time) and is much easier to implement and understand. First we go left to right and set the "next" candy value to either "previous + 1" or "1"; this way we handle the up trends, checking the condition against the previously seated student. Then we go right to left and do the same, this way handling the down trends.

Implementation of the Candy problem

Let's suppose the input format is:

- First line – the number of students (n)
- Next n lines – the grade of each student.
import java.util.Scanner;

public class Solution {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int n = in.nextInt();
        int arr[] = new int[n];
        for (int i = 0; i < n; i++) {
            arr[i] = in.nextInt();
        }

        int candies[] = new int[n];
        candies[0] = 1;

        // First loop for up trends
        for (int i = 1; i < n; i++) {
            if (candies[i] == 0) {
                candies[i] = 1;
            }
            if (arr[i] > arr[i - 1]) {
                candies[i] = candies[i - 1] + 1;
            }
        }

        // Second loop for down trends
        for (int i = n - 1; i > 0; i--) {
            if (arr[i - 1] > arr[i] && candies[i - 1] <= candies[i]) {
                candies[i - 1] = candies[i] + 1;
            }
        }

        // Calculating the sum - This step can be avoided by
        // addition and subtraction in the previous loops, but it
        // is separated out for simplicity.
        long sum = 0l;
        for (int i = 0; i < n; i++) {
            sum += candies[i];
        }
        System.out.println("Minimum number of candies required to distribute are - " + sum);
    }
}

Input:
8
1 2 2 3 4 3 2 1

Output:
Minimum number of candies required to distribute are - 16

So that's all for this tutorial. Hope this helps and you liked it. Do ask any queries in the comment box and provide your valuable feedback. Do come back for more because learning paves the way for a better understanding. Keep Coding!! Happy Coding!! 🙂
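As a footnote, the same two-pass idea is compact enough to sketch in Python for quick experimentation; the function name here is mine, not from the original test.

```python
def min_candies(grades):
    """Two-pass candy distribution: O(n) time, O(n) space."""
    n = len(grades)
    candies = [1] * n  # rule: every student gets at least one candy

    # Left-to-right pass: fix the rising runs.
    for i in range(1, n):
        if grades[i] > grades[i - 1]:
            candies[i] = candies[i - 1] + 1

    # Right-to-left pass: fix the falling runs without breaking the first pass.
    for i in range(n - 2, -1, -1):
        if grades[i] > grades[i + 1]:
            candies[i] = max(candies[i], candies[i + 1] + 1)

    return sum(candies)

print(min_candies([1, 2, 2, 3, 4, 3, 2, 1]))  # -> 16
print(min_candies([2, 3, 4, 4, 3, 2, 1]))     # -> 16 (the worked example above)
```

The max() in the second pass is what replaces the "marking" bookkeeping from the first approach: a bar already tall enough from the rising run is left alone.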
http://www.codingeek.com/practice-examples/interview-programming-problems/minimum-candy-distribution-interview-algorithm-problem/
Coding the player

In this lesson, we'll add player movement, animation, and set it up to detect collisions. To do so, we need to add some functionality that we can't get from a built-in node, so we'll add a script. Click the Player node and click the "Attach Script" button:

In the script settings window, you can leave the default settings alone. Just click "Create":

Note: If you're creating a C# script or another language, select the language from the language drop-down menu before hitting create.

Note: If this is your first time encountering GDScript, please read Scripting languages before continuing.

Start by declaring the member variables this object will need:

extends Area2D

export var speed = 400  # How fast the player will move (pixels/sec).
var screen_size  # Size of the game window.

using Godot;
using System;

public class Player : Area2D
{
    [Export]
    public int Speed = 400; // How fast the player will move (pixels/sec).
    public Vector2 ScreenSize; // Size of the game window.
}

// A `player.gdns` file has already been created for you. Attach it to the Player node.
// Create two files `player.cpp` and `player.hpp` next to `entry.cpp` in `src`.
// This code goes in `player.hpp`. We also define the methods we'll be using here.
#ifndef PLAYER_H
#define PLAYER_H

#include <AnimatedSprite.hpp>
#include <Area2D.hpp>
#include <CollisionShape2D.hpp>
#include <Godot.hpp>
#include <Input.hpp>

class Player : public godot::Area2D {
    GODOT_CLASS(Player, godot::Area2D)

    godot::AnimatedSprite *_animated_sprite;
    godot::CollisionShape2D *_collision_shape;
    godot::Input *_input;
    godot::Vector2 _screen_size; // Size of the game window.

public:
    real_t speed = 400; // How fast the player will move (pixels/sec).

    void _init() {}
    void _ready();
    void _process(const double p_delta);
    void start(const godot::Vector2 p_position);
    void _on_Player_body_entered(godot::Node2D *_body);

    static void _register_methods();
};

#endif // PLAYER_H
The _ready() function is called when a node enters the scene tree, which is a good time to find the size of the game window:

func _ready():
    screen_size = get_viewport_rect().size

public override void _Ready()
{
    ScreenSize = GetViewportRect().Size;
}

// This code goes in `player.cpp`.
#include "player.hpp"

void Player::_ready() {
    _animated_sprite = get_node<godot::AnimatedSprite>("AnimatedSprite");
    _collision_shape = get_node<godot::CollisionShape2D>("CollisionShape2D");
    _input = godot::Input::get_singleton();
    _screen_size = get_viewport_rect().size;
}

For this game, we will map the arrow keys to the four directions. Click on Project -> Project Settings to open the project settings window and click on the Input Map tab at the top. Type "move_right" in the top bar and click the "Add" button to add the move_right action.

We need to assign a key to this action. Click the "+" icon on the right, then click the "Key" option in the drop-down menu. A dialog asks you to type in the desired key. Press the right arrow on your keyboard and click "Ok". Repeat these steps to add three more mappings:

- move_left mapped to the left arrow key.
- move_up mapped to the up arrow key.
- And move_down mapped to the down arrow key.

Your input map tab should look like this:

Click the "Close" button to close the project settings.

Note: We only mapped one key to each input action, but you can map multiple keys, joystick buttons, or mouse buttons to the same input action.

You can detect whether a key is pressed using Input.is_action_pressed(), which returns true if it's pressed or false if it isn't.

func _process(delta):
    var velocity = Vector2.ZERO  # The player's movement vector.
    if Input.is_action_pressed("move_right"):
        velocity.x += 1
    if Input.is_action_pressed("move_left"):
        velocity.x -= 1
    if Input.is_action_pressed("move_down"):
        velocity.y += 1
    if Input.is_action_pressed("move_up"):
        velocity.y -= 1
    if velocity.length() > 0:
        velocity = velocity.normalized() * speed
        $AnimatedSprite.play()
    else:
        $AnimatedSprite.stop()

public override void _Process(float delta)
{
    var velocity = Vector2.Zero; // The player's movement vector.
    if (Input.IsActionPressed("move_right"))
    {
        velocity.x += 1;
    }
    if (Input.IsActionPressed("move_left"))
    {
        velocity.x -= 1;
    }
    if (Input.IsActionPressed("move_down"))
    {
        velocity.y += 1;
    }
    if (Input.IsActionPressed("move_up"))
    {
        velocity.y -= 1;
    }

    var animatedSprite = GetNode<AnimatedSprite>("AnimatedSprite");
    if (velocity.Length() > 0)
    {
        velocity = velocity.Normalized() * Speed;
        animatedSprite.Play();
    }
    else
    {
        animatedSprite.Stop();
    }
}

// This code goes in `player.cpp`.
void Player::_process(const double p_delta) {
    godot::Vector2 velocity(0, 0);
    velocity.x = _input->get_action_strength("move_right") - _input->get_action_strength("move_left");
    velocity.y = _input->get_action_strength("move_down") - _input->get_action_strength("move_up");

    if (velocity.length() > 0) {
        velocity = velocity.normalized() * speed;
        _animated_sprite->play();
    } else {
        _animated_sprite->stop();
    }
}

If the player presses, say, right and down at the same time, the resulting velocity vector has a length greater than 1, so the player would move faster diagonally than if it just moved horizontally. We can prevent that if we normalize the velocity, which means we set its length to 1, then multiply by the desired speed. This means no more fast diagonal movement.

Tip: We call play() or stop() on the AnimatedSprite depending on whether the player is moving.

Tip: $ is shorthand for get_node(). So in the code above, $AnimatedSprite.play() is the same as get_node("AnimatedSprite").play(). In GDScript, $ returns the node at the relative path from the current node, or returns null if the node is not found. Since AnimatedSprite is a child of the current node, we can use $AnimatedSprite.

Now that we have a movement direction, we can update the player's position.
We can also use clamp() to prevent it from leaving the screen. Clamping a value means restricting it to a given range. Add the following to the bottom of the _process function (make sure it's not indented under the else):

position += velocity * delta
position.x = clamp(position.x, 0, screen_size.x)
position.y = clamp(position.y, 0, screen_size.y)

Position += velocity * delta;
Position = new Vector2(
    x: Mathf.Clamp(Position.x, 0, ScreenSize.x),
    y: Mathf.Clamp(Position.y, 0, ScreenSize.y)
);

godot::Vector2 position = get_position();
position += velocity * (real_t)p_delta;
position.x = godot::Math::clamp(position.x, (real_t)0.0, _screen_size.x);
position.y = godot::Math::clamp(position.y, (real_t)0.0, _screen_size.y);
set_position(position);

Tip: The delta parameter in the _process() function refers to the frame length - the amount of time that the previous frame took to complete. Using this value ensures that your movement will remain consistent even if the frame rate changes.

Click "Play Scene" (F6, Cmd + R on macOS) and confirm you can move the player around the screen in all directions.

Warning: If you get an error in the "Debugger" panel that says Attempt to call function 'play' in base 'null instance' on a null instance, this likely means you spelled the name of the AnimatedSprite node wrong. Node names are case-sensitive and $NodeName must match the name you see in the scene tree.

Choosing animations

Now that the player can move, we need to change which animation the AnimatedSprite is playing based on its direction. We have the "walk" animation, which shows the player walking to the right. This animation should be flipped horizontally using the flip_h property for left movement. We also have the "up" animation, which should be flipped vertically with flip_v for downward movement.
Let's place this code at the end of the _process() function:

if velocity.x != 0:
    $AnimatedSprite.animation = "walk"
    $AnimatedSprite.flip_v = false
    # See the note below about boolean assignment.
    $AnimatedSprite.flip_h = velocity.x < 0
elif velocity.y != 0:
    $AnimatedSprite.animation = "up"
    $AnimatedSprite.flip_v = velocity.y > 0

if (velocity.x != 0)
{
    animatedSprite.Animation = "walk";
    animatedSprite.FlipV = false;
    // See the note below about boolean assignment.
    animatedSprite.FlipH = velocity.x < 0;
}
else if (velocity.y != 0)
{
    animatedSprite.Animation = "up";
    animatedSprite.FlipV = velocity.y > 0;
}

if (velocity.x != 0) {
    _animated_sprite->set_animation("right");
    _animated_sprite->set_flip_v(false);
    // See the note below about boolean assignment.
    _animated_sprite->set_flip_h(velocity.x < 0);
} else if (velocity.y != 0) {
    _animated_sprite->set_animation("up");
    _animated_sprite->set_flip_v(velocity.y > 0);
}

Note: The boolean assignments in the code above are a common shorthand for programmers. Since we're doing a comparison test (boolean) and also assigning a boolean value, we can do both at the same time. Consider this code versus the equivalent one-line boolean assignment above.

Tip: A common mistake here is to type the names of the animations wrong. The animation names in the SpriteFrames panel must match what you type in the code. If you named the animation "Walk", you must also use a capital "W" in the code.

When you're sure the movement is working correctly, add this line to _ready(), so the player will be hidden when the game starts:

hide()

// This code goes in `player.cpp`.
// We need to register the signal here, and while we're here, we can also
// register the other methods and register the speed property.
void Player::_register_methods() {
    godot::register_method("_ready", &Player::_ready);
    godot::register_method("_process", &Player::_process);
    godot::register_method("start", &Player::start);
    godot::register_method("_on_Player_body_entered", &Player::_on_Player_body_entered);
    godot::register_property("speed", &Player::speed, (real_t)400.0);
    // This below line is the signal.
    godot::register_signal<Player>("hit", godot::Dictionary());
}

Notice our custom "hit" signal is there as well! Since our enemies are going to be RigidBody2D nodes, we want the body_entered(body: Node) signal. This signal will be emitted when a body contacts the player. Click "Connect..." and the "Connect a Signal" window appears. We don't need to change any of these settings, so click "Connect" again. Godot will automatically create a function in your player's script. Note the green icon indicating that a signal is connected to this function. Add this code to the function:

func _on_Player_body_entered(body):
    hide()  # Player disappears after being hit.
    emit_signal("hit")
    # Must be deferred as we can't change physics properties on a physics callback.
    $CollisionShape2D.set_deferred("disabled", true)

public void OnPlayerBodyEntered(PhysicsBody2D body)
{
    Hide(); // Player disappears after being hit.
    EmitSignal(nameof(Hit));
    // Must be deferred as we can't change physics properties on a physics callback.
    GetNode<CollisionShape2D>("CollisionShape2D").SetDeferred("disabled", true);
}

// This code goes in `player.cpp`.
void Player::_on_Player_body_entered(godot::Node2D *_body) {
    hide(); // Player disappears after being hit.
    emit_signal("hit");
    // Must be deferred as we can't change physics properties on a physics callback.
    _collision_shape->set_deferred("disabled", true);
}

Each time an enemy hits the player, the signal is going to be emitted. We need to disable the player's collision so that we don't trigger the hit signal more than once.
Note: Disabling the area's collision shape can cause an error if it happens in the middle of the engine's collision processing. Using set_deferred() tells Godot to wait to disable the shape until it's safe to do so.

The last piece is a start() function we'll call to reset the player when starting a new game:

func start(pos):
    position = pos
    show()
    $CollisionShape2D.disabled = false

// This code goes in `player.cpp`.
void Player::start(const godot::Vector2 p_position) {
    set_position(p_position);
    show();
    _collision_shape->set_disabled(false);
}

With the player working, we'll work on the enemy in the next lesson.
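As an aside, the normalize-then-clamp arithmetic in _process() is engine-independent, so it can be checked outside Godot. Here is a plain-Python sketch (not Godot code); the window size, key names, and numbers are hypothetical stand-ins for the input actions and viewport.

```python
import math

def step(pos, pressed, speed=400.0, delta=1 / 60, screen=(1024.0, 600.0)):
    """One frame of movement: build a direction, normalize, scale, clamp."""
    vx = ("right" in pressed) - ("left" in pressed)
    vy = ("down" in pressed) - ("up" in pressed)
    length = math.hypot(vx, vy)
    if length > 0:
        # Normalizing keeps diagonal speed equal to horizontal speed.
        vx, vy = vx / length * speed, vy / length * speed
    # clamp() equivalent: keep the player inside the window.
    x = min(max(pos[0] + vx * delta, 0.0), screen[0])
    y = min(max(pos[1] + vy * delta, 0.0), screen[1])
    return (x, y)

straight = step((100.0, 100.0), {"right"})
diagonal = step((100.0, 100.0), {"right", "down"})
print(straight, diagonal)
```

The distance covered per frame is the same in both cases, which is exactly what the normalized() call in the tutorial guarantees; without it, the diagonal move would be about 1.41 times faster.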
https://docs.godotengine.org/en/3.5/getting_started/first_2d_game/03.coding_the_player.html
Following another question (Rails controller - execute action only if the two methods inside succeed (mutually dependent methods)), I would like to ensure that inside one of my controller's actions, if the user does not see the message displayed by a Rails UJS method, then the first methods of the controller action are not executed either.

CONTEXT

I have a controller with an action called 'example_action'. When the app goes inside 'example_action', it runs a first method (1) update_user_table and then (2) another, update_userdeal_table (both will read and write the database), and then (3) a third method which is related to a Rails UJS (Ajax) call.

My issue is the following: in case of a timeout in the middle of the controller action, I want to avoid the case where the User table is updated (via method 1) and the UserDeal table is updated (via method 2) but NOT the third method, i.e. the Ajax request that displays a message FAILS (error, timeout... status like 500 or 404 or canceled or timeout...).

In my app, mobile users might be in a subway with an internet connection. They launch the request that goes through the 'example_action' controller, which successfully performs the first method (1) and second method (2), but then they enter a tunnel for 60 seconds with very, very low (<5b/sec) or no internet connection, so for UX reasons I time out the request and display to the user 'sorry it took too long, try again'. The problem is that if I could not show them the result (3), I need to be able to not execute (1) and (2). I need the three methods (1), (2) and (3) to be "mutually dependent": if one does not succeed, the others should not be performed. It's the best way I can describe it.

Here is my code. It's not working: I tested manually by clicking and then, after just 2 seconds, disconnecting the internet connection. I see in my database that (1) and (2) were performed and the databases were updated, but I still saw the message 'sorry it took too long, try again'.
Is that the right approach? If yes, how do I do this? If not, should I try a different angle: if (1) and (2) were successful but not (3), should I store the fact that the Rails UJS xhr status was an error or timeout (and that consequently the modal was not effectively displayed to the user), and then show them the result/message once they get back online?

Here is the code. On the HTML page, the user clicks on a button that triggers a Rails UJS Ajax request that will ultimately display the modal message:

<div id="zone">
  <%= link_to image_tag(smallest_src_request), deal_modal_path, remote: true %>
</div>

class DealsController < ApplicationController
  def deal_modal
    Deal.transaction do
      update_user_table     # that's the (1)
      update_userdeal_table # that's the (2)
      # show_modal_message
      respond_to do |format|
        format.js
      end
    end
  end

  private

  def update_user_table
    # updates the User table, so it needs to connect to the internet and access the remote User table
  end

  def update_userdeal_table
    # updates the UserDeal table, so it needs to connect to the internet and access the remote UserDeal table
  end
end

showModalMessage("Here is your result <variable and all>");

$(document).on('page:change', function () {
  $("#zone").on('ajax:error', function (event, xhr, status, error) {
    console.log('ajax call failed:', error);
    var msg;
    msg = Messenger().post({
      hideAfter: 4,
      message: "sorry it took too long, try again."
    });
  });
});

$(document).on('page:change', function () {
  // set the timeout Rails UJS ajax option that will display the message for the ajax:error cases defined above
  $.rails.ajax = function (options) {
    if (!options.timeout) {
      options.timeout = 5000;
    }
    return $.ajax(options);
  };
});

So the transaction will only roll back if an error is thrown. If an unhandled error is thrown, your application will crash and it will show a 500 error in some way. In order to display the response to the user, on success or error, you will need to render something. So you don't want to prevent the respond_to block from executing.
One way to handle this would be to set a flag via an instance variable.

def deal_modal
  begin
    Deal.transaction do
      update_user_table
      update_userdeal_table
    end
    @success = true
  rescue
    @success = false
  end
  # show_modal_message
  respond_to do |format|
    format.js
  end
end

Then in deal_modal.js.erb:

<% if @success %>
  showModalMessage("Here is your result <variable and all>");
<% else %>
  showModalMessage("There was a problem");
<% end %>

EDIT: Dealing with connection issues is definitely tricky and there isn't really one ideal solution. I would generally let the database continue uninterrupted and let it return either a success or failure on its own time. For lengthy transactions, you can use a gem like delayed_job or sidekiq to process the action in the background and let the Rails controller return a response saying "...pending..." or something. Unless you're using websockets on the frontend, this means continually polling the server with Ajax requests to see if the background process is complete.
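To see why the flag approach works, here is a minimal, framework-free Ruby sketch of the same pattern. The with_transaction helper and the fail_second_step switch are made-up stand-ins for ActiveRecord's Deal.transaction and a real mid-request failure:

```ruby
# Plain-Ruby sketch of the @success flag pattern (no Rails required).
def with_transaction
  # A real ActiveRecord transaction would roll back the DB writes
  # if the block raises; here we just propagate the exception.
  yield
end

def deal_modal(fail_second_step: false)
  success = nil
  begin
    with_transaction do
      # (1) pretend to update the User table
      # (2) pretend to update the UserDeal table
      raise "connection lost" if fail_second_step
    end
    success = true
  rescue
    success = false
  end
  success # the js.erb template branches on this flag
end

puts deal_modal                          # true
puts deal_modal(fail_second_step: true)  # false
```

Either way the method returns normally, so the response (here, the return value; in Rails, the respond_to block) still runs and the user always gets something back.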
https://codedump.io/share/aZEEia6chJmv/1/rails-controller---execute-action-only-if-the-a-rails-ujs-method-inside-succeed-mutually-dependent-methods
Microsoft Velocity exposes a unified, distributed memory cache for client application consumption. We show you how to add Velocity to your data-driven apps. (Aaron Dunnington, MSDN Magazine, June 2009)

Use Test-Driven Development with mock objects to design object-oriented code in terms of roles and responsibilities, not categorization of objects into class hierarchies. (Isaiah Perumalla, MSDN Magazine, May 2009)

.NET RIA Services provides a set of server components and ASP.NET extensions such as authentication, roles, and profile management. We'll show you how they work. (Jonathan Carter)

Joel Pobar presents an introduction to how compilers work and how you can write your own compiler to target the .NET Framework. (Joel Pobar, MSDN Magazine, February 2008)

MSDN Magazine July 2005

See how routed events and routed commands in Windows Presentation Foundation form the basis for communication between the parts of your UI. (Brian Noyes, MSDN Magazine, September 2008)

Chris Tavares explains how the ASP.NET MVC Framework's Model View Controller pattern helps you build flexible, easily tested Web applications. (Chris Tavares, MSDN Magazine, March 2008)

We introduce you to some of the concepts behind the new F# language, which combines elements of functional and object-oriented .NET languages. We then help you get started writing some simple programs. (Ted Neward, MSDN Magazine, Launch 2008)

var data = [1, 2, 3, 4, 5];

function map(func, array) {
    var returnData = new Array(array.length);
    for (var i = 0; i < array.length; i++) {
        returnData[i] = func(array[i]);
    }
    return returnData;
}

function increment(element) {
    return element + 1; // element++ would return the value before incrementing
}

print(map(increment, data));
// output: [2,3,4,5,6]

// increment function
let increment x = x + 1

// data
let data = [1 .. 5]

// map (func, myList)
let map func myList = { for x in myList -> func x }

print_any (map increment data)
print_any { for x in 1..5 -> x + 1 }

let booleanToString x =
    match x with
    | false -> "False"
    | _ -> "True"

type player = {
    firstName : string;
    lastName : string;
}

type soccerTeam = {
    name : string;
    members : player list;
    location : string;
}

let lazyTwoTimesTwo = lazy (2 * 2)
let actualValue = Lazy.force lazyTwoTimesTwo

class Duck
  def Quack()
    puts "Quack!"
  end
end

class Cow
  def Quack()
    puts "Cow's don't quack, they Mooo!"
  end
end

def QuackQuack(duck)
  duck.Quack()
end

animal = Duck.new()
QuackQuack(animal)

animal = Cow.new()
QuackQuack(animal)

Quack!
Cow's don't quack, they Mooo!

List<Customer> customers = new List<Customer>();
// ... add some customers
var q = from c in customers
        where c.Country == "Australia"
        select c;

Dim rss = From cust In customers _
          Where cust.Country = "USA" _
          Select <item>
                     <title><%= cust.Name %></title>
                     <link><%= cust.WebSite %></link>
                     <pubDate><%= cust.BirthDate %></pubDate>
                 </item>

Dim rssFeed = <?xml version="1.0" encoding="utf-8"?>
              <rss version="2.0"><channel></channel></rss>

rssFeed...<channel>(0).ReplaceAll(rss)
rssFeed.Save("customers.xml")

ldloc.1
ldstr "channel"
ldstr ""
call class [System.Xml.Linq]System.Xml.Linq.XName [System.Xml.Linq]System.Xml.Linq.XName::Get(string, string)
callvirt instance class Generic.IEnumerable'1<XElement> XContainer::Descendants(System.Xml.Linq.XName)
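Circling back to the JavaScript map example at the top of this section: modern JavaScript ships the same higher-order operation built in as Array.prototype.map, so the hand-rolled loop can be replaced entirely:

```javascript
// Built-in equivalent of the hand-rolled map(func, array) above.
const data = [1, 2, 3, 4, 5];

const increment = (element) => element + 1;

// Array.prototype.map calls the function once per element and
// returns a new array; `data` itself is left untouched.
const result = data.map(increment);

console.log(result); // [ 2, 3, 4, 5, 6 ]
```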
http://msdn.microsoft.com/en-us/magazine/cc507636.aspx
public class ArrayReading {
    public static void main(String[] args) {
        String[][] Data = null;
        System.out.print("Lastname\tFirstname\tLocation\n");
        for (int i = 0; i < 2; i++) {
            for (int j = 0; j < 3; j++) {
                System.out.print(Data[j] + "\t");
            }
            // move to new line
            System.out.print("\n");
        }
    }
}

When I try to run it I am getting a null pointer exception.

See Java Tutorials: Arrays

Please take a look at the Java Trails for Arrays. Specifically, the "Creating, Initializing, and Accessing an Array" section. Hope that helps.

To what (code)?

>> still getting null pointer exception. please advise

It's always a good idea to tell us where in the code you get that null pointer. The more info we get from you, the better we'll be able to help.
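The NullPointerException is thrown because the loops dereference Data while it is still null. A corrected sketch, with made-up sample rows and the inner access changed to data[i][j], initializes the array before reading it:

```java
public class ArrayReadingFixed {
    public static void main(String[] args) {
        // Initialize the 2-D array instead of leaving it null.
        // The rows below are made-up sample data for illustration.
        String[][] data = {
            { "Smith", "John", "NY" },
            { "Jones", "Mary", "MA" }
        };
        System.out.println("Lastname\tFirstname\tLocation");
        for (int i = 0; i < data.length; i++) {          // rows
            for (int j = 0; j < data[i].length; j++) {   // columns
                System.out.print(data[i][j] + "\t");
            }
            System.out.println(); // move to a new line after each row
        }
    }
}
```

Using data.length rather than the hard-coded 2 and 3 also keeps the loops in step with the array's real dimensions.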
https://www.experts-exchange.com/questions/27861349/null-pointer-exception-while-reading-array.html
Python Script help, stop execution

- skimmer333 last edited by

Hi all, I've just started learning Python and programming. I have written up a simple script which prompts for three separate bits of info, and then uses that to create a folder and saves the document/tab to that folder. So far it works ok. What I am now trying to achieve is to stop the script if the Cancel button is selected on a prompt dialog box. My code is as follows. I'm testing just using the first prompt, hence the console.write. I just can't figure out what I need to do. (Note, I don't want to close Npp, just stop the script from continuing.) Any assistance would be greatly appreciated.

from Npp import editor, notepad, console
import os

vPath = 'H:\\INCs'
vNum = notepad.prompt('INC : ', 'Enter INC#', '')
if vNum == None:
    console.write('None')
vFI = notepad.prompt('FI: ', 'Enter FI', '')
vDesc = notepad.prompt('Desc: ', 'Enter Description', '')
vBuild = str(vPath) + '\\' + str(vNum) + '-' + str(vFI) + ' ' + str(vDesc)
vBuildAll = vBuild + '\\' + str(vNum) + '.txt'
if not os.path.exists(vBuild):
    os.makedirs(vBuild)
notepad.saveAs(vBuildAll)

- Ekopalypse last edited by

You make a function and return from it.

def main(directory):
    vNum = notepad.prompt('INC : ', 'Enter INC#', '')
    if vNum is None:
        console.write('None')
        return

main(r'H:\INCs')

- skimmer333 last edited by

Thanks heaps. Makes sense to me now.
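The same is-None guard can protect every prompt in the script. Below is a minimal, editor-free sketch of that pattern; the prompt function is a stub standing in for notepad.prompt (which returns None when Cancel is pressed), so the sketch runs outside Notepad++:

```python
import os

# Canned answers simulating the user: they fill in the INC#,
# then press Cancel on the 'FI' dialog.
CANNED = {'INC : ': '12345', 'FI: ': None}

def prompt(label, title, default=''):
    # Stub for notepad.prompt(); returns None to simulate Cancel.
    return CANNED.get(label, default)

def build_folder(base_path):
    v_num = prompt('INC : ', 'Enter INC#')
    if v_num is None:
        return None  # Cancel pressed -> stop the whole routine
    v_fi = prompt('FI: ', 'Enter FI')
    if v_fi is None:
        return None  # Cancel pressed on the second dialog
    return os.path.join(base_path, '%s-%s' % (v_num, v_fi))

print(build_folder('H:/INCs'))  # None, because the 'FI: ' stub cancels
```

In the real script, the body of build_folder would go on to os.makedirs and notepad.saveAs, exactly as in the original.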
https://community.notepad-plus-plus.org/topic/23308/python-script-help-stop-execution/1?lang=en-US
Ruby On Rails Interview Questions and Answers

Ques 6. What is a class?
Ans. You should easily be able to explain not only what a class is, but how and when you would create a new one, as well as what functionality it would provide in the larger context of your program.

Ques 7. What is the difference between a class and a module?
Ans. The straightforward answer: a module cannot be subclassed or instantiated, and modules can implement mixins.

Ques 8. What is an object?
Ans.

Ques 9. How would you declare and use a constructor in Ruby?
Ans. Constructors are declared via the initialize method and get called when you call on a new object to be created. Using the code snippet below, calling Order.new acts as a constructor for an object of the class Order.

class Order
  def initialize(customer, meal, beverage)
    @customer = customer
    @meal = meal
    @beverage = beverage
  end
end

Ques 10. How does a symbol differ from a string?
Ans. Symbols are immutable and reusable, retaining the same object_id.
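A quick irb-style check makes the answer to Ques 10 concrete. Under default Ruby settings (no frozen_string_literal magic comment), every occurrence of a symbol literal is the same frozen object, while each string literal builds a fresh, mutable one:

```ruby
# Symbols: one shared, immutable object per name.
a = :status
b = :status
puts a.object_id == b.object_id   # true  -- same object every time
puts a.frozen?                    # true  -- symbols are always frozen

# Strings: a new object per literal (under default settings).
s1 = "status"
s2 = "status"
puts s1.object_id == s2.object_id # false -- two distinct objects
s1 << "!"                         # strings are mutable
puts s1                           # status!
```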
http://www.withoutbook.com/Technology.php?tech=59&page=2&subject=Ruby%20On%20Rails%20Interview%20Questions%20and%20Answers
I am trying to display a few sensor readings on a display screen using Tkinter. I am really new to this application and I'm struggling. So, my query is: what should be inside the GetTemp definition to read the sensor data and replace the old data? Any help regarding this issue would be very helpful. Thank you for your time.

import tkinter as tk

class Mainframe(tk.Frame):
    # Mainframe contains the widgets
    # More advanced programs may have multiple frames
    # or possibly a grid of subframes
    def __init__(self, master, *args, **kwargs):
        # *args packs positional arguments into tuple args
        # **kwargs packs keyword arguments into dict kwargs
        # initialise base class
        tk.Frame.__init__(self, master, *args, **kwargs)
        # in this case the * and ** operators unpack the parameters

        # put your widgets here
        self.Temperature = tk.IntVar()
        tk.Label(self, textvariable=self.Temperature).pack()
        self.TimerInterval = 500
        # variable for dummy GetTemp
        self.Temp = 0
        # call GetTemp which will call itself after a delay
        self.GetTemp()

    def GetTemp(self):
        ## replace this with code to read sensor
        self.Temperature.set(self.Temp)
        self.Temp += 1
        # Now repeat call
        self.after(self.TimerInterval, self.GetTemp)

class App(tk.Tk):
    def __init__(self):
        tk.Tk.__init__(self)
        # set the title bar text
        self.title('CPU Temperature Demo')
        # Make sure app window is big enough to show title
        self.geometry('300x100')
        # create and pack a Mainframe window
        Mainframe(self).pack()
        # now start
        self.mainloop()

# create an App object
# it will run itself
App()
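Since this question comes from a Raspberry Pi forum, one common way to fill in GetTemp is to read the CPU temperature the kernel exposes (in millidegrees) at /sys/class/thermal/thermal_zone0/temp. That path and the fallback value are assumptions; adapt them to whatever sensor you actually have:

```python
SENSOR_PATH = '/sys/class/thermal/thermal_zone0/temp'  # assumed Pi CPU sensor

def parse_millidegrees(raw):
    # The kernel reports e.g. "47000\n", meaning 47.000 degrees C.
    return int(raw.strip()) / 1000.0

def read_cpu_temp(path=SENSOR_PATH):
    try:
        with open(path) as f:
            return parse_millidegrees(f.read())
    except OSError:
        return 0.0  # sensor file missing (e.g. not running on a Pi)

# Inside Mainframe, GetTemp would then become something like:
#
#     def GetTemp(self):
#         self.Temperature.set(read_cpu_temp())
#         self.after(self.TimerInterval, self.GetTemp)
```

Note that tk.IntVar is meant for integers; tk.DoubleVar is the better fit for a float reading like this.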
https://www.raspberrypi.org/forums/viewtopic.php?f=32&t=214252&sid=2050c6a28b260a0333a4932e9805b02c
It wasn't too long ago when Marvel comic-book adaptations were a joke of the industry; with dismal flops such as the Dolph Lundgren version of The Punisher, the 1991 version of Captain America (not to mention the never-released Roger Corman version of The Fantastic Four) and the years-long struggle to sort out the mess of the rights to Spider-Man, they wasted a lot of years and let rival DC Comics corner the market with its superheroes. After turning a corner with 1998's Blade, Marvel was able to take a little more control over its own projects and broke out big time with X-Men, opening that movie to $54 million in 2000. Of course, the real breakthrough was last summer's Spider-Man, which destroyed all expectations, spinning a web around theater patrons to the tune of $114 million opening weekend. Clearly, these characters have become important cultural touchstones that when done well, can be tremendously popular. Even the much less popular and more mediocre Daredevil was able to open to over $40 million and earn over $100 million domestic. The Hulk is actually one of Marvel's few pre-1990s successes in alternative media, with the very popular The Incredible Hulk TV series in the late '70s and early '80s. The Bill Bixby/Lou Ferrigno-starring series is still fondly remembered if fairly campy by today's standards, but it represents a character that people love. Like most of the enduring characters, Hulk is far more than a one-note caricature. In the way that Spider-Man is about maturation and responsibility, Hulk is about rage and the seduction of power. Naturally, you need a top-flight director to bring these issues to life, which has certainly been done here in the person of Ang Lee (hopefully he's fared a little better than the other Oscar-nominated director to release a summer blockbuster this month, John Singleton of 2 Fast 2 Furious).
His choosing this assignment was a bit of a head-scratcher as a follow-up to Crouching Tiger, Hidden Dragon but it can only be good for the film to have a director willing and able to explore the more complex issues of what otherwise could be a flat exercise. Again following the Spider-Man model, the cast is comprised of up-and-coming leads and very well respected character actors in the supporting roles. Playing The Hulk's human alter-ego Bruce Banner is Eric Bana, an Australian import best known either for a role in Black Hawk Down or the lesser-seen but haunting prison bio Chopper. Oscar-winner Jennifer Connelly plays the love interest and gives foundation to the film. Sam Elliott and Nick Nolte round out the major portion of the cast. Let's be honest, though, we're mostly interested in the Not-So-Jolly Green Giant. Despite his pleas, we really do like him when he's angry. This brings us to the much-debated CGI Hulk. Fans were sent into a near panic in January after a Super Bowl commercial that some felt was far from convincing. This seems to be more a product of an FX savvy audience and a hypercritical target market than anything else. Reviews have pegged the CGI work as some of the best ever completed (let's face it -- we're talking about a giant green monster. It's not supposed to look realistic) and there has probably been some improvement over the last five months. Most of the grumbling has dwindled out, with anticipation rising once again. With a combination of impressive action and effective drama, Hulk could be a tremendous audience pleaser. First place at the box office is not at all in question, but just how big can Hulk get? The current June opening weekend record of $54 million held by Austin Powers 2: The Spy Who Shagged Me is about to become a memory. Even the inflation-adjusted June record of $70 million held by Batman Forever looks as though it is in jeopardy. 
The best recent comparisons for Hulk are, of course, the past few Marvel adaptations, with Spider-Man and X2 being in the same strata. While perhaps not on the same level as the webslinger, the singular focus of The Hulk gives it a slight edge over the splintered storylines of the X-Men, allowing a little better identification for audiences. Hulk should pull a massive $88 million for Universal this weekend, leaving Finding Nemo and the rest of the top ten seeing green. This will also make Universal's third straight $50 million opener, a tremendous feat considering the upcoming release of American Wedding. In a vain attempt at counter programming, Warner Bros launches Alex and Emma, starring current It Girl Kate Hudson and the less exciting Wilson brother, Luke. Loosely (very loosely) based on Dostoyevsky's The Gambler, it's really another Rob Reiner schlock-fest (how far Reiner has fallen -- with two of the greatest comedies ever under his belt, This Is Spinal Tap and The Princess Bride, you'd expect him to be able to recapture some of that magic in the last 16 years). Wilson is a writer with a gambling problem who has literally put his life on the line for his work; unless he finishes his latest novel in 30 days, he'll be killed by Cuban mobsters. Me, I'd just get a more lenient editor but he decides to hire himself a typist to help him out, in the form of Hudson, who starts to imagine both of them as the characters in the romantic novel he's writing. Flashbacks and hijinks ensue. Clearly, the gambling-addicted novelist who makes bad decisions is really the one for her. Really, Alex and Emma is far too convoluted for a simple romantic comedy and even though Kate Hudson is coming off the spring success of How to Lose a Guy in 10 Days, this movie is a step backwards for her. The commercials aren't especially funny or moving (or even in rotation) and Luke Wilson, though likeable, is no special draw. 
It's basically been lost in the Hulk Hype, and WB hasn't really pushed the issue either. When Harry Met Sally 2 this ain't. Look for a middling $13 million on the weekend. The Razzies can pretty much close their ballots after this weekend, as it'll be tough to top the third new film of this week for sheer hackwork and hubris. From Justin to Kelly, "starring" the winner and runner-up of the first season of American Karaoke... I mean American Idol, is a transparent and crass attempt to squeeze one more dollar out of a tired and overexposed entertainment regime. This ploy wouldn't even be such a bad thing if they actually managed to make something worth watching, but the end result looks like they put about five minutes total into the concept, plotting and writing of the film. Totally stealing Frankie Avalon and Annette Funicello's bit, From Justin to Kelly strands the two on a beach resort doing what all young people do on vacation; perform elaborately choreographed pop song and dance routines and find themselves in vaudeville-like pratfall comedy set-ups. Even the lamest "Must See TV" NBC sitcoms don't stoop this low for entertainment. Time and time again it has been shown that just because audiences are interested in people in one setting, it doesn't mean they'll follow to all others (hence the Friends "curse"). Fox has severely misjudged the American Idol fans' loyalty when the buzz was really all about the show's format. At the very least, winner Kelly Clarkson's album has gone platinum, but runner-up Justin Guarini's album sold a dismal 57,000 copies in its first week, showing that there's only so far in-your-face marketing can take you. To the studio's credit, Fox realized relatively early in the game that it had a loser on its hands and cancelled nearly all advertising for the film, not producing a theatrical trailer and cutting out all preview and press screenings completely.
I'm sure the suits involved would like to pretend it never existed, though they don't deserve to get let off the hook that easy. The laughing stock of the summer, the only audience left now for this is the hopelessly addicted (clearly not many), the pre-teen crush crowd and the rubberneckers looking for an ironic experience. With a 2,001 venue release that smacks of contractual obligation, it should probably be enough for a merciful $5 million weekend and hundreds of jokes for next year's Oscar host. Still running very strong among returning films is Finding Nemo, which recaptured first place last weekend though it is ceding it to The Hulk this frame. This fishy adventure is now the third best earning Pixar film ever, having passed Toy Story mid-week on its way well past $200 million. The $255 million of Monsters, Inc. may very well fall within the next few weeks at this rate; this accomplishment has to be seen as at least a mild upset. With a much less star- studded cast and the untested May opening, there was much that could go wrong that could have left Pixar with a failure. Instead, their winning streak continued and improved. Perhaps they really can do no wrong. With a ridiculously steep drop off in its second weekend, 2 Fast 2 Furious is quickly becoming one of the shortest-lived sequels ever. Falling a full 63% from opening weekend, 3 Fast 3 Furiouser seems to be ruled out of the picture. With a glut of blockbusters on the horizon, those 3,100 plus screens are going to be easily tempted to dump this lame-brained glorified car commercial. Look for only about $7 million this weekend, an 85% drop in just two weeks. Competing for biggest drop-off with that film will be Dumb and Dumberer, which managed to lose steam even through the weekend. This pointless sequel will be best served by being melted down for its silver content but a quick exit from theaters is a good compromise. 
The other two new films released last weekend, Rugrats Go Wild and Hollywood Homicide, will fare better but neither has shown above average staying power. This trio will end up in the $4 to $7.5 million range. It's a banner weekend for comic-book fans, who finally get to see one of their heroes come to life. Their collective breaths are held, though, waiting in judgment as to how well the adaptation has been done. It's the biggest game in town by far with high expectations to be met. That just means the potential payoff is even larger.
http://www.boxofficeprophets.com/sulewski/forecast062003.asp
Dynamic keyword in C#

Hi, in this post I want to write about the dynamic keyword and how we can use it. Let me explain my problem. For the past few months I have been working on a project called ".NET conversion". We have some of our code in a VB component, and as Microsoft withdrew its support for VB, we decided to move to .NET. While converting the code we had a number of challenges, but we succeeded in the process. There are a few features in VB code that are not supported directly by .NET, though we can achieve them using some techniques. Let me discuss a few of them here.

Use of the Variant keyword in Visual Basic 6.0

Let us take a simple example of VB code that makes use of the Variant keyword:

Private Sub Command_Click()
    Dim arrNames As Variant
    arrNames = ReturnNames(textBox.Text)
End Sub

Private Function ReturnNames(ByVal name As String)
    Dim arrayOfNames As Variant
    Dim arrStates() As String
    Dim states As String
    states = "Alaska,Indiana,NY,MA"
    arrStates = Split(states, ",")
    If name = "US" Then
        arrayOfNames = arrStates 'Here we are assigning an array to the variant
    Else
        arrayOfNames = name 'Here we are assigning a string to the variant
    End If
    ReturnNames = arrayOfNames
End Function

Let me explain what I am trying to do here. There are two functions in the above VB code:
1. Command_Click()
2. ReturnNames

There is a VB form that contains a text box and a button. Command_Click() is called on the button click in the VB form. It reads the value entered in the text box of the VB form. ReturnNames accepts a string and returns a Variant. What I am actually trying to do here is:
1. I have a string of states present in the US (the states variable).
2. I split the states and assign the result to arrStates, an array that contains the list of states.
3. I read the value entered in the VB form's text box. If the value is "US" then I return all the states present inside the US (an array of states).
4.
If the value entered in the text box is something other than "US" then I return the name entered (this is simply a string).

So ReturnNames returns either an array or a string.

Note: This is just a simple example that doesn't make much sense in real applications; my intention is only to show how to use the Variant keyword.

So in VB a single function can return any type, like an array or a string or an int, etc. This can be achieved by declaring a variable as Variant.
1. With the Variant keyword in VB there is no necessity to declare the variable's type; it is resolved at runtime.
2. It can be assigned any value.

Now let us see some similar concepts in C#. There are a few concepts in C# that can bring dynamic behavior to the code. They are:
1. var
2. object
3. dynamic

var keyword in C#

var is an implicit type. It is an alias for all the types in C#. That is, we don't need to declare a variable explicitly with its type; we can simply use the var keyword.

var a = 10;
a = a + 10;
a = a + "10"; // This line throws a compile-time error.
Console.WriteLine(a.GetType());

When we assign a value to a var at declaration time, that variable can only be assigned values of that type from then on; its type is fixed at compile time itself. So the var keyword in C# is not at all the same as Variant in VB code.

object keyword in C#

The object class is the base class for all the classes in the framework, so a variable declared as object can be assigned anything.

object a = 10;
a = a + 10;                      // Compile-time error: '+' cannot be applied to 'object' and 'int'
a = Convert.ToInt32(a) + 10;     // OK with an explicit conversion; a is now 20
a = Convert.ToString(a) + "10";  // a is now the string "2010"
Console.WriteLine(a.GetType()); // Returns System.String

So from the above code we can understand a few things:
1. The type of a variable declared as object differs based on the statement and context in which it is used.
2.
Whenever we perform an operation on a variable declared as object, we are supposed to perform the cast explicitly.
3. We can assign values of different types to a variable declared as object.
4. Types are checked at compile time.

dynamic keyword in C#

The dynamic keyword was introduced in C# 4. A variable declared as dynamic can be assigned any value.

dynamic a = 10;
a = a + 10;                   // a is now 20
a = a + "10";                 // a is now the string "2010"
a = Convert.ToString(a) + 10; // a is now the string "201010"
Console.WriteLine(a.GetType()); // Returns System.String

In the following line, no cast is required, because the type is identified at run time only:

a = a + 10;

You can assign values of different types to a dynamic variable. Example:

a = "test";

Some points:
1. The type of a variable declared as dynamic differs based on the statement and context in which it is used.
2. Whenever we perform an operation on a variable declared as dynamic, there is no need to perform any casting, as the type is checked at runtime.
3. We can assign values of different types to a variable declared as dynamic.
4. Types are checked at runtime.

A difference between the object and dynamic keywords

Whenever we use the dynamic keyword we can assign anything to the variable and it will be resolved at runtime, but that is not the case with the object keyword. Let us take a small example that caught my attention.

object stringArray = new string[10];
stringArray[0] = "Pavan";  // Throws a compile-time error
stringArray[1] = "Rajesh"; // Throws a compile-time error
stringArray[2] = "Harsha"; // Throws a compile-time error

object[] stringarray1 = new String[10];
stringarray1[0] = "Pavan";
stringarray1[1] = "Rajesh";
stringarray1[2] = "Harsha";

dynamic stringArrayUsingDynamic = new string[10];
stringArrayUsingDynamic[0] = "Pavan";
stringArrayUsingDynamic[1] = "Rajesh";
stringArrayUsingDynamic[2] = "Harsha";

In the above code, when we apply indexing on stringArray it throws a compile-time error.
So even though we were able to assign a string array to stringArray, at the time of adding values it throws a compile-time error, so we are supposed to create an object[] for this purpose.

dynamic as the return type of a function

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace DynamicKeyword
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Enter US for an array, or enter some other value to get a string as the return type from ReturnVariable");
            string input_from_user = Console.ReadLine();
            dynamic z = ReturnVariable(input_from_user);
        }

        static dynamic ReturnVariable(string input_from_user)
        {
            dynamic k = "";
            string states = "Alaska,Indiana,MA,NY";
            k = states.Split(',');
            if (input_from_user.Contains("US"))
            {
                k = k;
            }
            else
            {
                k = "pavan";
            }
            return k;
        }
    }
}

Another beautiful feature of dynamic: it simplifies code involved in reflection

Suppose there is a DLL in the GAC or at some path, and we want to load it at runtime and call the methods present in it. Let us see how we do that in the traditional way:

string path = "C:\\Test.dll";
Assembly assm = Assembly.LoadFile(path);
Type typ = assm.GetTypes()[0];
object d = Activator.CreateInstance(typ);
object[] parameters = new object[] { "param1", "param2" };
object returnValue = typ.InvokeMember("Test",
    BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance,
    null, d, parameters);

So in order to call the "Test" method present inside Test.dll we need to follow these steps:
1. Load the assembly using Assembly.LoadFile.
2. Get the type using Assembly.GetTypes().
3. Create the object using Activator.CreateInstance().
4. Call InvokeMember with all the required parameters.

The above code can be simplified to some extent using the dynamic keyword:

string path = "C:\\Test.dll";
Assembly assm = Assembly.LoadFile(path);
Type typ = assm.GetTypes()[0];
dynamic d = Activator.CreateInstance(typ);
d.Test("param1", "Param2");
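For comparison, Python's late binding gives the same flavor as C#'s dynamic in the reflection example above: the member lookup is deferred to runtime, so a module chosen at runtime can be called directly. A small sketch using the standard importlib, with the stdlib's json module standing in for Test.dll:

```python
import importlib

# Pick the module at runtime, like Assembly.LoadFile on a path.
module_name = "json"
mod = importlib.import_module(module_name)

# getattr resolves the member at runtime, playing the role of
# InvokeMember -- or of the C# dynamic call d.Test(...).
dumps = getattr(mod, "dumps")
print(dumps({"param1": 1}))  # {"param1": 1}
```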
http://pavanarya.wordpress.com/2012/09/23/dynamic-key-word-in-c/
Type: Posts; User: Rob Aiello

Thanks much for all your help ... and patience!

That was it now it works! - in Command Prompt

When I try to compile in Dr.Java I still get the same error but I guess that's because I'm using the statement: import jxl.*; is it possible to...

ok I copied the jxl.jar file into the same folder as ExcelDemo.java Different error but essentially the same result:

I:\Java>javac -cp jxl.jar ExcelDemo.java
ExcelDemo.java:3: error: cannot...

I followed your instructions - I think: Below is the screen copy from command prompt

I:\Java>javac -cp jxl.jar ExcelDemo.java
ExcelDemo.java:3: error: package jxl does not exist
import jxl.jar;...

//I attempted the above but still no luck - the following is the code:

import jxl.jar;

public class JXLTest {
    public static void main(String[] args) {
        System.out.println("Test...

//I tried the following: import jxl.*; //and explicitly import jxl.jar;

I've: 1) added the excelapi subdirectory that contains the jar file to the system class path, then I tried explicitly adding the jxl.jar file to the classpath (below) c:\Program...
http://www.javaprogrammingforums.com/search.php?s=ee6f8a7df9128a26aaa3b51ef16b0d17&searchid=1133324
Code Owners no longer able to override "disable committer approval" for merge requests

Summary

Previously (before 13.11), if a merge request disallowed approvals from committers, code owners were still able to self-approve changes for files they owned. This is noted in the approval settings documentation. With GitLab 13.11, it is no longer possible to approve a merge request you contributed to, even if you were the code owner for the change.

Steps to reproduce

- Create or find a project with code owner approvals enabled. I used a new project with just a readme, protected the main branch, then added two code owners for the readme.
- Make sure the "Prevent MR approvals from users who make commits to the MR" setting is turned on.
- As one of the code owners, make a change and create a merge request for a file protected by the code owner.
- Notice you are unable to approve the merge request as a code owner who contributed to it, even though this was previously possible.

What is the current bug behavior?

Code owners cannot approve merge requests they made commits to if the corresponding setting is enabled, despite documentation and previous behavior suggesting this should be possible.

What is the expected correct behavior?

Code owners should override the "Prevent MR approvals from users who make commits to the MR" setting for files they own, as stated in documentation.
Relevant logs and/or screenshots

Big screenshots inside

Here is my test code owner file:

One of the code owners makes a commit and an MR to change the readme:

With the "Prevent MR approvals from users who make commits to the MR" setting enabled, the code owner cannot approve their merge request and only the second code owner is shown as an approver:

After unchecking the box for preventing self approval, both code owners, including the one that made the commit, can approve:

Output of checks

Results of GitLab environment info

This is my support team test instance; the report came from a customer who recently moved from 13.9.4 to 13.11.2 and noticed this after the upgrade when it worked before, so the change happened between those versions we think!

Expand for output related to GitLab environment info

System information
System: Ubuntu 18.04
Proxy: no
Current User: git
Using RVM: no
Ruby Version: 2.7.2p137
Gem Version: 3.1.4
Bundler Version: 2.1.4
Rake Version: 13.0.3
Redis Version: 6.0.12
Git Version: 2.31.1
Sidekiq Version: 5.2.9
Go Version: unknown

GitLab information
Version: 13.11.4-ee
Revision: d1a2e182d3b
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: PostgreSQL
DB Version: 12.6
URL:
HTTP Clone URL:
SSH Clone URL: git@gitlarb.party:some-group/some-project.git
Elasticsearch: no
Geo: no
Using LDAP: no
Using Omniauth: yes
Omniauth Providers:

GitLab Shell
Version: 13.17 >= 13.17.0 ? ... OK (13.17
(cluster/worker) ... 1? ... yes
Init script exists? ... skipped (omnibus-gitlab has no init script)
Init script up-to-date? ... skipped (omnibus-gitlab has no init script)
Projects have namespace: ...
2/1 ... yes
3/2 ... yes
11/4 ... yes
12/5 ... yes
1/6 ... yes
1/7 ... yes
1/8 ... yes
12/9 ... yes
1/10 ... yes
74/12 ... yes
1/13 ... yes
76/14 ... yes
1/20 ... yes
1/21 ... yes
1/22 ... yes
1/24 ... yes
11/25 ... yes
10/26 ... yes
11/27 ... yes
11/28 ... yes
11/29 ...
yes
1/30 ... yes
95/31 ... yes
96/32 ... yes
1/34 ... yes
3/35 ... yes
Redis version >= 5.0.0? ... yes
Ruby version >= 2.7.2 ? ... yes (2.7.2)
Git version >= 2.31.0 ? ... yes (2.31.1)
Git user has default SSH configuration? ... yes
Active users: ... 8
Is authorized keys file accessible? ... yes
GitLab configured to store new projects in hashed storage? ... yes
All projects are in hashed storage? ... no
Try fixing it:
Please migrate all projects to hashed storage as legacy storage is deprecated in 13.0 and support will be removed in 14.0.
For more information see: doc/administration/repository_storage_types.md
Elasticsearch version 7.x (6.4 - 6.x deprecated to be removed in 13.8)? ... skipped (elasticsearch is disabled)
Checking GitLab App ... Finished
Checking GitLab subtasks ... Finished
https://gitlab.com/gitlab-org/gitlab/-/issues/331548
This page first looks at the AgentCheck interface, and then proposes a simple Agent check that collects timing metrics and status events from HTTP services. Custom Agent checks are included in the main check run loop, meaning they run every check interval, which defaults to 15 seconds.

Agent checks are a great way to collect metrics from custom applications or unique systems. However, if you are trying to collect metrics from a generally available application, public service or open source project, we recommend that you write an Integration. Starting with version 5.9 of the Datadog Agent, a new method for creating integrations is available. Check out the integrations-extras GitHub repository to see other contributed integrations.

First off, ensure you've properly installed the Agent on your machine. If you run into any issues during the setup, contact our support.

The AgentCheck interface

All custom Agent checks inherit from the AgentCheck class found in checks/__init__.py and require a check() method that takes one argument, instance, which is a dict holding the configuration of a particular instance. The check method is run once per instance defined in the check configuration (discussed later).

Note: the AgentCheck interface for Agent v6

There are some differences between Agent v5 and Agent v6. The function signature of the metric senders changed from:

gauge(self, metric, value, tags=None, hostname=None, device_name=None, timestamp=None)

to:

gauge(self, name, value, tags=None, hostname=None, device_name=None)

Sending metrics in a check is easy. If you're already familiar with the methods available in DogStatsD, then the transition is very simple. Metrics are collected and flushed out with the other Agent metrics. At any time during your check, events are collected and flushed with the rest of the Agent payload.

Your custom Agent check can also report the status of a service by calling the self.service_check(...) method.
The service_check method accepts several arguments, including:

- hostname: (optional) The name of the host submitting the check. Defaults to the host_name of the Agent.
- check_run_id: (optional) An integer ID used for logging and tracing purposes. The ID doesn't need to be unique. If an ID is not provided, one is automatically generated.
- message: (optional) Additional information or a description of why this status occurred.

If a check cannot run because of improper configuration, programming error, or because it could not collect any metrics, it should raise a meaningful exception. This exception is logged and is shown in the Agent info command for easy debugging. For example:

$ sudo /etc/init.d/datadog-agent info

Checks
======

my_custom_check
---------------
- instance #0 [ERROR]: ConnectionError('Connection refused.',)
- Collected 0 metrics & 0 events

As part of the parent class, you're given a logger at self.log, so you can do things like self.log.info('hello'). The log handler is checks.{name} where {name} is the name of your check (based on the filename of the check module).

Each check has a YAML configuration file that is placed in the conf.d directory. The file name should match the name of the check module (e.g.: haproxy.py and haproxy.yaml). Note: YAML files must use spaces instead of tabs.

The configuration file has the following structure:

init_config:
  key1: val1
  key2: val2

instances:
  - username: jon_smith
    password: 1234
    min_collection_interval: 20
  - username: jane_smith
    password: 5678
    min_collection_interval: 20

For Agent 5, min_collection_interval can be added to the init_config section to help define how often the check should be run globally, or defined at the instance level. For Agent 6, min_collection_interval must be added at an instance level, and can be configured individually for each instance. If it is greater than the interval time for the Agent collector, a line is added to the log stating that collection for this script was skipped.
The default is 0, which means it's collected at the same interval as the rest of the integrations on that Agent. If the value is set to 30, it does not mean that the metric is collected every 30 seconds, but rather that it could be collected as often as every 30 seconds. The collector runs every 15-20 seconds depending on how many integrations are enabled. If the interval on this Agent happens to be every 20 seconds, then the Agent collects and includes the Agent check. The next time it collects 20 seconds later, it sees that 20 is less than 30 and doesn't collect the custom Agent check. The next time, it sees that the time since the last run was 40, which is greater than 30, and therefore the Agent check is collected.

The init_config section allows you to have an arbitrary number of global configuration options that are available on every run of the check in self.init_config. The instances section is a list of instances that this check is run against. Your actual check() method is run once per instance. This means that every check supports multiple instances out of the box.

Before starting your first check it is worth understanding the checks directory structure. Add files for your check in the checks.d folder, which lives in your Agent root. If your check module is named mycheck.py, your configuration file must be named mycheck.yaml.

To start off simple, write a check that does nothing more than sending a value of 1 for the metric hello.world. The configuration file is very simple, including no real information. This goes into conf.d/hello.yaml:

init_config:

instances: [{}]

The check itself inherits from AgentCheck and sends a gauge of 1 for hello.world on each call. This goes in checks.d/hello.py:

from checks import AgentCheck

class HelloCheck(AgentCheck):
    def check(self, instance):
        self.gauge('hello.world', 1)

Let's write a basic check that checks the status of an HTTP endpoint. On each run of the check, a GET request is made to the HTTP endpoint.
Based on the response, one of the following happens:

First, let's define how the configuration looks. It should be something like this:

init_config:
  default_timeout: 5

instances:
  - url:
  - url:
    timeout: 8
  - url:

Now let's define our check method. The main part of the check makes a request to the URL and times the response time, handling error cases as it goes. In this snippet, we start a timer, make the GET request using the requests library (learn how to add a custom Python package to the Agent) and handle any errors that might arise.

# Load values from the instance config
if r.status_code != 200:
    self.status_code_event(url, r, aggregation_key)

If the request passes, we want to submit the timing to Datadog as a metric. Let's call it http.response_time and tag it with the URL.

timing = end_time - start_time
self.gauge('http.response_time', timing, tags=['http_check'])

Finally, define what happens in the error cases. We have already seen that we call self.timeout_event in the case of a URL timeout, and we call self.status_code_event in the case of a bad status code. Let's define those methods now.

First, define timeout_event. Note that we want to aggregate all of these events together based on the URL, so we define the aggregation_key as a hash of the URL.

def timeout_event(self, url, timeout, aggregation_key):
    self.event({
        'timestamp': int(time.time()),
        'event_type': 'http_check',
        'msg_title': 'URL timeout',
        'msg_text': '%s timed out after %s seconds.' % (url, timeout),
        'aggregation_key': aggregation_key
    })

Next, define status_code_event the same way:

def status_code_event(self, url, r, aggregation_key):
    self.event({
        'timestamp': int(time.time()),
        'event_type': 'http_check',
        'msg_text': '%s returned a status of %s' % (url, r.status_code),
        'aggregation_key': aggregation_key
    })

The entire check would be placed into the checks.d folder as http.py. The corresponding configuration would be placed into the conf.d folder as http.yaml. Once the check is in checks.d, test it by running it as a Python script. Restart the Agent for the changes to be enabled. Make sure to change the conf.d path in the test method.
From your Agent root, run:

For Agent v5:

sudo -u dd-agent -- dd-agent check <check_name>

For Agent v6:

sudo -u dd-agent -- datadog-agent check <check_name>

And confirm the output:

# Load values from the instance configuration
if r.status_code != 200:
    self.status_code_event(url, r, aggregation_key)
    return
timing = end_time - start_time
self.gauge('http.response_time', timing, tags=['http_check'])

'msg_text': '%s returned a status of %s' % (url, r.status_code),
'aggregation_key': aggregation_key
})

Custom Agent checks can't be directly called from Python and instead need to be called by the Agent. To test this, run:

sudo -u dd-agent dd-agent check <CHECK_NAME>

If your issue continues, reach out to Support with the help page that lists the paths it installs.

For Agent version < 5.12: The Agent install includes a file called shell.exe in your Program Files directory for the Datadog Agent, which you can use to run Python within the Agent environment. Once your check (called <CHECK_NAME>) is written and you have the .py and .yaml files in their correct places, you can run the following in shell.exe:

from checks import run_check
run_check('<CHECK_NAME>')

This outputs any metrics or events that the check returns.

For Agent version >= 5.12: Run the following script, with the proper <CHECK_NAME>:

<INSTALL_DIR>/embedded/python.exe <INSTALL_DIR>agent/agent.py check <CHECK_NAME>

For example, to run the disk check:

C:\Program' 'Files\Datadog\Datadog' 'Agent\embedded\python.exe C:\Program' 'Files\Datadog\Datadog' 'Agent\agent\agent.py check disk

Additional helpful documentation, links, and articles:
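Two small pieces of the example above, the response timer and the URL-based aggregation key, can be sketched on their own with only the standard library. The md5-of-the-URL key is an assumption: the page only says the aggregation_key is "a hash of the URL", so the exact hash function is illustrative.

```python
import hashlib
import time

def aggregation_key(url):
    # One stable key per URL, so events for the same endpoint aggregate together
    return hashlib.md5(url.encode("utf-8")).hexdigest()

def timed_call(fn, *args):
    # Mirrors the check's start/end timer around the GET request
    start = time.time()
    result = fn(*args)
    return result, time.time() - start

key = aggregation_key("http://example.com")
result, elapsed = timed_call(lambda: "fake response")
```

Any deterministic hash works here; the only requirement is that the same URL always produces the same key so Datadog can group the events.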
https://docs.datadoghq.com/agent/agent_checks/
This project was bootstrapped with [Create React App](). $ cnpm install core-form This buildexits too early npm run buildfails on Heroku npm run buildfails to minify, otherwise. It also only works with npm 3. Then, add a file called .eslintrc to the project root: { "extends": "react-app" } Now your editor should report the linting warnings. Note that even if you edit your .eslintrc file further, these changes will only affect the editor integration. They won’t affect the terminal and in-browser lint output. This is because Create React App intentionally provides a minimal set of rules that find common mistakes. If you want to enforce a coding style for your project, consider using Prettier instead of ESLint style rules. This feature is currently only supported by Visual Studio Code and WebStorm. Visual Studio Code and WebStorm support. { "version": "0.2.0", "configurations": [{ "name": "Chrome", "type": "chrome", "request": "launch", "url": "", "webRoot": "${workspaceRoot}/src", "userDataDir": "${workspaceRoot}/.vscode/chrome", "sourceMapPathOverrides": { "webpack:///src/*": "${webRoot}/*" } }] } Note: the URL may be different if you've made adjustments via the HOST or PORT environment variables. Start your app by running npm start, and start debugging in VS Code by pressing F5 or by clicking the green debug icon. You can now write code, set breakpoints, make changes to the code, and debug your newly modified code—all from your editor. You would need to have WebStorm and JetBrains IDE Support Chrome extension installed. In the WebStorm menu Run select Edit Configurations.... Then click + and select JavaScript Debug. Paste into the URL field and save the configuration. Note: the URL may be different if you've made adjustments via the HOST or PORT environment variables. Start your app by running npm start, then press ^D on macOS or F9 on Windows and Linux or click the green debug icon to start debugging in WebStorm. 
The same way, you can debug your application in IntelliJ IDEA Ultimate, PhpStorm, PyCharm Pro, and RubyMine.

Prettier is an opinionated code formatter with support for JavaScript, CSS and JSON. With Prettier you can format the code you write automatically to ensure a code style within your project. See Prettier's GitHub page for more information, and look at this page to see it in action.

To format our code whenever we make a commit in git, we need to install the following dependencies:

npm install --save husky lint-staged prettier

Alternatively you may use yarn:

yarn add husky lint-staged prettier

- husky makes it easy to use githooks as if they are npm scripts.
- lint-staged allows us to run scripts on staged files in git. See this blog post about lint-staged to learn more about it.
- prettier is the JavaScript formatter we will run before commits.

Now we can make sure every file is formatted correctly by adding a few lines to the package.json in the project root. Add the following line to the scripts section:

  "scripts": {
+   "precommit": "lint-staged",
    "start": "react-scripts start",
    "build": "react-scripts build",

Next we add a 'lint-staged' field to the package.json, for example:

  "dependencies": {
    // ...
  },
+ "lint-staged": {
+   "src/**/*.{js,jsx,json,css}": [
+     "prettier --single-quote --write",
+     "git add"
+   ]
+ },
  "scripts": {

Now, whenever you make a commit, Prettier will format the changed files automatically. You can also run ./node_modules/.bin/prettier --single-quote --write "src/**/*.{js,jsx}" to format your entire project for the first time.

Next you might want to integrate Prettier in your favorite editor. Read the section on Editor Integration on the Prettier GitHub page.

npm install --save react-router

Alternatively you may use yarn:

yarn add react-router

This works for any library, not just react-router.

Instead of downloading the entire app before users can use it, code splitting allows you to split your code into small chunks which you can then load on demand.
This project setup supports code splitting via dynamic import(). Its proposal is in stage 3. The import() function-like form takes the module name as an argument and returns a Promise which always resolves to the namespace object of the module. Here is an example:

moduleA.js

const moduleA = 'Hello';

export { moduleA };

App.js

import React, { Component } from 'react';

class App extends Component {
  handleClick = () => {
    import('./moduleA')
      .then(({ moduleA }) => {
        // Use moduleA
      })
      .catch(err => {
        // Handle failure
      });
  };

  render() {
    return (
      <div>
        <button onClick={this.handleClick}>Load</button>
      </div>
    );
  }
}

export default App;

This will make moduleA.js and all its unique dependencies a separate chunk that only loads after the user clicks the 'Load' button. You can also use it with async / await syntax if you prefer it. If you are using React Router check out this tutorial on how to use code splitting with it. You can find the companion GitHub repository here.

npm install --save node-sass-chokidar

Alternatively you may use yarn:

yarn add node-sass-chokidar

Then in package.json, add the following lines to scripts:

+ "build-css": "node-sass-chokidar src/ -o src/",
+ "watch-css": "npm run build-css && node-sass-chokidar

To enable importing files without using relative paths, you can add the --include-path option to the command in package.json.
"build-css": "node-sass-chokidar --include-path ./src --include-path ./node_modules src/ -o src/", "watch-css": "npm run build-css && node-sass-chokidar --include-path ./src --include-path ./node_modules src/ -o src/ --watch --recursive", This will allow you to do imports like @import 'styles/_colors.scss'; // assuming a styles directory under src/ @import 'nprogress/nprogress'; // importing a css file from the nprogress node module npm-run-all Alternatively you may use yarn: yarn add npm-run-all Then we can change start and build scripts to include the CSS preprocessor commands: "scripts": { "build-css": "node-sass-chokidar src/ -o src/", "watch-css": "npm run build-css && node-sass-chokidar. Why node-sass-chokidar? node-sass has been reported as having the following issues: node-sass --watch. With Webpack, using static assets like images and fonts works similarly to CSS. You can import a file right in a JavaScript module. This tells Webpack to include that file in the bundle. Unlike CSS imports, importing a file gives you a string value.: bmp, gif, jpg, jpeg, and png. SVG files are excluded due to #1153. --save react-bootstrap bootstrap@3 Alternatively you may use yarn: yarn add react-bootstrap bootstrap@3 Import Bootstrap CSS and optionally Bootstrap theme CSS in the beginning of your src/index.js file: import 'bootstrap/dist/css/bootstrap.css'; import 'bootstrap/dist/css/bootstrap-theme.css'; // Put any other imports below so that CSS from your // components takes precedence over default styles. flow-bin(or yarn add.env.NODE_ENV !== 'production') { analytics.disable(); }: <title>%REACT_APP_WEBSITE_NAME%</title> .env files should be checked into source control (with the exclusion of .env*.local). .envfiles are can be used? Note: this feature is available with react-scripts@1.0.0and higher. 
npm start: .env.development.local, .env.development, .env.local, .env
npm run build: .env.production.local, .env.production, .env.local, .env

When you enable the proxy option, you opt into a more strict set of host checks. This is necessary because leaving the backend open to remote hosts makes your computer vulnerable to DNS rebinding attacks. The issue is explained in this article and this issue. This shouldn't affect you when developing on localhost, but if you develop remotely like described here, you will see this error in the browser after enabling the proxy option:

Invalid Host header

To work around it, you can specify your public development host in a file called .env.development in the root of your project:

HOST=mypublicdevhost.com

If you restart the development server now and load the app from the specified host, it should work. If you are still having issues or if you're using a more exotic environment like a cloud editor, you can bypass the host check completely by adding a line to .env.development.local. Note that this is dangerous and exposes your machine to remote code execution from malicious websites:

# NOTE: THIS IS DANGEROUS!
# It exposes your machine to attacks from the websites you visit.
DANGEROUSLY_DISABLE_HOST_CHECK=true

We don't recommend this approach.

Note: this feature is available with react-scripts@1.0.0 and higher.

If the proxy option is not flexible enough for you, you can specify an object in the following form (in package.json). You may also specify any configuration value http-proxy-middleware or http-proxy supports.

{
  // ...
  "proxy": {
    "/api": {
      "target": "<url>",
      "ws": true
      // ...
    }
  }
  // ...
}

All requests matching this path will be proxied, no exceptions. This includes requests for text/html, which the standard proxy option does not proxy.

If you need to specify multiple proxies, you may do so by specifying additional entries. You may also narrow down matches using * and/or **, to match the path exactly or any subpath.

{
  // ...
"proxy": { // Matches any request starting with /api "/api": { "target": "<url_1>", "ws": true // ... }, // Matches any request starting with /foo "/foo": { "target": "<url_2>", "ssl": true, "pathRewrite": { "^/foo": "/foo/beta" } // ... }, // Matches /bar/abc.html but not /bar/sub/def.html "/bar/*.html": { "target": "<url_3>", // ... }, // Matches /baz/abc.html and /baz/sub/def.html "/baz/**/*.html": { "target": "<url_4>" // ... } } // ... } When setting up a WebSocket proxy, there are a some extra considerations to be aware of. If you’re using a WebSocket engine like Socket.io, you must have a Socket.io server running that you can use as the proxy target. Socket.io will not work with a standard WebSocket server. Specifically, don't expect Socket.io to work with the websocket.org echo test. There’s some good documentation available for setting up a Socket.io server. Standard WebSockets will work with a standard WebSocket server as well as the websocket.org echo test. You can use libraries like ws for the server, with native WebSockets in the browser. Either way, you can proxy WebSocket requests manually in package.json: { // ... "proxy": { "/socket": { // Your compatible WebSocket server "target": "ws://<socket_url>", // Tell http-proxy-middleware that this is a WebSocket proxy. // Also allows you to proxy WebSocket requests without an additional HTTP request // "ws": true // ... } } // ... }> window.SERVER. To install it, run: npm install --save enzyme react-test-renderer Alternatively you may use yarn: yarn add enzyme react-test-renderer You can write a smoke test with it too:.';
https://npm.taobao.org/package/core-form/v/0.1.3
The file compiles correctly. I just want to make sure I assigned the function and variables correctly, because when I run the program the output is always 0 regardless of the inputted numbers.

Code:

#include <iostream>
#include <cmath>

using namespace std;

const double G = 6.673 * pow(10, -8);

int main()
{
    double value_1, value_2, distance, gravitational_force;
    char response = 'y';

    while (response == 'y' || response == 'y')
    {
        cout << "Welcome to the Gravitational Force program." << endl;
        cout << endl;
        {
            cout << "Please enter the mass of the first object: ";
            cin >> value_1;
            cout << endl;
            cout << "Please enter the mass of the second object: ";
            cin >> value_2;
            cout << endl;
            cout << "Please enter the distance between the two objects: ";
            cin >> distance;
            cout << endl;
            int gravitational_force = (G * value_1 * value_2) / pow(distance, 2);
            cout << "The gravitation force between the two objects is ";
            cout << gravitational_force;
            cout << " dynes.";
            cout << endl;
        }
        cout << "Would you like to run the program again? (Y for yes, anything else for no) ";
        cin >> response;
    }
    return 0;
}
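A likely explanation for the constant 0 (my reading of the posted code, not a confirmed answer from the thread): inside the loop the result is stored in a freshly declared int gravitational_force, which shadows the double declared at the top, and in CGS units the force between everyday masses is a tiny fraction of a dyne, so truncating it to an integer yields 0. A quick Python check with made-up sample inputs illustrates the magnitude:

```python
G = 6.673e-8  # dyne*cm^2/g^2, same constant as in the C++ program

def gravitational_force(m1, m2, d):
    # Newton's law of gravitation in CGS units
    return G * m1 * m2 / d ** 2

# Hypothetical inputs: two 100 g masses, 10 cm apart
f = gravitational_force(100, 100, 10)
print(f)       # roughly 6.673e-06 dynes
print(int(f))  # truncated to an int, as the C++ code does: 0
```

Declaring the result as a double (or just assigning to the already-declared gravitational_force) would keep the fractional value.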
http://cboard.cprogramming.com/cplusplus-programming/141359-function-printable-thread.html
I have this code

import random  # bring in the random number
import time

number = random.randint(1, 200)  # pick the number between 1 and 200

def intro():
    print("May I ask you for your name?")
    name = input()  # asks for the name
    print(name + ", we are going to play a game. I am thinking of a number between 1 and 200")
    time.sleep(.5)
    print("Go ahead. Guess!")

def pick():
    guessesTaken = 0
    while guessesTaken < 6:  # if the number of guesses is less than 6
        time.sleep(.25)
        enter = input("Guess: ")  # inserts the place to enter guess
        try:  # check if a number was entered
            guess = int(enter)  # stores the guess as an integer instead of a string
            if guess <= 200 and guess >= 1:  # if they are in range
                guessesTaken = guessesTaken + 1  # adds one guess each time the player is wrong
                if guessesTaken < 6:
                    if guess < number:
                        print("The guess of the number that you have entered is too low")
                    if guess > number:
                        print("The guess of the number that you have entered is too high")
                    if guess != number:
                        time.sleep(.5)
                        print("Try Again!")
                if guess == number:
                    break  # if the guess is right, then we are going to jump out of the while block
            if guess > 200 or guess < 1:  # if they aren't in the range
                print("Silly Goose! That number isn't in the range!")
                time.sleep(.25)
                print("Please enter a number between 1 and 200")
        except:  # if a number wasn't entered
            print("I don't think that " + enter + " is a number. Sorry")
    if guess == number:
        guessesTaken = str(guessesTaken)
        print('Good job, ' + name + '! You guessed my number in ' + guessesTaken + ' guesses!')
    if guess != number:
        number = str(number)
        print('Nope. The number I was thinking of was ' + number)

playagain = "yes"
while playagain == "yes" or playagain == "y" or playagain == "Yes":
    intro()
    pick()
    print("Do you want to play again?")
    playagain = input()

I don't know why, but if I run this script, no matter what I enter for my guess, it goes with print("I don't think that " + enter + " is a number. Sorry") even if it is a number. I am not sure how to fix this.
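One likely culprit (an inference from the posted code, not a confirmed fix from the thread): the assignment number = str(number) near the bottom of pick() makes number a local variable for the whole function, so the earlier comparisons such as guess > number raise UnboundLocalError, and the bare except: swallows that error and prints the "not a number" message for every input. Catching only ValueError keeps genuine parse errors handled while letting real bugs surface; a minimal sketch:

```python
def check_guess(guess_text, secret):
    """Parse a guess and compare it; only a genuine non-number hits the error path."""
    try:
        guess = int(guess_text)
    except ValueError:  # a bare 'except:' here would also hide unrelated bugs
        return "I don't think that " + guess_text + " is a number. Sorry"
    if guess < secret:
        return "too low"
    if guess > secret:
        return "too high"
    return "correct"

print(check_guess("10", 50))   # too low
print(check_guess("abc", 50))  # I don't think that abc is a number. Sorry
```

With the narrow except in place, an UnboundLocalError (or any other real bug) would propagate with a full traceback instead of being mistaken for bad input.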
http://css-tricks.com/forums/topic/how-to-do-try-except-in-python/
It is quite common to come across a screen where part or all of the layout must be built dynamically, based on runtime decisions made by the user. A typical example is a screen used for submission of a report process. Such a screen is made of two blocks: in the first block the user inputs a report process name and presses a Search button; based on the report name specified, a list of arguments appears in the lower block, against which the user is supposed to provide values.

In general, against every field in the JSP, the ActionForm bean has to have a pair of accessor and mutator methods (getter and setter methods). In this scenario, since the number of fields varies, there could be a problem if some fields do not have a corresponding accessor and mutator method.

Best practices

There are three possible solutions to this problem:

- Name all the fields in a specific pattern like field1, field2, field3, etc., and provide their accessor and mutator methods in the FormBean. In this approach there is a maximum limit on the number of fields that can appear on screen.
- The purpose of the FormBean is to reduce the overhead of doing getParameter() in the Servlet. We always have the option of not using a FormBean and doing the getParameter() in the Action class.
- The third option is to utilize the setProperty()/getProperty() methods provided by Struts especially for this type of problem.

In case we have a small and fixed number of fields on screen, it is not a bad idea to go for the first approach described above, but in general it has been found that the last approach is the best one. To use this, define two methods in your ActionForm bean class. These methods may in turn be written to store the values into a HashMap. Populate all the required fields in the form Action class or any helper class.
public class testVarForm extends ActionForm {

    private HashMap hMap = new HashMap();

    public testVarForm() {
    }

    public void setProperty(String key, Object value) {
        this.hMap.put(key, value);
    }

    public Object getProperty(String key) {
        return this.hMap.get(key);
    }

    public HashMap getHashMap() {
        return this.hMap;
    }

    public void setHashMap(HashMap newHMap) {
        this.hMap = newHMap;
    }
}

Then use these fields and their respective values in your JSP using the following tag.

<html:text
<html:text

OR

<bean:write

--Puneet Agarwal

There's a fourth, and, IMHO, more elegant option available based on the JavaBeans standard: Use indexed properties. In your HTML, you can use:

{{{
<html:text;
<html:text;
}}}

If you use dyna forms, you specify the type as an implementation class of a List (e.g. java.util.ArrayList). If you are using hand coded forms, just use accessor and mutator methods like:

{{{
public Object getX(int index) {
    return x.get(index);
}

public void setX(int index, Object value) {
    x.set(index, value);
}
}}}

One gotcha: these are naive implementations that don't check for index out of bounds; production code must be more robust.

Delve a bit more into the API and you'll find that Struts builds on this concept to allow you to use mapped properties as well.

-- Java Architect

Also see: StrutsCatalogInstrumentableForms

-- Michael McGrady
http://wiki.apache.org/struts/StrutsCatalogVariableScreenFields?highlight=FormBean
In the previous articles, we have talked about monitoring using Prometheus and other ways. In this article, we are going to talk about how you can write your own exporter using Python.

To write your own exporter you need to use the prometheus_client library. To install it, you can type the below command.

pip install prometheus_client

Now let's look at the code where we will export the metrics so that our Prometheus can scrape those metrics.

from prometheus_client import start_http_server, Summary
import random
import time

REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')

@REQUEST_TIME.time()
def process_request(t):
    time.sleep(t)

if __name__ == '__main__':
    start_http_server(9000)
    while True:
        process_request(random.random())

In the above code we have imported prometheus_client and started the HTTP server, which will serve the metrics endpoint. We have created a REQUEST_TIME metric with the help of Summary; its .time() method acts as a decorator that can be added to any function to check how much time it took to execute that part of the code. Then in our process_request function, we added a random sleep to simulate the time taken to execute the function. In main, we just started the server and then, in a loop, called the process_request function with random values. This will trigger the REQUEST_TIME decorator and the metrics get recorded. If you open localhost:9000/metrics you will see something like below.

Now you can add this endpoint in Prometheus to start scraping:

- job_name: python
  static_configs:
    - targets: ['localhost:9000']

Now your Prometheus will start scraping the metrics. You can then use Grafana to plot the metrics. This was how you can write a very basic Prometheus exporter and plot its metrics in Grafana. If you like the article, please share and subscribe.
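Prometheus scrapes plain text in its exposition format from the /metrics endpoint. If you are curious what that text looks like, here is a stdlib-only sketch that renders a metric family in that format by hand. This is a hand-rolled illustration, not prometheus_client's actual output, and the sample values are invented:

```python
# Hand-rolled sketch of the Prometheus text exposition format.
# For illustration only; prometheus_client generates this text for you
# via its start_http_server()/generate_latest() machinery.

def render_metric(name, help_text, metric_type, samples):
    # samples: list of (labels_dict, value) pairs
    lines = [
        "# HELP {} {}".format(name, help_text),
        "# TYPE {} {}".format(name, metric_type),
    ]
    for labels, value in samples:
        if labels:
            label_str = ",".join(
                '{}="{}"'.format(k, v) for k, v in sorted(labels.items())
            )
            lines.append("{}{{{}}} {}".format(name, label_str, value))
        else:
            lines.append("{} {}".format(name, value))
    return "\n".join(lines) + "\n"

print(render_metric(
    "request_processing_seconds",
    "Time spent processing request",
    "summary",
    [({"quantile": "0.5"}, 0.43), ({}, 12.7)],
))
```

Output in roughly this shape is what you would see at localhost:9000/metrics.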
https://www.learnsteps.com/writing-metrics-exporter-for-prometheus-using-python/
they are essentially protected by virtue of the fact that you have to use the object

I would be careful about calling much in Perl protected. Sure, general politeness will keep most people from using the package in unintended ways, but Perl sure won't:

package Example::Module;
use strict;

sub new { bless {}, shift }

sub foo {
    my ($s, $v1, $v2) = @_;
    print "Hi '$v1' and '$v2'\n";
}

1;

Your namespace isn't polluted:

use strict;
use warnings;
use mymodule;

print "My Test\n";
Func1();             # ERROR!
Func2("Fluffy", 5);  # ERROR!

But check this out:

use strict;
use warnings;
use mymodule qw( Func1 );  # Import Func1
#use mymodule qw( Func2 ); # This won't work since
                           # Func2 is not in @EXPORT_OK.

print "My Test\n";
Func1();
Func2("Fluffy", 5);  # ERROR!

You should never export methods. As you can see, there's no reason to do so. All functions in a package are tied to the object blessed into that package, whether you use Exporter or not.

and how can I prevent it as it seems to be allowing everything into the calling script's namespace?

It's possible to disguise a function using lexicals and/or closures so that no one outside the package can call it, but there's no need to do so. Write proper documentation instead. Study the following short program and its output:

use strict;

sub Foo::bar {
    print 'called on line ', (caller)[2], ": $_[0]\n" and $_[0];
}

my $frobozz = bless \&Foo::bar, 'Foo';
$frobozz->bar('eenie');
$frobozz->bar('eenie')->('meenie');
$frobozz->('eenie');
Foo->bar('eenie');
Foo->bar('eenie')->bar('meenie');
Foo::bar('eenie');
'Foo'->bar('eenie');
'Foo::bar'->('eenie');
"Foo'bar"->('eenie');  # Perl trivia
# bar('eenie');        # bombs!

require Exporter;
@Foo::ISA = ('Exporter');
@Foo::EXPORT_OK = ('bar');
Foo->import('bar');
bar('eenie');

__END__

Note! The above is certainly not an example of good coding. Its only purpose is to illustrate some aspects of Perl.
the lowliest monk

I believe you should "use Exporter" not "require Exporter"

I believe you should "require Exporter" or "use base 'Exporter'", not "use Exporter". The Exporter module defines &Exporter::import so that it can be inherited by other modules. Calling Exporter::import('Exporter') doesn't really make sense.

The only difference between "use base 'Exporter';" and either "use Exporter;" or "require Exporter;" is that "use base" will modify @ISA for you and the others make you modify it yourself. You either have to have Exporter in your @ISA or have done something like *import = \&Exporter::import; to get the benefits of Exporter. Nothing more, nothing less.

use Exporter;
our @ISA = qw(Exporter);
http://www.perlmonks.org/index.pl?node_id=439989
Occasionally you need to know what the battery level of the device is. This is pretty simple with the battery package from the Flutter plugins.

Download

Visit the GitHub repository to clone the source for this application. This code was tested with Flutter 0.5.1.

Setup

We're only going to cover the actual interaction with the battery package and not the rest of the app. As such, we'll start at the starting_point tag in the repo. The app and the battery display are all built by this point and just waiting for the updates to happen. Once the code is checked out, just run git checkout starting_point to update your local copy to the tag. Our finished version is tagged as with_battery.

The app already has a basic battery icon that shows the charging state and charge level. It is initialized to a default state of charging with a 75% charge level. We're going to make these values update dynamically. We will use the variables _batteryState to store the current charging state, and _batteryLevel to store the current charge level.

The Code

Step 1: Import the package

The following steps were already done for this tag, but we're going to cover them quickly. To add the plugin to our app, we need to add the following line to our pubspec.yaml under "dependencies:"

battery: ^0.2.2

If you're using VSCode or Android Studio, they should either prompt you to update your dependencies or just do it automatically. If you're not, then run flutter packages get to download our new dependencies. Next, we need to add the import to the top of our lib/main.dart file so that our code can use the new dependency.

import 'package:battery/battery.dart';

Step 2: Add in the callback handling

In lib/main.dart, we need to update our state handling to account for the new library. All of the following code will be added inside the _BatteryLevelPageState class.
Step 2.1: Create a plugin instance

This is pretty straightforward, but we need to create an instance of the plugin in our code so that we can interact with it. Fun fact: the plugin uses a factory to implement the singleton pattern. Add the following line of code to the top of the _BatteryLevelPageState class.

final Battery _battery = Battery();

We make this variable final because we never intend to reassign it to a new value. The name, _battery, is prefixed with an underscore to denote that the variable is "private".

Step 2.2: Set up our listeners

The battery plugin provides two asynchronous properties that we are going to interact with. First, batteryLevel, which provides us with a 0 to 100 representation of the current charge level. Second, onBatteryStateChanged, which will inform us when the charging state of the battery changes or the battery level percentage has changed.

In the initState method of _BatteryLevelPageState, remove the current initializers for _batteryLevel and _batteryState and add the following code instead:

_battery.batteryLevel.then((level) {
  this.setState(() {
    _batteryLevel = level;
  });
});

_battery.onBatteryStateChanged.listen((BatteryState state) {
  _battery.batteryLevel.then((level) {
    this.setState(() {
      _batteryLevel = level;
      _batteryState = state;
    });
  });
});

Our first block of code is accessing the batteryLevel future. We're using .then to wait for the value to be available, then updating our state to store that value. Our second block is roughly equivalent, except that onBatteryStateChanged is a stream that will supply us a new value periodically. We listen for any of these changes, re-query the current battery level, then update our state with both new values. And now, it's beautiful:

Step 3: Finishing up

All the heavy lifting was done already to make the _batteryState and _batteryLevel values be displayed. When we call setState, we are notifying the widget that it needs to repaint with new values.
Then the magic all happens here:

child: CustomPaint(
  painter: _BatteryLevelPainter(_batteryLevel),
  child: _batteryState == BatteryState.charging
      ? Icon(Icons.flash_on)
      : Container(),
),

We have a CustomPainter to draw the battery and its charge level. Then, if we're charging, we tell the painter to draw a lightning icon inside itself.

Step 4: Caveats

This is really only testable in an emulator with Android. The iOS emulators do not support changing or handling the battery state.

Step 5: Credits

I borrowed heavily from the battery_indicator plugin for the custom painter that draws the shape of the battery. If you don't feel like drawing your own, the plugin above is easily customized to fit your needs.
https://flutter.institute/tracking-your-battery/
The Reader class of the java.io package is an abstract superclass that represents a stream of characters. Since Reader is an abstract class, it is not useful by itself. However, its subclasses can be used to read data.

Subclasses of Reader

In order to use the functionality of Reader, we can use its subclasses. Some of them are:

- BufferedReader
- InputStreamReader
- FileReader
- StringReader

We will learn about all these subclasses in the next tutorial.

Create a Reader

In order to create a Reader, we must import the java.io.Reader package first. Once we import the package, here is how we can create the reader.

// Creates a Reader
Reader input = new FileReader();

Here, we have created a reader using the FileReader class. It is because Reader is an abstract class; hence we cannot create an object of Reader.

Note: We can also create readers from other subclasses of Reader.

Methods of Reader

The Reader class provides different methods that are implemented by its subclasses. Here are some of the commonly used methods:

- ready() - checks if the reader is ready to be read
- read(char[] array) - reads the characters from the stream and stores them in the specified array
- read(char[] array, int start, int length) - reads the number of characters equal to length from the stream and stores them in the specified array starting from start
- mark() - marks the position in the stream up to which data has been read
- reset() - returns the control to the point in the stream where the mark is set
- skip() - discards the specified number of characters from the stream

Example: Reader Using FileReader

Here is how we can implement Reader using the FileReader class. Suppose we have a file named input.txt with the following content.

This is a line of text inside the file.

Let's try to read this file using FileReader (a subclass of Reader).
import java.io.Reader;
import java.io.FileReader;

class Main {
    public static void main(String[] args) {
        // Creates an array of characters
        char[] array = new char[100];

        try {
            // Creates a reader using the FileReader
            Reader input = new FileReader("input.txt");

            // Checks if reader is ready
            System.out.println("Is there data in the stream? " + input.ready());

            // Reads characters
            input.read(array);
            System.out.println("Data in the stream:");
            System.out.println(array);

            // Closes the reader
            input.close();
        } catch (Exception e) {
            e.getStackTrace();
        }
    }
}

Output

Is there data in the stream? true
Data in the stream:
This is a line of text inside the file.

In the above example, we have created a reader using the FileReader class. The reader is linked with the file input.txt.

Reader input = new FileReader("input.txt");

To read data from the input.txt file, we have implemented these methods.

input.read();  // to read data from the reader
input.close(); // to close the reader

To learn more, visit Java Reader (official Java documentation).
https://www.programiz.com/java-programming/reader
This is due to the way GLI handles metadata. When it skims the archive files looking for extracted metadata, it ignores any with a namespace - assuming that this metadata was assigned using metadata.xml files. So the exif metadata, which is like exif.xxx, won't show up in GLI, and therefore doesn't appear in the list of items to index.

This has been changed for the next release: the metadata will be stored as ex.exif.xxx, and GLI will look for ex. metadata, so it will appear in the enrich panel, and will show up in metadata lists. In the meantime, you can edit the collect.cfg file by hand (make sure the collection is not open in GLI), and add the metadata you want indexed into the indexes list.

Regards,
Katherine

Renate Morgenstern wrote:
> Hi,
> I have a JPEG image collection and use the EmbeddedMetadata plugin to
> extract the EXIF, IPTC and XMP data.
> The metadata is extracted as I can display it in any format, however, I
> can't search on this extracted metadata. I have added some values in
> Dublin Core fields, and when searching on this it gives results. For the
> index I selected and also across all fields.
> Any suggestion how to get the EXIF, etc data into the index?
> Thanks in advance for any help.
> Regards
> Renate
http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0gsarch--00-0----0-10-0---0---0direct-10---4-----dfr--0-1l--11-en-50---20-help-Jonathan+Pattison--00-0-1-00-0--4----0-0-11-10-0utfZz-8-00&a=d&cl=CL3.9.7&d=4C58C064-7080309-cs-waikato-ac-nz
Heads up, I did a weekend post on project frontier, so if you're one of the many Monday-Friday readers (aren't you supposed to be working?) then you might have missed it. In that post, Reader Jordi asks:

Hi Shamus, why don't you make your own file formats that are maximally efficient for your situation? Although writing a compiler/converter from a third party file format to your own takes some extra effort (although I imagine this would be fairly easy), I think it has 2 major advantages: 1) You get to select the third party format purely based on what modeling/animation tools you want to use, or how easy the format is to parse without having to worry about efficiency (both in terms of extra data/memory and parsing speed). 2) In your game you get to use maximally efficient and flexible file formats because you can custom tailor them to your own unique situation.

Ah. I really wanted to take this route. But my art path begins with Blender. (I'll get into why later. I'm not planning on using it myself, I'll tell you that.) If I knew Python and knew how to make Python talk to Blender, then I'd do this in a heartbeat. But I don't, and "learn a new language so you can avoid learning a new file format" isn't really the most efficient way of doing things. Particularly since I'd still have to write the Python scripts, and then write C++ code to read the resulting files. So we're stuck with whatever goofball lame-brain files we can wring out of Blender.

Now, my first instinct is to write a model importer and replace Sticky the Stickman with proper geometry. But, I remind myself, we're working backwards this time. So instead of bringing in a model, we're going to bring in an animation and have Sticky perform it. I search around and find this Creative Commons character:

He's perfect for my tests:

- He's low poly. Later, when I'm working on a model importer, it will be a lot easier to test on this 400 polygon guy than on some 10,000 polygon beast.
When points go out of place, I'll have a sporting chance at being able to spot the problem, instead of finding myself looking at a dense tangle of visually indecipherable geometry.
- He comes with several animations. In Blender, he can run, punch, and stand. This gives me a variety of animations for testing.
- He's a minimalist humanoid. No guns, armor, furry sidekicks, fancy clothing, or wild anime hair. It's just a guy, and nothing else. Again, simplicity is clarity.

Note that this is NOT something to be used in the game. This is only for research.

My research begins with the following highly rigorous scientific approach: I open the guy up in Blender. After just ten minutes I figure out how to select his animations, and then I go to File » Export and save the animation in every format available. COLLADA, Stanford .ply, Motion Capture .bvh, Stl, 3ds, obj, fbx, and x3d.

Once I have a nice, even coating of random files spewed all over my desktop, I start opening up the files in a text editor. Most files are binary. (And thus gibberish in a text editor.) I set those aside. Text files are far easier to work with than binary files. You can read a text file yourself, which lets you know what the file "really" says. This makes it fairly easy to spot the difference between a design problem (I'm misunderstanding the format) and an implementation problem. (A regular old software bug.)

I inspect the files, and find that .bvh files look very, very straightforward. It's a description of a skeleton (which I don't need, because I already have one) followed by a pure list of rotations. "Turn the shoulder 30 degrees. Twist the spine -15 degrees. Rotate the knee 20 degrees." Smashing. This is exactly what I want.

Some animation files (perhaps most animation files) are inextricably linked to a particular skeleton. So every skeleton in the game (say, men and women) needs to have its own animations.
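Since a .bvh MOTION section is just rows of numbers in the channel order the skeleton declares, slicing one frame into per-joint rotations takes only a few lines. Here is a minimal sketch; the joint names and channel layout are invented for illustration, since a real file declares these in its HIERARCHY section:

```python
# Slice one MOTION-section frame line into per-joint channel values.
# The channel layout below is a made-up example, not taken from a real file.

channels = [
    ("Hips", ["Xposition", "Yposition", "Zposition",
              "Zrotation", "Xrotation", "Yrotation"]),
    ("LeftKnee", ["Zrotation", "Xrotation", "Yrotation"]),
]

def parse_frame(line, channels):
    values = [float(v) for v in line.split()]
    frame, i = {}, 0
    for joint, chans in channels:
        # Each joint consumes the next len(chans) numbers on the line.
        frame[joint] = dict(zip(chans, values[i:i + len(chans)]))
        i += len(chans)
    return frame

frame = parse_frame("0.0 95.0 0.0 5.0 -10.0 2.5 20.0 0.0 0.0", channels)
# frame["LeftKnee"]["Zrotation"] is now 20.0
```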
That’s fine if you’re a big studio and you can solve problems with money, but indies can’t go around doubling their workload on a whim. There’s a bit of ambiguity that confuses me for a while. The designer of this model has joints for both “shoulders” and “upper arms”, and my model has a single joint for the shoulders. This leads to some puzzling movement until I discover the problem. The coordinate systems don’t agree, either. My world is set up so that positive X is east, positive Y is south, and positive Z is up. Better than half of the graphics engines out there use use Y for vertical and Z for north / south. (Notch is a loose cannon. He uses Z for east / west. I’ve NEVER seen that before.) My coordinate system was chosen carefully. 1 unit = 1 meter = 1 square of terrain detail. I can covert between “map coords” and “world coords” by just throwing away the Z value. (In my previous job where Y was “up”, this was done by throwing away Y, then making Z the new Y, then dividing the new X, Y values by 10. I’m sure you can see how this could lead to occasional confusion.) But the person who made this animation has another system in mind, and so I have to sort that out. The shoulders rotate on the wrong axis for me. Instead of swinging forward, they rotate in place. (Imagine your arms at your sides. Now turn your arm so that the palm of your hand faces forward, then behind you, etc.) The knees move backwards for a while and things are generally creepy and strange until I get it all sorted. I really wish I’d taken a screenshot of the process. In any event, I manage to figure it out. I have an animation system, and Sticky the Stickman can now run in place. I forgot to take screenshots, but he’re’s a shot of Sticky doing the sine-wave animations from earlier, which we can pretend is a victory dance. Just one step left. I have to read in a skeleton, vertex points, and polygons. It’s the hardest step, but at least i know everything underneath it works properly. 
86 thoughts on "Project Frontier #13: An Animated Topic"

amazing! your fast pace is simply breast taking, making a game seem really easy (when YOU do it). it's a pity you don't know python. it's a beautiful and elegant language that you can, really, learn in one day.

Let's hope his wife doesn't find out about his new pace, then. :-P

Every time I read the second paragraph of this comment, I get deja vu.

Just a heads up, Shamus, if you're using that comic to start learning: That was Python 2.x. The "Hello World Program" in Python 3.x is:

print("Hello World")

[/pedantic] import antigravity still works the same though.

Breast taking, that's a new one damm!

well, talk about lapsus :-T

Very nice animation work there.

Thank you very much for answering my question, Shamus! I hope you don't mind me hammering on about it a bit. You say you would want to use Python to talk to Blender directly. While I agree that would be awesome, it wasn't really what I had in mind. What I was thinking is that you could just take any easy-to-parse file format (e.g. the bvh one) that isn't necessarily very efficient for you, parse it (which you have to do either way) and then, instead of immediately using it in your game, save it to a format that you made up and that is exactly what you want it to be. I can see why Python or another scripting language would be suited for that, but you can also do it with C++. In fact, you should probably be able to find a generic object serialization library for C++ (although writing your own binary serialization for the object should not be a problem for you either).
That way, when you parse the bvh file into your Animation class (I assume you have, or could easily make, something like this), do all kinds of preprocessing that you might want, and serialize the object to disk. Then, in the game, where performance really matters, you deserialize it and instantly have your Animation object ready to go. Okay, I'm done now. :)

I'd just like to add that I agree with benjamin and am in awe of how fast you are accomplishing all these cool things. And happy Independence Day!

Actually, technically python is INSIDE of blender.. you just have to open the script window. The route I took was to take a well defined format (.obj) and tweak the export script to include extra data that I want in the file..

I would like to say I have something intelligent and funny to say about Sticky the Stickman. But I don't. Uh… "Victory Dance emote for Sticky?" :D Nah… :/ Keep up the good work, and Happy Fourth! :)

nice animations.

OT: how is the book coming along?

Spoilsport. I thought you hate Blender…

And I wholeheartedly agree with that assessment…Blender is pure evil…the UI pre-2.5 was abysmal…I'm constantly told it is better now, but whenever I try finding my way into that program, I fail at hurdles so basic that I don't want to try going further in…I guess I'll stay with 3ds Max for the rest of my life…

One thing confuses me…you wrote that the Blender character had joints for shoulders and upper arms whereas yours only had shoulders…I have to admit I can't follow you here…on your screenshots your character does have both shoulders and upper arms and the Rig in the Blender screen doesn't look so different…

Two joints, one separate axis? The shoulder's a socket joint or some such (I haven't taken anatomy in forever) so you can't model it using a single planar joint, which I'm assuming is how these joints work.

Joints typically have three degrees of freedom (rotations) unless you tell them not to be.
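Jordi's parse-once-then-serialize suggestion earlier in the thread is straightforward in any language. Here is a sketch in Python using struct to pack frames of floats into a flat binary blob; the layout (frame count, channel count, then raw float32s) is invented for illustration:

```python
import struct

# Pack a parsed animation (a list of equal-length float lists, one list
# per frame) into a flat little-endian blob: two uint32 counts, then
# float32 channel values frame by frame.

def pack_animation(frames):
    blob = struct.pack("<II", len(frames), len(frames[0]))
    for frame in frames:
        blob += struct.pack("<{}f".format(len(frame)), *frame)
    return blob

def unpack_animation(blob):
    n_frames, n_chan = struct.unpack_from("<II", blob, 0)
    offset, frames = 8, []
    for _ in range(n_frames):
        frames.append(list(struct.unpack_from("<{}f".format(n_chan), blob, offset)))
        offset += 4 * n_chan
    return frames
```

At load time this is a couple of unpack calls instead of a text parse, which is the whole point of the suggestion.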
In a basic character arm rig you have a clavicle bone, an upper arm bone and a lower arm bone…and as far as I can see both Shamus' Stickman and the Blender character have exactly these…advanced rigs have additional bones in the upper and lower arm to blend rotations, but none of these have anything to do with the shoulder itself…

Though I have to admit I still have much to learn concerning rigging, especially at the shoulder…I have yet to find a simple solution to prevent the shoulder from collapsing in on itself when the arm is raised…

Unfortunately I don't think Shamus has the luxury of choice. No matter how many negatives Blender has, there is one massive advantage it has over the competition – it's 100% free. I suspect that the explanation we've been promised will be that someone else is delivering the models for him, and they actually like the program.

It's funny how people have this massive UI lock-in on arts programs. All the graphics designers I've ever met work only with Photoshop, and God help you if you ever suggest using the GIMP, even though they have the same tools. For 3D modelling, I know a few editors which lack some operators and have others, but it's probably the same issue with investing too much into the specific UI.
Don’t get me started on GIMP…it might have more or less the same tools as PS, but I really hate that three-windows-design that can all separately lose focus and disappear behind other windows…also it tends to have problems with my Wacom… Blender is really a beast in terms of UI…it is not only cluttered with features noone ever uses, but also completely counterintuitive…I don’t know why, but they chose to do it completely different to everybody else…Max, Maya, C4D, XSI, etc all share some similarities in UI and basic functionalities…if you know one of these you can open each of them and know your way around them in a couple of minutes (at least or the basics), but not Blender…the only other app I know, that os similarly different, is zBrush, and even that is by far easier to learn than Blender… I think Blender might be acceptable to use as a first 3D program, but when you used a different one before, it is just frustrating to sit in font of the screen trying for hours to create stuff you can do in the other tool in minutes… “I really hate that three-windows-design that can all separately lose focus and disappear behind other windows” If that happens, it’s a glitch. The tool and layer windows have been special “always on top” windows for the last couple of years at least. I know it doesn’t work right on KDE (at least when I tried it on Ubuntu), but it works properly for me in Windows and Gnome. Don’t know which version it was, but the last time I tried it, I full-screen-ed the image and my tools and layers where hidden behind it…completely stupid…haven’t tried it since… Heh yeah, and I remember when running it normally, i.e. free floating windows, one mis-click and you bring something else in front of the whole thing, and have to bring each window separately back to front. At least that’s how I remember it, was some couple of years ago and it took me 5 minutes to say “screw this crap”. 
I’ve got the feeling GIMP was made by/for people who never have more than one program open, and a completely empty desktop… Well…with a multi-monitor setup, it might be worth a try…having the canvas on the primary fullscreen and the tools and other stuff on the secondary…but I still prefer Photoshop… Maybe they come from Linuxland and are used to giving big multi-window apps like the GIMP their own workspaces. Or, yeah, multiple monitors. It’s definitely nice on a dual-monitor setup, and for many window managers you can define window rules to make the program behave the way you want. The next version of GIMP will, however, offer a single-window-mode. Has GIMP caught up to the functionality of Photoshop 4 yet? Setting aside the terrible UI, someone who spends their days working with Photoshop probably doesn’t want to go hunting for plugins that sort-of implement features they’ve been using for over a decade. -j Pretty much this. I suspect some game designers do so much of their work in 3D modeling programs and the like that their Photoshop work is agonizingly simple, however, so why not use GIMP? If you’re a photographer or a print designer, the two programs stop looking identical. I think the thing about Blender is just that it requires you to think about using it differently, but then you become super efficient. It’s a but like Python in that respect, if you’ve only ever done Fortran before … (so maybe that’s not a represantative example?). It was pretty painful doing the simplest things, but after I had rewired my brain, it was pure joy and Fortran looks soo clunky now! One of these days I’ll have to learn Blender properly. Let’s see how that’ll be going. At least it will be a good exercise in not stopping to learn new stuff. Tell me how that’s going…I didn’t have much luck the last 15 times I tried :D I’ve learned to use it (albeit without prior knowledge of a different system). I can whip out game-quality (EG not-CG) models/UV maps in no time flat. 
So, it works, if you know it. Hard if you don't, though.

You're a loose cannon, Notch…but you're a damn good cop!

What? A left-handed system? I guess so. I like this because when I look down from above (as with the map) the origin is in the upper-left.

I've always wondered why graphing for math and such has the origin in the lower left, but it's in the upper left for computer graphics. I would guess that: for math, x is left to right because it's the way we read, and y is pointing up because, well, it seems logical to display something that increases as going up. For computer graphics, it originates from the CRTs of olde, where the beam started hitting the screen at the upper left corner. Of course I might be totally wrong on both accounts.

Actually, a more likely explanation for having the origin in the upper left seems to be an extension to your reading theory – the page starts in the upper left. In math, the origin is at the center. In computer science, the origin is at the beginning.

I don't understand why character models would care which way was east, though. Do they only have one axis of rotation? Is it less trivial to roll or pitch a character than it is to yaw them?

In maths (and in physics as well) any coordinate axis can point anywhere it likes; only usually you have x going left to right, because that's how people read. Where I work, we're sometimes juggling lots and lots of different coordinate systems. If I were to code something like hierarchical motion, every body part would have its own local coordinate system. One thing they'd all have in common, though: they're right-handed. If you have left-handed coordinate systems, all of vector mathematics falls apart (or at least you need to rethink everything you do, because all operations and laws are established for right-handed systems).
I heartily recommend always using right-handed systems (says the left-hander) if you plan on doing anything requiring coordinate transformations (like rotating stuff in 3D or such…)

For non-technical/mathematical people out there: a right-handed system is one you can construct by pointing the thumb of your right hand in the positive x direction, the index finger in the positive y direction and then pointing the middle finger at a right angle to both. You can turn that thing any way you want (actually, a good way to see how far you can twist your arm is to try and visualise some weird coordinate system), and it will always be right-handed. And you'll never manage to form the same system with your left hand, except if you're able to bend your middle finger 90° backwards.

For computer screens, I think coordinates just follow the raster beam of CRT tubes. Starting top left, going down linewise. Still no problem to construct a right-handed system there: just put your right hand on the screen, thumb right, index finger down … I'm not responsible for injuries :)

Actually I can quite easily form the exact same system with both my hands. Sure, the axes will be named differently, but that's just arbitrary labels. I do not understand how the names of the coordinates would change how the mathematics works.

The ordering of the coordinates matters, not the naming. If you change from right-handed to left-handed, some vector operations switch signs. You say x1 cross x2 equals x3, for your definition. If you switch x1 and x2, suddenly your x3 points in the other direction. Imagine turning a screw from x1 to x2; then the screw travels in the x3 direction. You could theoretically build up a left-handed mathematics, but then you will have lots of trouble adding some kinds of physics. When x1 is left to right and x2 is up to down, x3 is into the monitor. If x1 is east and x2 is south, then x3 goes down. In my work, x1: left, x2: up, x3: back is very popular.
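The screw-rule argument above is easy to check numerically. Here is a plain-Python sketch of the cross product showing that swapping the two inputs flips the sign of the result:

```python
# Component-wise cross product of two 3-vectors.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

x1, x2 = (1, 0, 0), (0, 1, 0)
right = cross(x1, x2)    # x1 cross x2 -> (0, 0, 1), the +x3 direction
flipped = cross(x2, x1)  # x2 cross x1 -> (0, 0, -1): the sign flips
```

This is the antisymmetry a x b = -(b x a) in concrete numbers.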
If x1 cross x2 equals x3, and we then switch the x1 and x2 axes of the system, x2 cross x1 still equals x3, right? You wrote "x2 cross x1 still equals x3, right?" It ends up being an arbitrary sign convention. Underlying it is a sort of pun: cross products and vectors have the same number of components, and transform similarly in some ways (e.g., you can add them: adding angular momentum works the same way as adding momentum), so it's a useful pun. But they aren't quite the same thing. Chasing the links early in (to things like pseudovector, bivector, and exterior product) will tell you more about this issue in a general, abstract way. Or, if you can be satisfied with a single concrete point to back up my claim that there's an arbitrary pun going on, note that if you transform your coordinate system as if by reflection through a mirror, then true vectors which were parallel remain parallel, but a true vector which was parallel to a cross product (e.g., a momentum which was parallel to an angular momentum) will now be antiparallel, running in the opposite direction. Nope, you cannot. Remember: thumb: positive x; index: positive y; middle: positive z. Ah… just look it up: it's impossible to make the same coordinate system with both hands; one axis will always be reversed, or two have to switch places (which is the same as inverting one and then rotating 90°). Oh, I've looked it up. Trust me. I just couldn't find any place that actually explains it. Just turning my left hand 90 degrees to the left I get the same system; the one from my right hand will have the coordinates x, y, z, and the one from my left will instead call them z, y, x, but I don't see how they're actually any different. Noooooo! The important part is that your thumb is always x, the index finger is always y and so on. You're not allowed to rename them! It's impossible to turn a left-hand system into a right-hand system only by moving/rotating.
As I said above, you can't do it without swapping two axes or inverting one. Which is what you did => you cheated! If you don't cheat, it is possible to build every possible 3D cartesian coordinate system with exactly one of your hands, which means each of those is either left- or right-handed. Actually, physicists have a hard time understanding why magnetic fields are right-handed. An electron moving along your right thumb through a magnetic field that goes in the direction of your right index finger experiences a force in the direction of your right middle finger. If you try it with your left hand, you get the direction of the force wrong. The reason it's not the same (and why you're not allowed to switch the axes) is that the order of the values is an integral part of what defines the coordinates that the axes measure. That is to say, (2,1) is a different point to (1,2). In the example where you 'get the same coordinate axes, but with different names', the reason they're different is because (1,2,3) in one will not map to the same place as (1,2,3) in the other. If a given point is not the same in both coordinate systems, then the coordinate systems are not the same, even if you manage to have the different axes line up. As for the electron and right-handed systems, that is plain wrong. The equations work correctly in either case. If you try it with your left hand, you get the opposite direction, but you also have the axes reversed. Meaning the force is still pushing the electron in the right direction. You've just changed the arbitrary labels of the points in space. It's not cheating. Who decided I can't rename my own coordinate system? @Zukhramm: You wanted the right-hand rule explained. That is the right-hand rule. In vector maths: a x b = -(b x a) (=> cross products are not commutative). In words: 1.
any two right-hand systems can be transformed into one another by means of rotation, translation and scaling (with non-negative factors!); the same goes for left-hand systems. 2. You cannot transform a right-hand system into a left-hand system without inverting an axis or swapping two of them. Swapping axes or inverting them will change the results of vector operations. My experience bore this out. Moving from right-hand to left-hand, I had to invert the X axis of incoming models. @Kian: Yes, yes, you could also compute magnetic (not electric! electric fields are scalar potential fields, magnetic ones are vector fields) forces with left-handed vector maths, or you could just rearrange the vectors and call it left-handed; that's just a matter of definition. The thing is this: if the field vector and movement vector are normal to each other, and the force is normal to both of them, then how does this force "pick" one of the two possible solutions (and always the same one)? This can be traced back to quantum mechanics and some other stuff I'm not familiar with, but the question remains: how do elementary forces and particles have a sense of left or right? You could easily imagine a universe following the same laws as ours, only with one coordinate inverted. There's no known reason it could not work. But it's completely incompatible with ours. @Zak McKracken: I never asked for the right-hand rule to be explained, I already knew it. Point 2 answers my question, however, so thank you. Amusingly, I was always taught in math class that two-dimensional systems start in the bottom left, but three-dimensional systems start in the top left. In fact, I read Shamus' axis description and found it odd that that was supposed to be odd. You're taking a very similar approach to getting animation working to my own: a blocked-out reference figure rather than a straight-up high-detail mesh. It does make it easier to work with, and I must admit, I get rather attached to the little guy.
But I hope you won't end up leaving the project at that stage – I'm looking forward to seeing what artstyle-specific human figure you can come up with. Still looks great. I could probably waste a month simply running an avatar around an infinite number of WoW-sized worlds. You'd think so, but in retrospect I spent more time poring over documents describing the Quake 3 model and map file formats (MD3 and BSP) before I was able to make an importer than I did reading Python tutorials before I was able to use it to pass 2 rounds in Google Code Jam. That said, you could always just use AssImp to import from any common format. It's not the fastest way to import, but the expectation is that when you want speed you can just load everything into custom data structures and dump it to disk with a little bit of pointer correction. On a lighter note, it makes a big difference seeing characters in your world. I really like how it's shaping up. That first screenshot looks fantastic for a couple months of spare-time work. "Notch is a loose cannon." Hahahahahahahahaha! Great job! It's nice that you're using Blender – my specialty. I agree, big ++ for Blender. "Low Poly Human? He's just this guy, you know?" The Wavatar PHP doesn't work! Hm? Wavatars were folded into the core of WordPress a couple of years ago. You no longer need the plugin. If you've got a version of WordPress made since 2008, you should have wavatars on your site. You're using one right now. It's right beside your comment. I know, but I want to use Wavatars for my blogs… You don't know Python? What!? You should really halt the project and go rectify that. In my experience, coding anything in Python takes, at most, half as long as in any other language. And really, learning Python probably won't even take a full day.
What would you recommend for a Python tutorial that's the least infected with either self-indulgent wackiness ("think of programming as being like waxing your car with a unicorn!") or open hostility toward people who don't already know Python? Bonus points if it avoids the massive, unexplained jump in difficulty that seems to be de rigueur in programming texts – you know, the part that inevitably comes where in one exercise you're, say, writing nested loops, and in the next you're supposed to write a program to calculate compound interest under several different tax structures using mechanics that have, at best, been hinted at in the explanatory text. (That's a real thing that happens in an online tutorial for either Python or Perl; I wish I could remember where I saw it.)

1: Get Python. [1]
2: Open IDLE. [2]
3: > help()
4: Try something
5: Go to 3

It's how I did it, along with Google and reading existing code. The vast majority of modules that come with Python are themselves written in Python (and therefore come with source). Shamus here would care about open(path, "wb") to open the file, and the 'struct' module to write packed data, e.g.:

import struct

def write_mesh(path, mesh):
    with open(path + '.vertex', 'wb') as file:
        for (x, y, z) in mesh.vertices:
            file.write(struct.pack('<fff', x, y, z))
    with open(path + '.index', 'wb') as file:
        for (i0, i1, i2) in mesh.triangles:
            file.write(struct.pack('<HHH', i0, i1, i2))

The C++ code would just be opening the files and dumping the bytes into the GL mapped buffer. You could make it more complicated, but I don't really see the point.

[1] I'd recommend Python 3 if you don't have a reason not to, but you probably won't be able to tell the difference at first.
[2] The editor/"IDE" (but not really) that comes with Python. It's no Visual Studio, but it's pretty good.

I envy people who can take this approach.
Becoming a full-time code monkey has taught me that I am, to some small degree, a social creature after all, at least insofar as I need to talk things out with people from time to time. ^^; I think Zak's link below might be a good place for me to start… Well, it certainly helps if you can program already; the more languages you know, the easier it is to learn a new one. python.org itself has a pretty comprehensive tutorial, too, although it does take a while to get into actual Python. I know snippets of many languages (going all the way back to BASIC and LOGO; my languages in college were C++ and VB); I'm realizing that what I'm weaker on is devising algorithms, which I don't get to practice much at work, rather than learning syntax and whatnot. My day (and evening, and middle-of-the-night) job involves quickly generating SQL and PL/SQL (yes, yes, go ahead and hate :-P ) on short notice, so I do a large number of variations on a small number of things repeatedly, and one's brain can get a bit calcified that way. That's what I think needs more emphasis: logical and algorithmic thinking. Not even necessarily in a CS context – back in college I took a semester-long logic course in the philosophy department as a liberal-arts elective, and it's stayed with me all these years; perhaps the best course I've ever taken. This is what helped me a lot: It's for people who can code already, know nothing about Python, and may or may not know about object-oriented programming. It was exactly right for me: very comprehensive, very practical, very interestingly written. Whether this is actually right for you depends heavily on your background, though. If you're a seasoned coder, you might not need such a long introduction (see Simon's post). If you've never coded before, it might be a bit steep. Alex, sounds like you ran into trouble with why's (poignant) guide to Ruby, or something similar. Nothing I've ever seen come out of the Python community was as… strange as that.
Anyways, here are two suggestions from someone who’s now gainfully employed as a programmer largely because of Python. Learn Python the Hard Way. It sounds intimidating, but really isn’t. Basically it’s a series of exercises that lead you through learning Python, and I’ve heard many good things about it from people new to programming. Zed Shaw’s also a good guy, so throw him a buck for the PDF. The other option is a great book from O’Reilly, “Learning Python” by Mark Lutz. I learned from this myself since LPTHW wasn’t around at the time, and I can attest to its greatness personally. As for the nested loops deal moving right into calculating compound interest, I’ve never seen something so strange in starter material, but keep an open mind. The exercises these tutorials lead you through tend to be more about getting the concepts into your head than being really practical. If you’re already an experienced programmer, well, try writing some pseudocode, but surround your conditionals with parens (), and open your blocks with a colon. You’ll probably be within shooting distance of having a working Python program, just fix whatever errors you get. Also, I’ve personally never touched IDLE, but I was using Linux at the same time I picked up programming, so I had easy access to the command line. If you’re on Windows, use IDLE. IDLE does give you some code hinting and multiple documents, so I’d probably still recommend it over “$ python” unless you have a strongly preferred editor. PS: you don’t need parens in conditionals: “if foo == 0:” works fine. A lot of the time, you don’t even need parens in tuples, “for key, value in mydict.items():” is the classic example, but also “a, b = b, a”. I gotta jump in and say you should learn some python too. I only learnt it late last year when I was at work one day and had the need to parse a microsoft tool generated file (internally it was XML) and link parts of it together and such into lua tables. 
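Returning to the struct-packing idea from earlier in the thread: the packed vertex format round-trips cleanly, which is easy to verify with `struct.iter_unpack`. The vertex values below are made up purely for demonstration; `'<fff'` (little-endian, three 32-bit floats) matches the format the mesh-export snippet uses:

```python
import struct

# Pack a few vertices the same way the mesh exporter would.
vertices = [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)]
blob = b''.join(struct.pack('<fff', x, y, z) for (x, y, z) in vertices)

# struct.iter_unpack walks the blob record by record, which is what a
# loader on the reading side would effectively do.
decoded = [v for v in struct.iter_unpack('<fff', blob)]
print(decoded)  # [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)]
```

Each record is exactly 12 bytes (3 × 4-byte floats), so a C++ loader really can memcpy the whole blob straight into a GL buffer.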
My lead at work told me to do it in Python; I mentioned not knowing the language, and he told me to learn it. Half a day later I had a fully functioning script that is still a part of our tool chain. It really bothers me the way whitespace matters in the language, but as long as you're cognizant of the fact, it really is easy to learn and fast to code in. Also, I'm going to agree with what someone up that ^ way said: write a script to parse whichever format you export into a binary blob your game can read. It'll help with load times and keep your code base less cluttered. It also means you could ditch that format should you choose, just by writing a different conversion script. In response to the last picture: "Hey, Sticklyman, whaaaaat are you doing?" [pause while Sticklyman gestures] "Correct me if I'm wrong but are you asking for a challeeeeeeenge?!" /THANK YOU/. I hate people who use Y for up/down and Z for north/south. It doesn't make sense, and it's WRONG (according to me, about 3 of my teachers in college and my physics teacher in high school). I don't care if you want positive moment to make you go down, or something, but Z should always be up/down, Y north/south, and X east/west. Everyone seems to have it backwards. The only relevant attribute for a reference system is whether it's left-handed or right-handed. The rest is a matter of how you orient your models, whether you align the terrain with the X-Y plane or the X-Z plane. :D Tell me Sticky the Stickman does not look like the Fonz in that last picture. I dare you. He does not look like the Fonz in that last picture. Reminds me more of a scout from TF2. He's missing the leather jacket. Add that, and you've got a sure-fire Fonz. Hi Shamus, I worked on modeling the human body as an engineer, and I may have a clue to your shoulder problem. Keep your arm still, in whatever position it is, and try moving your shoulder up and down. That is, move the shoulder blade, but do not rotate the shoulder joint. It moves, right?
The "shoulder joint" would be better described as the clavicle. It is placed where the horizontal piece of the shoulder connects to the spine. And its rotations allow the position of the shoulder itself to move without moving the arm, allowing you to shrug your shoulders and throw baseballs. The "upper arm joint" is the one we would normally call a shoulder joint, and is the ball-and-socket joint for the shoulder, allowing the arm to rotate any which way. The problem seems to come from a misuse of the naming system you mentioned. Here these two joints are named after the bones they move (the shoulder joint literally moves the shoulder, but isn't at the shoulder; the upper arm joint literally moves the arm, but isn't part of the arm). Maybe that will help. In other news, this is my first post here on this site, and I want to say I love to see what you're doing here. It's inspiring me to start up my own project as well. Although, one question (for anyone): I currently use a hosts file to block all ads everywhere, which I feel like I have to do for the usual reasons. But a handful of sites (like yours) both moderate their ads (so they are safe) and have earned my support. Anyone know a way to block ads just for everyone else's site, or to support Twenty Sided in another way? AdBlock for Google Chrome allows you to turn off ad blocking for the site or domain you're currently on, adding it to a whitelist. Not sure about the current state of things in other browsers. If you would like to support the site in some other way: The Money Goes in the Money-Place. Edit: I was actually intending to reply to "TheBecker"… sorry about that. I am not a programmer. I took one BASIC class in high school then promptly forgot everything. Even so, I find these programming discussions of yours fascinating, Shamus. When are you putting in the (insert some graphic thing that needs the newest video card)? Z is east/west?
Maybe Notch is reminiscing about his school days, where X points right, Y points up, and Z points towards you? (If he's looking from east to west.) For mapping I much prefer bottom left as origin; it's built into my
https://www.shamusyoung.com/twentysidedtale/?p=12229
================ Summary / TL;DR ================
I have an OpenMV M7 board. Memory allocation error. Unsure how to produce frozen code.
Questions:
- Can I follow methods 1 and 2 listed below for my OpenMV M7, or do I lack the correct files?
- If so, can I do anything about it, or is it impossible for an OpenMV M7?
- Is there an option 3? Ideally I'd like to cross-compile my frozen code and then load it onto the board.
- => apparently it's possible to build a frozen-code-capable firmware. The OpenMV M7 uses the OPENMV3 firmware version
- => if I understand correctly, there is no way to simply load code that's been frozen on Ubuntu onto the OpenMV
- => threading is not supported so far; this seems to be the root cause of the memory problem

THE PROBLEM
============
I use Ubuntu 4.15.0-51-generic x86_64. I have an OpenMV M7 board with the default firmware. It's barely used, mostly for running a few examples in the OpenMV IDE and testing a UART-based GPS logger through minicom. I've written what seems to be pretty memory-thrifty code (see the end of this post) compared to some of the imaging sample applications that seem to run fine, but when testing it, I get this error:

MemoryError: memory allocation failed, allocating 4096 bytes

I don't know exactly why; I've only been working with this board for a few days. But one fix that comes to mind would be to run frozen code. Even if I could use another method now, the bigger programs that will follow will require frozen code.

So far, I've read about 2 methods of going about it:
- Building and loading a new firmware:
- Building (and loading?) a purpose-built mpy-cross binary for the STM32F765VI:
- Problem with method #1: in the github firmware folder, I don't see an OPENMV7 folder. It stops at OPENMV4. Although I don't know if the "M7" in "OpenMV M7" means that I have to have an "OPENMV7" firmware version. Is that the case?
- Problem with method #2: same thing, really.
In the github board folder, I can see a few STM32F7 chips supported, but not the STM32F765VI. Do I need an exact match, or can I use another STM32F7* version to produce a working mpy-cross?
- Problem with #1 and #2: If I understand correctly, these 2 options require flashing a new firmware onto the board, but I'm uneasy about it. I would feel much more comfortable if I could cross-compile on Ubuntu and then upload frozen code files onto the OpenMV. Is that feasible?

MY CODE (nothing else after this section)
========
The micropyGPS.py file can be found at ... ropyGPS.py
And here is my code (it could be faulty; I haven't had the chance to really test it). The memory exception happens when calling gpst.start():

from micropyGPS import MicropyGPS
from machine import UART
import _thread
import time  # needed by join() below

class JwGPSThread(MicropyGPS):
    UPDATE_PERIOD_SEC = 1
    TIMEOUT_SEC = 1

    def __init__(self, uart):
        # using GMT +1 as the time offset
        MicropyGPS.__init__(self, 1)
        self._running = False
        self._thread_joined = False
        self._uart = uart

    def start(self, period_sec=None):
        if period_sec is None:
            period_sec = self.UPDATE_PERIOD_SEC
        _thread.start_new_thread(self.update_loop, ())

    def stop(self, wait_for_join=False):
        self._running = False
        if wait_for_join:
            return self.join()
        else:
            return True

    def join(self, timeout_sec=None):
        if timeout_sec is None:
            timeout_sec = self.TIMEOUT_SEC
        start_time = time.time()
        while True:
            if time.time() > start_time + timeout_sec:
                return False
            if self._thread_joined:
                return True

    def update_loop(self, period_sec=None):
        if period_sec is None:
            period_sec = self.UPDATE_PERIOD_SEC
        self._running = True
        self._thread_joined = False
        # ... (rest of update_loop truncated in the forum post)

# ... (setup and start of the interactive while-loop, which reads a
# command into s, were truncated in the forum post)
    if s == "q":
        break
    elif s == "h" or s == "help":
        print("q, h/help, u, p, s, tstart, tstop, tjoin, tshow")
    elif s == "p":
        printout = not printout
    elif s == "tstart":
        gpst.start()
    elif s == "tstop":
        gpst.stop()
    elif s == "tjoin":
        print("Joining... ", gpst.join())
    elif s == "tshow":
        print("LAT=", gpst.latitude)
        print("LON=", gpst.longitude)
        print("TIM=", gpst.timestamp)
    else:
        print("Unknown command")
# end while
gpst.stop()
print("Joining... ", gpst.join())
http://forums.openmv.io/viewtopic.php?f=6&t=1363
In this article, we will explore Liskov's substitution principle, one of the SOLID principles, and how to implement it in a Pythonic way. The SOLID principles entail a series of good practices to achieve better-quality software. In case some of you aren't aware of what SOLID stands for, here it is: Single responsibility, Open/closed, Liskov's substitution, Interface segregation, and Dependency inversion. The goal of this article is to implement proper class hierarchies in object-oriented design, by complying with Liskov's substitution principle. Liskov's substitution principle (LSP) states that there is a series of properties that an object type must hold to preserve the reliability of its design. More formally, this is the original definition (LISKOV 01) of LSP: if S is a subtype of T, then objects of type T may be replaced by objects of type S, without breaking the program. Now, this type might as well be just a generic interface definition, an abstract class or an interface, not a class with the behavior itself. There may be several subclasses extending this type (described in Figure 1 with the name Subtype, up to N). The idea behind this principle is that, if the hierarchy is correctly implemented, a client class has to be able to work with instances of any of the subclasses without even noticing. These objects should be interchangeable, as Figure 1 shows: Figure 1: A generic subtypes hierarchy. This is related to other design principles we have already visited, like designing for interfaces. A good class must define a clear and concise interface, and as long as subclasses honor that interface, the program will remain correct. As a consequence of this, the principle also relates to the ideas behind designing by contract. There is a contract between a given type and a client. By following the rules of LSP, the design will make sure that subclasses respect the contracts as they are defined by parent classes. There are some scenarios so notoriously wrong with respect to the LSP that they can be easily identified by tools such as mypy and pylint.
By using type annotations throughout our code and configuring mypy, we can quickly detect some basic errors early, and check basic compliance with LSP for free. If one of the subclasses of the Event class were to override a method in an incompatible fashion, mypy would notice this by inspecting the annotations:

class Event:
    ...
    def meets_condition(self, event_data: dict) -> bool:
        return False

class LoginEvent(Event):
    def meets_condition(self, event_data: list) -> bool:
        return bool(event_data)

When we run mypy on this file, we will get an error message saying the following:

error: Argument 1 of "meets_condition" incompatible with supertype "Event"

The violation of LSP is clear: since the derived class is using a type for the event_data parameter that is different from the one defined on the base class, we cannot expect them to work equally. Remember that, according to this principle, any caller of this hierarchy has to be able to work with Event or LoginEvent transparently, without noticing any difference. Interchanging objects of these two types should not make the application fail. Failure to do so would break the polymorphism on the hierarchy. The same error would have occurred if the return type was changed for something other than a Boolean value. The rationale is that clients of this code are expecting a Boolean value to work with. If one of the derived classes changes this return type, it would be breaking the contract, and again, we cannot expect the program to continue working normally. In this case, the problem would not lie in the logic itself (LSP might still apply), but in the definition of the types of the signature, which should read neither list nor dict, but a union of both. Regardless of the case, something has to be modified, whether it is the code of the method, the entire design, or just the type annotations, but in no case should we silence the warning and ignore the error given by mypy.
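As a sketch of the union-type fix just mentioned: widening the parameter annotation across the whole hierarchy keeps the subclasses interchangeable. This is one possible resolution, not the author's only option, and the small identify helper is a hypothetical client added here to show the interchangeability:

```python
from typing import Union

class Event:
    def meets_condition(self, event_data: Union[dict, list]) -> bool:
        return False

class LoginEvent(Event):
    # Same signature as the parent, so mypy no longer complains and
    # callers can pass either a dict or a list to any subclass.
    def meets_condition(self, event_data: Union[dict, list]) -> bool:
        return bool(event_data)

def identify(event: Event, data: Union[dict, list]) -> bool:
    # A client that works with any subclass transparently: the essence of LSP.
    return event.meets_condition(data)

print(identify(LoginEvent(), {"user": "alice"}))  # True
print(identify(Event(), {"user": "alice"}))       # False
```

The point is not the Union itself but that base and derived classes now honor the same contract, so substituting one for the other cannot surprise a caller.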
Note: Do not ignore errors such as this by using # type: ignore or something similar. Refactor or change the code to solve the real problem. The tools are reporting an actual design flaw for a valid reason. A LoginEvent must be an Event, and so must the rest of the subclasses. If any of these objects break the hierarchy by not implementing a message from the base Event class, implementing another public method not declared in this one, or changing the signature of the methods, then the identify_event method might no longer work. Another strong violation of LSP is when, instead of varying the types of the parameters on the hierarchy, the signatures of the methods differ completely. It is a good idea to use both mypy and pylint to catch errors such as this one early on. While mypy will also catch these types of errors, running pylint as well gives more insight. In the presence of a class that breaks the compatibility defined by the hierarchy (for example, by changing the signature of the method, adding an extra parameter, and so on), such as the following:

# lsp_1.py
class LogoutEvent(Event):
    def meets_condition(self, event_data: dict, override: bool) -> bool:
        if override:
            return True
        ...

pylint will detect it, printing an informative error:

Parameters differ from overridden 'meets_condition' method (arguments-differ)

Once again, like in the previous case, do not suppress these errors. Pay attention to the warnings and errors the tools give and adapt the code accordingly. The LSP is fundamental to good object-oriented software design because it emphasizes one of its core traits: polymorphism. It is about creating correct hierarchies so that classes derived from a base one are polymorphic along the parent one, with respect to the methods on their interface. Carefully thinking about new classes in the way that LSP suggests helps us to extend the hierarchy correctly. We could then say that LSP contributes to the OCP (the open/closed principle). The SOLID principles are key guidelines for good object-oriented software design.
Learn more about SOLID principles and clean coding with the book Clean Code in Python, Second Edition by Mariano Anaya.
https://www.datasciencecentral.com/profiles/blogs/liskov-s-substitution-principle-in-solid
One option that I use is to create a singleton in the module that I want to be global, at the first import of that module. So let's take your mySitePool module for example:

class mySitePool:
    def __init__(self):
        # ... set up the pool here ...
        pass

# module-level singleton: this line runs only on the first import
mysitepool = mySitePool()

Now, in your mod_python request handler, when you add the import of mySitePool, the global singleton object of mySitePool will be created. Since the module is imported, mod_python will cache it, so it will not get imported again and the singleton will not be recreated. So your handler would look like this:

import mySitePool

def handler(req):
    mySite = mySitePool.mysitepool.get()
    result = mySite(req)
    mySitePool.mysitepool.put(mySite)  # return the site to the pool before returning
    return result

Let me know if this helps.

Hozi

Sébastien Arnaud wrote:
> Hi,
>
> I have attempted to write my own little framework ("heracles" I named
> it ;) to develop applications faster using mod_python and it is coming
> along great. Mainly I am using the framework to keep handy a bunch of
> little utils functions/objects I have written along the years while
> working with mod_python, such as a pool of connections for DB
> connections, XML/XSLT rendering, Cheetah template rendering, etc...
>
> I am hitting the wall on a very stupid problem that I thought I had
> working. I can't get an object declared as global to remain in memory...
> Under the Apache Pre-fork MPM model, this I understand, but under the MPM
> Worker model, where I specify to start 10 threads in one process, I
> kind of don't get it... Maybe some of you will be able to spot what I
> am doing wrong.
>
> Basically, what I am trying to do is to keep the object mySitePool in
> memory since each Site object (heracles.site) has everything needed
> by a thread to process a request. At the first request the mySitePool
> object should get initialized, but for any other requests it should
> simply be a matter of retrieving one Site object via the queueing
> mechanism to process one request.
> Right now the problem is that I am
> able to see for each request one entry in the apache log file that
> the mySitePool is being initialized...
>
> Thank you in advance for any pointers or solutions you may have!
>
> Apache2 config (MPM worker):
> ---------------------------
> <IfModule worker.c>
>     StartServers 1
>     MaxClients 10
>     MinSpareThreads 10
>     MaxSpareThreads 0
>     ThreadsPerChild 10
>     MaxRequestsPerChild 0
> </IfModule>
>
> Apache2 mod_python (.htaccess):
> -------------------------------
> PythonInterpPerDirective On
>
> PythonOption "SiteName" "Heracles Test site"
> PythonOption "SiteDescription" "Testing the framework!"
> PythonOption "SiteVirtualPath" "/heracles/"
> PythonOption "SiteViewPath" "/xxx/webapp/views/"
> PythonOption "MySQLhost" "xxx"
> PythonOption "MySQLuid" "arnaudsj"
> PythonOption "MySQLpwd" "xxx"
> PythonOption "MySQLdb" "test"
>
> SetHandler python-program
> PythonHandler heracles.site::handler
>
> PythonDebug On
> PythonAutoReload On
>
> heracles.site.py:
> -----------------
> [...]
> def handler(req):
>     """
>     Standard mod_python handler code for Heracles Web Application Framework
>     The SitePool is initialized based on Apache MPM setting
>     """
>     global mySitePool
>     try:
>         mySite = mySitePool.get()
>     except NameError:
>         req.log_error("Initializing Heracles WAF at %s with %s thread(s)"
>                       % (req.document_root(), apache.mpm_query(6)))
>         mySitePool = Pool(Constructor(Site, req), apache.mpm_query(6))
>         mySite = mySitePool.get()
>     return mySite(req)
>     mySitePool.put(mySite)
> [...]
>
> Cheers,
>
> Sébastien
>
> _______________________________________________
> Mod_python mailing list
> Mod_python at modpython.org
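A runnable, framework-free sketch of the pattern Hozi describes may make it easier to follow. All names here are hypothetical stand-ins: queue.Queue plays the role of the thread-safe pool, the string "site-N" stands in for a Site object, and handler mimics the mod_python handler's get/use/put cycle:

```python
import queue

class SitePool:
    """A tiny pool of 'site' objects; the instance below is created once."""
    def __init__(self, size=4):
        self._q = queue.Queue()
        for i in range(size):
            self._q.put("site-%d" % i)

    def get(self):
        return self._q.get()

    def put(self, site):
        self._q.put(site)

# Module-level singleton, as in the mailing-list answer: this line runs
# only the first time the module is imported, and cached imports reuse it.
pool = SitePool()

def handler(site_pool):
    site = site_pool.get()
    try:
        # a real handler would run the request through the site here
        return "handled by " + site
    finally:
        # try/finally guarantees the site goes back even if handling raises
        site_pool.put(site)

print(handler(pool))  # handled by site-0
```

The try/finally is the one addition over the original pattern: it guards against leaking a pooled object when request handling throws.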
http://modpython.org/pipermail/mod_python/2005-June/018294.html
ClassType extends ReferenceType, which should give you what you want. See ClassType.invokeMethod.

I also use PyScripter, but I use PyLint externally to check my code, which helps me add documentation and fix other little issues later, rather than having PyScripter show me red lines all the time.

When you execute it in PyScripter, I assume you wrote

import numpy as np
a = np.array([0, 1, 2, 3])
a

in main. The function will return None (the default return value of a function without a return), which is why you see nothing in the PyScripter interpreter. Try return a instead and you should see it.

I don't think that this is possible, as PyScripter does this based on a specified Python engine. For portable installations these settings are used, and this is also what the Portable Python shortcut PyScripter-Portable.exe does to set up the environment and configure the Python engine to be used.

The easiest way I know (on Windows) is, having used the installer executable, to select from the Start menu's PyScripter folder whichever version of Python I want to run.

You can modify the PYTHONPATH (under Pyscripter>>Tools, for instance)
You can modify your External Python Interpreter with Pyscripter>>Modify Tools>>Python &Interpreter>>Modify
You can modify the default Python engine used with Pyscripter>>Options>>IDE Options>>Python Interpreter>>Python Engine Type

UPDATE: I was executing the test function from PyScripter, and it was not working, hence I thought there was a problem with the function. I then tested the function in the Python command line and it worked as well. I ran it via the command line and via the function in PyScripter (screenshots not preserved). The function via PyScripter did not write to the file, therefore I am assuming the problem lies with PyScripter and not with the function, as previously thought.

Solution: It turns out PyScripter has an 'External Run' option; choosing this made the logging work correctly!
I don't know whether it is the correct solution to this problem or not, but this is what I tried and it was successful: go to Tools ---> Options ---> IDE Options, then in the Code Completion part add cv2 to Special Packages. This enabled the auto-complete for opencv in the editor! I don't know if this is the best way to do it, but those are the two ways I did it: WAY 1 (the best of the two): go to PyScripter>>Tools>>Options...>>Custom Parameters... and add the following values: 1. PythonDir = C:\Program Files\CustomPythonInstallation 2. PythonExe = C:\Program Files\CustomPythonInstallation\python.exe 3. PythonVer = 3.3.3 Note: adapt the Name = Value pairs above to your case, and close the window with the OK button. Now select PyScripter>>Run>>Python Engine>>Remote and you are ready to go. WAY 2 (the more temporary solution): go to PyScripter>>Run>>Configure External Run..., set the "Application:" field to your python.exe file, and close the window with the OK button. Make sure you run your scripts with PyScripter>>Run>>External Run (Alt+F9). I hope this helps. I'm guessing that the version of Python that PyScripter is running is different than the one you get in EPD/Canopy (Python is compiled C code, so the version matters). There's another question about controlling the version of Python used with PyScripter. At first it sorta sounds like a path problem, but if you're able to run a debugger tutorial that's probably not the case. The behavior you're seeing if the program isn't compiled with the debug (-g) flag doesn't quite match my experience, but let's go ahead and check/set the debug flag anyway. Select Project/Edit Project_Properties. Click the Switches tab. Click the Gnatmake tab. Check "Debug information". Note that a -g now shows up in the text box at the bottom of the tab page. Click OK on the dialog. Recompile your code so that it uses the changed option: select Build/Clean/Clean All, and OK any dialog that pops up. Then do your build (you can just press F4).
From, a direct method is "any of static, private, or constructor". However, static methods get their own invoke-static opcode, so invoke-direct is used for constructors and private methods. You are probably seeing locking on the session state. Check here for instructions on how to turn off locking: They're functionally equivalent. -Command is the only parameter the cmdlet takes that isn't in the CommonParameters set, and the first one (by default, since it's the only one) when used positionally. All you're doing with the second example is being explicit with naming your parameter, instead of relying upon position. That's a good habit to get into: verbose, but future-proof, and it makes your intention crystal clear. There is. If you hit F12 in later versions of IE, you'll get IE's dev tools. They're nowhere near as good as what's in Chrome or FF, but you can check ajax requests, inspect the dom, inspect and set breakpoints in JavaScript code, and even emulate previous versions of IE. Debuggers depend on some special capability of the hardware, which must be exposed by the operating system (if any). The basic idea is that the hardware is configured to transfer control to a debugger stub either after every instruction of the target program, or after certain types of instructions such as system calls, or those meeting a hardware breakpoint condition. Typically this would look like an interrupt, supervisor exception, or the like - a very platform-specific detail. As mentioned in comments, on Linux you use the ptrace functionality of the kernel to interact with the debugger support provided by the hardware and the kernel, abstracting away a lot of the hardware-unique detail and managing the permission issues. Typically you must either be the same user id as the process being debugged. Turns out the debugger was not the problem. Being connected to power was the problem. My device, a Galaxy Note 2, behaves differently when connected to power.
Connecting the USB cable also connected power, and so the problem would not occur. The solution was to switch my debugging connection from USB to TCPIP (over wifi). Now the problem is occurring with or without the debugger. You can use Firebug Lite. Simply include the following code in your master page: <script type="text/javascript" src=""></script> Debugging optimized code may "jump around", as some functions become inlined. The most telling thing is that local variables usually get optimized away, giving a message to that effect when trying to read them. If the jumping seems to make very little sense, though, then it's more likely you have the wrong PDB (which maps to line numbers) or source (which has the line numbers). Make a copy of /usr/lib/perl/perl5db.pl in some other directory, change it to your heart's content, and debug your scripts with perl -I/dir/that/contains/your/perl5db.pl/copy/ -d your_script.pl The -I<dir> switch will get perl to look in the directory you choose to find libraries before it looks in its default directories. Here are some websites that might suit you: compileonline, writecodeonline, viper-7. debugger is a keyword and it's not possible to delegate its special meaning to a variable (you will find a similar question about aliases here). You can wrap debugger with a function: d = function() { debugger; }; and invoke it using d(). It will shorten the syntax, but you will have to always travel one level up in the call stack to get to the code that you are actually trying to debug. In my opinion you should simply set up a snippet (or a "live template") in your editor/IDE that will replace the key combination d + TAB with debugger;. Snippets in Sublime, live templates in WebStorm, code templates in NetBeans. As far as I know there isn't a debugger for C++ on Notepad++. You might wanna check out some IDEs like Eclipse with CDT, or you can try Microsoft's Visual Studio if you are using Windows.
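Python has a close analogue to the debugger-keyword wrapper described above: since 3.7, breakpoint() is a built-in, and its hook can be swapped via sys.breakpointhook (or the PYTHONBREAKPOINT environment variable), so no d() wrapper is needed. A small sketch, with the interactive pdb hook replaced by a recording stub so the example runs non-interactively:

```python
import sys

calls = []

def recording_hook(*args, **kwargs):
    # Stand-in for pdb.set_trace(): records the call instead of pausing.
    calls.append("breakpoint hit")

# By default breakpoint() drops into pdb; here we substitute our own hook.
sys.breakpointhook = recording_hook

def buggy():
    x = 41
    breakpoint()  # with the default hook, the debugger would stop here
    return x + 1

print(buggy())  # 42
print(calls)    # ['breakpoint hit']
```

Unlike the JavaScript d() trick, the debugger stops at the breakpoint() call itself, so there is no need to travel one frame up the call stack.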
Copied from comments to preserve the answer as an answer (comments are not permanently displayed): PEP 338 -- Executing modules as scripts – iMom0 1 hour ago The doc about the -m flag: docs.python.org/2/using/cmdline.html#cmdoption-m – zhangyangyu 1 hour ago ah. main.py. That's what I was missing. Thanks. – numb3rs1x 1 hour ago Go to Extras -> Properties -> Compile & Run Section. See on the Compiler tab if there is a compiler selected. If not, select a compiler and set the corresponding debugger. Otherwise you may have to manually add a compiler and locate the path to your debugger (MSVC and CDB, for example). To debug your app, it needs to run in DevMode (code runs as Java in the JVM, and communicates with the browser through the plugin you installed there). To trigger DevMode in the browser, the URL needs to contain gwt.codesvr in the query string, with the value being the host and port the DevMode app is listening on. -startupUrl passed to DevMode only makes it easier to get the URLs right, as DevMode will then append the appropriate gwt.codesvr to the URL and you can just copy/paste the resulting URL to your browser (or ask DevMode to directly open that URL in your browser). If you have several HTML host pages and move between them, then for a seamless experience you have to propagate the gwt.codesvr part of the URL to the other page. See If you're on ruby 2.0 (and this is not expected behaviour) you should try pry-byebug instead of pry-debugger -- ruby 2.0 changed some debugging API and the debugger gem (which pry-debugger relies on) sometimes acts strangely. Unless I'm mistaken, that comparison you're wondering about is comparing EDI (which is 0) with the second argument, a string pointer. It's checking to see if the string is null. Here's your MessageBoxExA: 750AFCD6 /$ 8BFF MOV EDI,EDI ; ID_X user32.MessageBoxExA 750AFCD8 |. 55 PUSH EBP 750AFCD9 |. 8BEC MOV EBP,ESP 750AFCDB |. 6A FF PUSH -1 ; /Arg6 = -1 750AFCDD |.
FF75 18 PUSH DWORD PTR SS:[EBP+18] ; |Arg5 750AFCE0 |. FF75 14 PUSH DWORD PTR SS:[EBP+14] ; |Arg4 750AFCE3 |. FF75 10 PUSH DWORD PTR SS:[EBP+10] ; |Arg3 750AFCE6 |. FF75 0C PUSH DWORD PTR SS:[EBP+0C] ; |Arg2 750AFCE9 |. FF75 08 PUSH D Some things to think about before enabling the werkzeug debugger: when you enable the werkzeug debugger, anyone triggering an exception will be able to access your code and your data (including database passwords and other sensitive credentials). Be careful and don’t leave it enabled longer than necessary, or add an extra protection layer to bar unauthorized users! Once you’re done with the debugging, restore your old wsgi.py file and push your code again (you can leave werkzeug in your requirements.txt if you like; that does not matter). Here's what you can do to set it up: 1) add the following to your wsgi.py # The following lines enable the werkzeug debugger import django.views.debug def null_technical_500_response(request, exc_type, exc_value, tb): raise exc_type, exc_va Maria-Helena, I just hit this issue as well and discovered that facebook's scraper was appearing as an inbound JSON request. Since that particular route was set up to handle both JSON and HTML responses, FB was getting a big gnarly JSON blob instead of the actual web page. Not sure if this solves your exact problem, but hopefully sparks some fresh ideas! I think what you are looking for is Array.prototype.push.apply( results, newContext.querySelectorAll( 'div') ); The push method you are looking for is a prototype method of the Array type. Or, as a shorthand: [].push.apply( results, newContext.querySelectorAll( 'div') ); First question: PDB files do not contain breakpoint info, they just have program information that enables debugging (matching statements with source code lines, etc ...) Second: yes, that's the point of the remote debugger, so you can debug a remote process using your local Visual Studio.
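The werkzeug warning above is about middleware that intercepts unhandled exceptions; the real thing is werkzeug.debug.DebuggedApplication, which additionally serves an interactive console (hence the security warning). A dependency-free sketch of just the interception idea, with the app names hypothetical:

```python
import traceback

def debug_middleware(app):
    """Wrap a WSGI app so unhandled exceptions become a plain-text
    traceback page instead of the server's bare 500 response."""
    def wrapped(environ, start_response):
        try:
            return app(environ, start_response)
        except Exception:
            body = traceback.format_exc().encode("utf-8")
            start_response("500 INTERNAL SERVER ERROR",
                           [("Content-Type", "text/plain")])
            return [body]
    return wrapped

def broken_app(environ, start_response):
    raise ValueError("boom")  # simulate an application bug

application = debug_middleware(broken_app)
```

Showing tracebacks (let alone werkzeug's interactive console) to anonymous visitors leaks credentials and code, which is exactly the point the answer makes: turn it off as soon as the debugging session is over.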
You should use po [self position] Dot syntax doesn't work well with GDB. Upgrading to LLDB/LLVM is a good idea as well. You can also use NSLog(@"%@", self.position); in your code, or create a breakpoint, edit it, and add this as an expression: expr (void)NSLog(@"%@", self.position) See this link for more info: You cannot find out if your process is being stopped in the debugger, basically because your code is not being executed. As for the special case of expression value evaluation in the debugger (Visual Studio), the following happens: the active-in-debugger thread of your process is hijacked by the debugger, and then some code generated by the Visual Studio expression evaluator gets executed by your thread. After the evaluation is complete, your thread is stopped again and its state is left unchanged, as it was before evaluation. This process is called "funceval". Theoretically, you could somehow analyse the call stack trace in your function to find out if it is being called through funceval, but I suspect this is hardly possible due to the unmanaged nature of the CLR debugger. You can read more about fun roliu pointed me in the right direction and I figured it out. The route debugger has a web.config file which references the v2.0.0.0 version of system.web.webpages.razor and v4.0.0.0 of mvc. I changed all references of razor v2 to razor v3 and changed mvc 4 to mvc 5. You don't need to add declare(ticks) to all your files; one entry point would be enough: <?php function my_tick() { echo 'tick'; } register_tick_function('my_tick'); declare (ticks=1) { include("lib.php"); echo "1"; test(); } and lib.php: <?php echo "2"; function test(){ echo "3"; } and as you are looking for a code-based solution I assume your sources do provide a single entry point. The line will always be printed in the logcat. Just hit run until it stops breaking, and check the logfile. It gives the complete stack trace there. Ok, I'm going to be the geezer jerk for a minute. A debugger is a crutch.
Your first reaction as a programmer should always be to check the log file. The log file will generally give context, and you won't always be able to run a debugger (you're running on an interrupt, the code is running on a remote machine you don't have access to, the code is running in a language that doesn't have exceptions, the crash is a hard crash and brings down the computer, etc). Checking the debugger should be step 3 or 4, not step 1. Yes, you can access the values from this, and any other value that is accessible from the scope where the debugger stopped, but you should start the debugger's repl by typing that command in the debugger, and then type there the value that you would like to watch (this.constructor.name). To leave the debugger's repl, press ctrl + c and you will get back to the debugger prompt. Note that your constructor will only have a name if it is a named function; otherwise it will be an empty string, for example var Processor = function Processor() {}; instead of var Processor = function () {};. Here is one way. If the following command displays a non-empty string, the process whose process id is pid is being traced: pflags pid | grep flttrace On older Solaris releases, pflags is in /usr/proc/bin. You shouldn't see a crash if this happens, or rather, a crash shouldn't be caused by/related to this warning. There are two ways to build your app with debug information on Mac OS X / iOS: "DWARF" and "DWARF with dSYM" (these are options in your Xcode project Build Settings). "DWARF" means that the debug information exists in your .o (object) files. It is not copied into the final executable binary for your app. Your app binary has pointers back to the debug information in the object files. This helps to speed up the link & run cycle. But for it to work, your object files need to be located in the same place as when you built your app. Copying your app to another computer would likely break this.
Removing your build intermediates would result in the same problem. The "DWARF"
http://www.w3hello.com/questions/How-to-invoke-Pyscripter-debugger
Red Hat Bugzilla – Bug 121185 missing library refs Last modified: 2007-11-30 17:10:40 EST
Description of problem: When running up2date in graphical mode, I get the error:
[jhanley@lava projects]$ up2date
Traceback (most recent call last):
  File "/usr/sbin/up2date", line 1198, in ?
    sys.exit(main() or 0)
  File "/usr/sbin/up2date", line 852, in main
    from up2date_client import gui
  File "gui.py", line 16, in ?
  File "/usr/share/rhn/__init__.py", line 43, in ?
  File "/usr/share/rhn/__init__.py", line 43, in ?
ImportError: /usr/X11R6/lib/libXext.so.6: undefined symbol: XESetFreeFontXESetCreateFont
Version-Release number of selected component (if applicable): XFree86-libs-4.3.0-55, up2date-4.1.21-3, kernel-2.4.22-1.2174.nptl
How reproducible: every time
Steps to Reproduce: 1. Run up2date with these libraries
Actual results: exit, missing library refs
Expected results: run up2date in X mode
Additional info: This is a minor problem since I can still run up2date with --nox and it will run from the command line.
Seems as though I was able to fix this. I had to force a reinstall of up2date-4.1.21-3.i386.rpm, up2date-gnome-4.1.21-3.i386.rpm, XFree86-libs-4.3.0-55.i386.rpm, and gtk2-2.2.4-5.1.i386.rpm. It seems that some of the files were corrupted a while back from a power failure. On a related note, it would be nice if up2date could audit the installed packages to compare checksums against the files inside a package and the installed files for corruption (minus the config files) so that something like this can quickly be found and fixed.
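The audit the reporter asks for largely exists as rpm's verify mode (rpm -V compares installed files against the package database, and by default it skips changed config files in the sense that they are flagged separately). The core of such a check is just comparing recorded digests against what is on disk; a sketch in Python with a hypothetical manifest of path -> expected SHA-256:

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Hash a file in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(manifest):
    """Return the paths whose on-disk contents no longer match the digest
    recorded at install time (config files would simply be left out of
    the manifest, as the report suggests)."""
    return [path for path, digest in manifest.items()
            if sha256_of(path) != digest]

# Demo: record a file's digest, corrupt the file, and detect it.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "libexample.so")
    with open(path, "wb") as f:
        f.write(b"original bytes")
    manifest = {path: sha256_of(path)}
    with open(path, "wb") as f:
        f.write(b"corrupted bytes")   # simulate power-failure damage
    print(audit(manifest) == [path])  # True
```

In practice rpm -Va on the affected system would have flagged the corrupted libraries without any reinstall guesswork.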
https://bugzilla.redhat.com/show_bug.cgi?id=121185
Summary: In this tutorial, you will learn how to import MXF footage from a Canon C300 into Final Cut Pro (X), Avid, iMovie, and Premiere for further editing without problems. With every new camera come many new questions about workflow. The term 'workflow' can be fairly broad, but today I am looking at importing Canon C300 footage into various non-linear editing systems. Due to format compatibility issues, Canon C300 owners often have trouble importing C300 MXF footage into Avid Media Composer, Final Cut Pro (X), iMovie, Premiere and more for native editing. Yes, Canon offers plug-ins, e.g. the XF Utility, to help users import camera video clips, but only with very limited success. To make things much easier, below is a tutorial for editing C300 MXF files in your video editor smoothly. To achieve your goal flawlessly, the best method here is to transcode the Canon MXF files to a format natively editable by FCP/Avid/iMovie/Premiere, with some help from a third-party tool. After that you can open the C300 footage in your video editor for further editing. Keep reading for a brief how-to below. Converting C300 MXF files for editing in Avid, FCP (X), iMovie, Premiere: this quick tutorial uses an easy-to-use yet professional app called Pavtube Canon MXF Converter for Mac (also available for Windows) which can handle Canon C300 MXF files without any problem. The program can not only transcode MXF files recorded by the Canon C300, XF100, XF105 and XF305 for use in Avid Media Composer, Final Cut Pro (X), iMovie and Premiere without rendering, but also provides simple video editing functions. Step 1. Download and run the Mac Canon MXF Converter on your PC or Mac, and then import your MXF files from the C300 into the software. Step 2.
Click the format bar and choose "Final Cut Pro > Apple ProRes 422 (*.mov)" as the output format for FCP 6/7 or FCP X. For Avid Media Composer, choose "Avid Media Composer -> Avid DNxHD 1080p (*.mov)"; for iMovie, choose "iMovie and Final Cut Express > Apple Intermediate Codec (AIC) (*.mov)"; for Adobe Premiere, choose "Adobe Premiere/Sony Vegas > WMV (VC-1) (*.wmv)". You can click the settings button to customize the output settings, such as resolution and bitrate. An edit function is also available in this C300 MXF Converter for Mac. Step 3. Click the "Convert" button to start transcoding the Canon C300 MXF files for Avid/Final Cut Pro/Premiere/iMovie. After the conversion, you can smoothly import and edit MXF files from the Canon C300 on your Mac in FCP, Avid, iMovie or Premiere as you want. Solutions for Canon C300 MXF editing problems solved perfectly: edit Canon C300 MXF files in iMovie; Canon C300 and Final Cut Pro (X); import Canon C300 MXF footage in Avid Studio; get Sony Vegas working well with Canon MXF files. Hot News: Pavtube is running a 2012 Summer Holiday Sale, so never miss the chance to join the promotion! 20% coupon for Pavtube MTS Converter, 20% coupon for Pavtube MTS Converter for Mac
http://www.anddev.org/multimedia-problems-f28/importing-editing-c300-footage-in-avid-fcp-x-t2166990.html
Search - "ssl" - When you're about to do a payment and the payment form is loaded without an SSL connection/certificate... Come on, it's 2017. - Was thinking of buying an S8, but fuck Samsung. Price of an SSL certificate: about $15. Price of Samsung: about $254,000,000,000. - One of our clients deploys their own server app. So this happened after a prod deployment. (4am) *Cellphone rings while sleeping* Client: we need you on the conference call now. URGENT! *Gets on conference call* *Client explains the problem* *Explaining to the client that the problem is on their side (https connection not working, either a network or certificate problem)* *Client doesn't believe it and pushes me for a fix that I have no control over* *4 hours later in a heated conversation* Client: ok, the problem is on our side. We used our SSL certificate from staging with production and thought it would work. Me: - Fuck stupid client. Sorry: Boss: client wants to white label the solution. Me: ok. They just need to create an A record and send us the SSL certificate and I will do it. Client: here is your SSL certificate. Me: spend the whole night making the transfer, setting up the server and checking the whole solution piece by piece for references to our company. Next day: wake up around 2 pm to 100 whatsapp messages, plus calls from the client and boss. Turns out the client's IT team revoked the certificate without telling anyone and the product stopped working for everyone. Me: go back to sleep. - So according to some reddit user, IKEA sends your password as a GET parameter in plain text.... Seems to be a network authentication thingy, but still 🤔 - At work: "I do not get your concerns over ssl, it works fine when we use ie" "What do you mean by xss?
A regular user would not even attempt something like that" "We need to keep the txt file with the passwords there, it's an internal project, the public would not even attempt to reach our site, just put them back" Ah, the many stories that I have from this place. It is an otherwise good place to work at tho, but oh well... Me on a daily basis tho - Had to give a 15 min presentation on web development. I somehow turned it into me giving a 1 hour lecture on ssl and end to end encryption to a bunch of accounting students 😅 - By learning the basics of things instead of just using them. For example, I learn the cryptographic algorithms behind ssl instead of just using it. - If you were code, you wouldn't compile. I wouldn't catch you if you were the last exception in my code. Your brain is so tiny, indexing it would make no significant performance gain. You are so embarrassing, I can only go out with you in SSL. If you were a pointer I'd move to Java. - Why the hell do people make websites with VALID SSL certs redirect BACK TO HTTP? What the fuck is wrong with them?! - Android and Full Stack dev here. Also first post. No boss, I won't call that client to tell him how to configure ssl for his Outlook. - I'm not sure if this entirely qualifies and I might have ranted about it a few years ago, but fuck it. My last internship. The company was awesome and my mentor/technical manager got along very well with me, to the point that he often asked me to help out with Linux based stuff (he preferred Linux but was a C# guy and wasn't as familiar with it as me). We had to build an internal site thingy (don't remember what it was) and we delivered (me and some interns), and then the publishing moment came, so I went to our project manager (a not-as-technical one) and asked if he could install a LetsEncrypt certificate on the site (he knew how and was one of the only ones who had direct access to the server).
He just stared at us and asked why the fuck we needed that since it was an internal thing anyways. I kindly told that since it's free and can secure the connection, I preferred that and since its more secure, why the fuck not? He wasn't convinced so it was off. Next day I came in early and asked my mentor if he could do the SSL since he usually had access to that stuff. He stared at me with "what?" eyes and I explained what the PM said. Then he immediately ssh'd in and got the damn cert with "we're going to go secure by default, of course!" A minute later it was all set.2 - Eh ehe hehe he eh ehehe On top of burnout, codebase issues, spec issues, burnout, the product butt that keeps on crapping, burnout, burnout, loathing for my employer... My local Apple SSL cert expired. I can’t finish this and push it anywhere for testing. I can’t even run my own specs anymore. And I don’t have permissions to make a new one. I can’t do anything at all. Ehe he hehe Deadline is in two days, and I’m just sitting here laughing quietly to myself. I might finally be going crazy I found a loose bit of tangle, started to pull, and the world decided it was time to fall apart. Reality said it’s time to go. And I wasn’t even a good screwdriver dev. Byeee ~12 - - In 2018, while working in Tokyo for a Fukuoka-based startup, one of my co-workers insisted that he wanted an SSL certificate installed on his local dev machine, but he didn't know how to do that. So I created and self-signed one for him. When our CEO came to visit our Tokyo office from Fukuoka, the coworker proudly showed him how his browser would display that green lock icon when visiting localhost:3000. This apparently impressed my CEO, because a few days later the coworker was invited to work at the HQ in Fukuoka while everybody else at the Tokyo office (incl. me) was let go. 
This coworker would also only copy whole open source repositories, s/foo/bar/g all occurrences of the project name to our company name, and tell our CEO that he wrote that code. I don't know how to deal with this bullshit. - Security for 2017: because SSL has nothing to do with security, and is just Google's way of increasing its monopoly.. - After struggling for weeks with SSL settings I finally asked @linuxxx for help. Guess what, he made it work in about 5 minutes! - That moment when your entire application goes down ... because someone forgot to renew the SSL certificate. Of course... - Me: You need TLS since your users submit confidential data on your website. Boss: Our hoster has an SSL-Domain. Me: Yeah. But you need TLS, not your hoster... Boss: *confused* - SSL FYI for anyone using Thawte, VeriSign, Equifax, GeoTrust, and RapidSSL certs: Chrome will distrust them next year. - I bypassed SSL certificate verification because that goddam certificate had some flags which my JVM did not understand and threw errors. Still in prod after 10+ years 🤐 - (Instant Message) Client: Are you there? Me: Yes, speak please. (Why don't you just leave the message? It's not like having a phone call…) Client: The contract is ready. I'll send it to you. (Waited for an awkward 10 minutes…) Client: ???Can you receive it??? (Omg are you doing an SSL handshake or what? Just send the file!) Me: Yes I can, please send it to me, thank you so much. So after promoting Flutter to clients (for whom a cross-platform solution is a perfect match) for almost a year, today I finally got the first ever Flutter app contract. I believe the time for Flutter is really coming. Wish me luck! - I spent about 5 hours rewriting an in-company C# toolbox because I thought its connection to a Web API was broken. 5 FUCKING HOURS. Only to then see I was using port 80 for HTTPS... - Had a configure issue on a site running through CloudFlare hosted at WPEngine.
Support on chat guy says "can I take a look at your setup", so I screenshot him! He says there are new ways to point to WPEngine whilst using SSL, so I say OK and he points me to a support article which seems accurate. He then says "now I want you to change two records", so I say ok (not thinking), which I do (stupidly). Result: site no longer reachable. What do I do now? He says very seriously "you need to wait 24-48 hours for the DNS to propagate" "You're joking, it's a huge site with 20k visitors per day with advertisers on it" "I'm sorry, there is nothing I can do until the DNS YOU changed has propagated" "I changed?" "Yes, you changed the CloudFlare settings" "You told me to!" "Is there anything else I can help you with?" - Does anyone know a provider for webhosting with these needs? - decently priced (~4€/month) - domain included - no analytics/cookie stuff from the provider (that's the point of the change) - easy sftp access - ssl included - Me: I need an SSL certificate. Support: No problem. Just fire up your command line and generate one via OpenSSL. Me: I'm on Windows. Support: Ok, so what you do is code a Linux command line from scratch that will run in Powershell. Next, compile OpenSSL from your favorite of 60,000 versions available. Now, just fire it up and you're all set. Me: Goodbye everything I've ever enjoyed doing in my free time. - Right, I've been here before. Our app requires an internet connection, and one of our clients wants to roll it out on a strictly managed network. We told them which addresses our app communicates with and their network team opened them up for traffic. Should work, right? Nope, doesn't work. So I request them to use Fiddler to do some debugging of the network traffic, and lo and behold, it does work when Fiddler is active. One important detail is that Fiddler uses its own SSL certificate to debug HTTPS communications.
I've had moments where expired certificates were the cause of things not working, and running Fiddler "fixes" this because of its own certificate. So I point this out in numerous mails to their network team; every time I get a response saying "nah, that can't be it". I keep insisting "I have had this before, please check if any installed Root CA certificate is expired". At this point I'm certain they have updates turned off on these machines, and their certificates must not have been updated for a long time. At one point they come back to me: "Hey, when Fiddler is off, WireShark shows the app communicating with ICMP calls, but when it's on it shows HTTP calls instead". ...YOU'RE THE SUPPOSED NETWORK EXPERTS?! You think data can be sent via ICMP? Do you even know what ICMP is? Of course you'll see ICMP calls when the network is rejecting the packages instead of HTTP calls when everything's fine. (ICMP is used to communicate errors) I'm trying to keep my patience with these guys until they find exactly what's wrong, because even I am somewhat grasping at straws right now. But things like this make me doubt their expertise... - My company compromises SSL certificates in the name of "security". I can't even use Gmail because Google has identified my intranet as a malicious network executing a man in the middle attack. So they break security in the name of security. - A year ago it took me hours to get SSL working on the Digital Ocean droplet I was using to host my website. I had no idea what I was doing, and even though I 'knew' how to use the terminal and do most things, I wasn't confident or competent enough to rely only on the CLI. About a year later (today) I get an email that my SSL is about to expire and needs renewing. Done and taken care of within 20 minutes (with a 2 hour gap due to waiting for the cert authority to send me the zip of files). All that time using i3 and moving to Linux is paying off.
Maybe by the time I can afford to build my next desktop I can make my main OS Linux. - Yesterday evening I began working on an SSL proxying system for dynamic domain names using Let's Encrypt. I finished just a few hours ago and it's working flawlessly! - I keep trying to follow Google deepdream tutorials in Python, and it requires an older version, python27; whenever I try to pip install a module, I get an error because of python27 not supporting ssl. So annoying. - There's no better feeling than doing a full server rebuild, modifying several projects heavily to be portable and keep working under new infrastructure while losing access to dependent systems. Migrating everything across, firing up Apache.... and BAM, the fucker just works and ssl labs gives it an A (it was a giant F with multiple vulnerabilities yesterday on the old server). - Half a day gone trying to find or remember the password of some SSL/key/encrypt/crt/shit/whatever. Blaming myself for hours, how could I not save the password somewhere? (I pressed enter: no password). It works. I love IT security. - Oh my gosh I hate SSL so much. A cert expired this morning, and with it, 29 digital signs are now offline. Shoot me now. - HTTPS requests in most languages: import a couple of libraries, you may need to install a few as well. It's possible that you will need to initialize and set up the socket. Be sure to specify SSL settings. Create the connection, provide it a URL, and attempt the connection. Read the response, usually in chunks. You may need to manually create a buffer of fixed size, depending on whether the language has buffer helper classes or not. You will probably need to convert the input stream response to a string to do anything with it. Close the connection and clean up any buffers used. HTTPS requests in Python: import urllib urllib.YEET() - Diffie–Hellman key exchange is not allowed in this area.
For your convenience, an SSL stripper was placed onto every nearby network. - Could never figure out how to configure ssl because of Google Cloud's insanely complicated documentation. Today I found a Digital Ocean guide that explains it's a simple installation of certbot: run it once and set it to auto renew.... fuck you google. - Did you know that you can just type 'thisisunsafe'? This will tell Chrome to skip certificate validation 🤯 - When I was a wee little lad of 13, still with that hopeful gleam in my eye, I signed up to work as the webmaster for a local org. At the time, I had played around with HTML and CSS and a little JavaScript, and I thought all I'd be doing was updating some pages with announcements or whatever. I got paid in SSL, which is a thing kids in Maryland have to do to graduate; the whole idea is that you need to do 75 hours of volunteer work in your community. The people there promised me 8 hours a month for what I thought would be easy work, and so I eagerly signed up. What I thought would be updating a few html files and emailing them to the org was actually having to manage a full-on server running a PHP4 LAMP stack. Needless to say, I was overwhelmed. I tried to make the updates they wanted, but I had no idea how to write PHP, let alone manage a database and server. I think I got out of it by just never responding to their emails once I realized how fucked I was, but that was definitely the worst learning experience of my dev career. - Got my program hooked up to an external sql database that's ready to be fed into by PHP, hosted on an SSL website, fixed up all the other bugs with TONS of other stuff. 4:23 A.M. and feelin' good. - Did anyone else notice how setting up a letsencrypt.org certificate
Certbot + automatic renewal was set up in four commands on my RasPi, I remember it being more difficult to set everything up 🤔
- When I hear clients say they spent a fortune on an SSL cert. I wonder for that poor soul, you know what I mean.
- What's up with almost every other site having invalid ssl certs, even though they are signed with a future date and by LetsEncrypt, did chrome again distrust a batch? FUCK YOU YOU SHITTY COCK SUCKING BITCH MOTHERFUCKER. GO DIE IN A HOLE THEN GET RAPED IN HELL. I REALLY HATE THIS SHIT. FUCK OFF GOOGLE.
- Am I bad? I charged a client for an SSL Certificate and installation, but just used LetsEncrypt instead, cost me fuck all.
- Decrypt api responses in an iOS app which my "senior" dev thinks it is more secure to encrypt responses instead of setting up a proper SSL cert (they use plain http to save money 🙄) They disabled the encryption since it does not function as we wanted and set up SSL instead 🙄
- SSL should really stand for "Satan's Security Layer", because anytime I have to deal with it, it's always a major pain in the rear. (And an expensive pain at that!) Why in 2016 is the SSL process so bad.
- So a week ago my boss asked me to design + build/write code for our new site from scratch. Meanwhile the old website they have had for 5 years is still without SSL and looks pre-2000. It's supposed to be finished and be mobile responsive by tomorrow. I'm the digital marketer.
- Boss: We need you to configure our Apache Tomcat server for SSL. Me: Okay, what version of Tomcat is installed? Boss: 5.5.20 Me:
- Cheapskate's website deployment stack for new projects: namecheap ($10 domain) + heroku (free hosting) + mailgun (free email) + Cloudflare (free SSL) = $10/year
- Spending hours trying to figure out why the stack just won't work with SSL. Nearly lost my mind as we started feeling dumber than ever.
I really started to doubt my skills after it did not even work with the most minimal nginx site config I could imagine. The next day I discovered that we missed the 443 port mapping in the docker-compose file... it only had port 80 mapped. Yup, stepping back from a problem and getting some sleep is really worth it sometimes.
- I don't understand why they're still calling it SSL. It was buried long ago by TLS. Fuck this marketing bullshit, just fucking call it TLS already.
- Vodafone India is so shit omfg. Run npm install, ERROR json parse error due to ssl exception. Run pip install, again ssl exception. Run gradle build, again ssl exception!!! Now everytime i gotta make a new project or install a dependency in anything, i have to pray to the blood god that cache contains a valid/uncorrupted package dependency or else ill have to nuke cache and borrow internet from someone else. Once i port it to some other operator, i am gonna incinerate this mf sim.
- Since I am already using Mullvad's vpn service, I also stumbled on https proxies. Is it still safe to enter my devRant login data when I would use a https proxy in FF's settings? The Proxy is a free elite https proxy. And devRant also uses SSL. The traceroute would seem like this I guess: VPN (*le me sendin my password) -> SSL Proxy -> SSL DevRant. Following that path, I would assume the password gets encrypted by the VPN service, again by the HTTPS proxy, and again by devRant itself.
- Spent the entire day trying to get ssl to work 😩 it works now, but honestly, it's draining to spend so much time on config
- Server migration status: One of our Windows servers took less than 20 mins. SSL and bla bla everything done. Linux server was a lil bitch but we got it going for the most part ..... sigh... Still using Linux as my primary desktop at home but geezus man.
We really need a dedicated master wizard Linux sys admin for this mofocka
- I'm fiddling around with progressive web apps. I made something and hosted it on a subdomain. Today I made a typo and found my app on another domain. All my assets and files are copied there. He even uses my SSL certificate. It's not that spectacular. The app is nothing "revolutionary". It's just the first time it happened to me. Have you ever found your code on other websites? How did you react?
- What disturbs me is when companies use invalid ssl certs for internal services where you have to login with your company credentials
- So I walk past this everyday, decided to finally take a picture: that's the link to their website
- Why in fuck's sake would you create a new service and not offer TLS/SSL to your free tier clients?
- On every website I visit, first thing is to snoop who gave the SSL certificate to the domain. Idk why I do this
- What makes free ssl "Unsuitable for e-commerce websites"? Please read to end to see my view point. From Namecheap: ....
- FUCK MY MOTHERFUCKING LIFE! FOR GOOD THIS TIME! I worked about 6 hours straight today to get SSL up and running, so you can include your own certs in my framework. This worked without any problem in Netty. Even forcing SSL was without any problem. And then I tried to fucking show an image and this motherfucker won't load. I tried to copy code examples from fucking any source I could. As I gave up I tried to comment out a Netty decoder.... AND IT FUCKING WORKED! FUCK YOU NETTY DOCUMENTATION!!! FUCK NETTY, LONG LIVE NETTY!
- Omg, freaking web sockets.. But I figured out how to run a socket server in SSL with the certificates in a root folder. Seems like an early night for me!
- Docker with nginx-proxy and nginx-proxy-le (Lets Encrypt) is fucking awesome!
I only have to specify environment variables with email and host name when starting new containers with web servers, and the proxy containers will automatically make a proxy to the new container and generate lets encrypt ssl certificates. I don't have to lift a fucking finger, it is so ducking genius
- Made a web project that can generate gifs from webcam. Can't complete it because I don't understand how to use SSL with Azure ☹️
- When your company uses way too many certificates, .p12 and .msc files, so everyone's local breaks after each package release.... It's like building a house of cards on a windy day
- There it is: a nice working nginx webserver with SSL, PHP, MySQL and HTTP2 on a Raspberry Pi3, but I have no idea what to do with it. Do you have one?
- view-source: "Oh my GOD! I've heard of obfuscation, but this is just hell in text format!"
- In the last episode of "How SystemD screwed me over", we talked about Systemd's PrivateTMP and how it stopped me from generating SSL certificates. In today's episode - SystemD vs CGroups! Mister Pottering and his team apparently felt that CGroups are underused (as they can be quite difficult to set up), and so decided to integrate them into SystemD by default, as well as to provide a friendlier interface to control their values. One can read about these interactions in the manual page "systemd.resource-control". All is cool so far. So what happened to me today? Imagine you did a major system release upgrade of a production server, previously tested on a standalone server. This upgrade doesn't only upgrade the distribution however, it also includes the switch from SysVInit to SystemD. Still, everything went smooth before, nothing to worry now then, right? Wrong. The test server was never properly stress-tested. This would prove to be an issue. When the upgrade finishes, it is 4 AM. I am happy to go to bed at last.
At 6 AM, however, I am woken up again as the server's webservices are unavailable, and the machine is under 100% CPU load. Weird. I check htop and see that Apache now eats up all 32 virtual cores. So I restart it, putting it down to some weird bug or something, as the load returns to normal. 2 hours later, however, the same situation occurs. This time, I scour all the logs I can, and find something weird - many mentions that Apache couldn't create a worker thread? That's weird. Several hours of research and tinkering later, I found out the following: By default, all processes of a system that runs SystemD are part of several CGroups. One of these CGroups is the PID CGroup, meant to stop a runaway process from exhausting all PIDs/TIDs of a system. This limit is, by default, set to a certain fraction of the total available PIDs. If a process exhausts this limit, it can no longer perform operations like fork(). So now I know the how and why, but how should I solve this? The sanest option would be to get a rough estimate of just how many threads the Apache webserver might need. This option, though, is harder than it appears. I cannot just take the MaxRequestWorkers number... The instance has roughly double that amount of threads already. The cause being, as I found out, the HTTP/2 module, which spawns additional threads that do not count towards this limit. So I have no idea what limit to set. Or I could... disable the limit for just the webserver via the TasksAccounting switch. I thought this would work. And it did seem to... until I ran out of TIDs again - although systemctl status apache2.service no longer reported the number of tasks or a task limit of the process, the PID CGroup stayed set to the previous limit. Later I found out that I can only really disable the Task Accounting for all the units of a given slice and its parents. This, though, systemctl somewhat didn't make apparent (and I skimmed the manual, that part was my fault). So...
The only remaining option I had was to... just set the limit to infinite. And that worked, at last. It took me several hours to debug this issue. And I once again feel like uninstalling systemd, in favor of sysvinit. What did I learn? RTFM, carefully, everything is important, it is not enough to read *half* the paragraph of a given configuration option... Oh, and apache + http/2 = a huge TID sink.
- FINALLY got Chrome to accept my self signed ssl certificate on OSX!!!! F*ck this has taken waaay too long.... For anyone seeking advice, look here:
- Does anybody know if letsencrypt SSL works with Cloudflare or not? Because I'm unable to use letsencrypt SSL while using the free version of Cloudflare :?
- I made a wordpress website for one of my friends a long time back as he wants to teach online and sell his videos. (he is studying MBBS) Yesterday suddenly he calls me and says our site has been compromised and it's no longer secure. Me: After seeing the screenshot, no, actually the site doesn't have ssl and in recent chrome updates http sites are being flagged. He: Okay, I saw a video on youtube on how to buy ssl. Me: It's not just installing the certs, all the links and images have to be on https so it will take some time for me. He: Today, the website is no longer opening, please help, after putting ssl as per the video... Me: What the hell? Who asked you to do that? Are you nuts? He: ................. Sorry, 😐
- Why is Docker + SSL certificates so confusing? Or do I just have bad resources? I just want to know how to compose a Docker, Nginx setup with encryption.
- Apple and its bundle identifiers, APN SSL certificates, provisioning profiles and review process just took 5 hours of my life.
- When your website's SSL certificate expired two months ago, the likelihood of me trying your software is less than zero.
- WTF. So many problems with this question. Am I alone in my frustration?
What problems do YOU see with that question?
- So, I made this API which logs into the system and used it in an android app. There was one roadblock: every time the user enters a password, it has to match the password hash, so I, excitedly, used password_verify($password,$passwordHash), unknowingly that it is fucking unsafe, and the code is still there. And here's where it gets interesting: it is not over SSL/TLS. Fuck me, any bright solutions?
- I fixed one problem we had at work with SSL over a year ago. Since then, whenever anyone has any problem vaguely to do with SSL, they come to me. The "expert". So I guess I'd like to become what I'm already perceived as... SSLman
- I've created instructions for myself for the next time I encounter cpanel. rallen@rallen ~ $ cheat cpanel #SSH'ing into the fucking cpanel #Figure out combination of 5 usernames and passwords given by client to log in. #Pray that WHM isn't involved. #Ignore several ssl warnings and cancel several .htaccess password prompts. #Call in to enable that shit. #Wait no less than 15 minutes on hold. #SSH enabled. #Create public private key pair. #Notice the ppk conversion for windows 'devs'. Sigh. #Copy key pair to ~/.ssh/ #chmod that shit to 600. #Note for the user name it's not anything the clients given you or what you've named the key. Look in the cpanel for the /home/<user> directory. ssh -i ~/.ssh/key <user>@<dedicatedip>
- Dear facebook/instagram: When in sandbox mode, please don't require https redirects, my localhost server has no concept of what an SSL cert is, it's sandbox for a reason.
- GoDaddy. Is. The. Worst. I'm working on an SSL cert domain verification for a client. The chat support tech at GoDaddy has no freaking clue what she's doing. She keeps telling me to follow the same help article I already knew about the first second I heard I needed to do this job. It didn't work. But she keeps going back to it, sure that I'm just a complete and utter moron who doesn't read.
Never mind that I have screenshots to prove everything she's telling me is 100% wrong according to every error message this process is generating. Now she's checking with the "SSL team". Which is code for "I have absolutely no idea what I'm doing and I'm frantically searching the FAQ database to figure out what this SSL thing even is." That's what the last hour of my life has been. And 20 minutes of that was waiting in the chat queue.
- Successfully wasted more than 12 hours debugging an SMTP issue. ColdFusion email script was throwing an SSL error. What was the real issue? The Web Server IP Address was blacklisted in the Email Server.
- Hired to make an end-point that leverages their SSL cert-only REST service, I'm still waiting for the certificate. This started months ago. Thank god for that flat rate $$$
- Ok can someone explain this to me, i cant get it to function properly on chrome. Others are fine...
- Searched for an error message hoping to find StackOverflow. Found GitHub showing me the code that produced the error message instead. I haven't had enough coffee to understand somebody else's code today. I'll keep debugging myself before I read your code, thanks.
- Which of the following is related to Alert Protocol in SSL? A. SELECT, ALARM B. ALERT, ALARM C. WARNING, FATAL D. FATAL, ALARM E. SELECT, FATAL F. I don't always use SSL
- Does anyone know how to use certbot on a Debian stretch azure web service app to generate an SSL cert? I've got the cert generated and Apache to serve it but it's giving me errors. I need to bind it in azure somehow but I can't figure out how to export the cert.
- SSL issues when behind a proxy.. i think. Troubleshooting and solving issues are difficult when you just follow a guide about something you need :i
- Yesterday I spent 7 hours on a silly SSL certificate error. The exact same webpage gave me "certificate revoked" error when viewed in one browser/device but it displayed fine on others!
But everything is back to normal today! As if nothing happened! I'm not a web dev, so I have no idea why this happened. I'm just pissed that I wasted 7 hours on a thing that wasn't my fault...
- So.. I spent some non-trivial time trying to call a soap service via SSL in a java application, struggling with SSLHandshakeException. I tried quite a few things with the certificates, none of them worked.. until we found out that I had added the right certificates to the truststore of the WRONG java :-/ Conclusion: when working with java cacert files, run echo %java_home% first (you can thank me later)....
- Spent a couple hours trying to obtain an SSL certificate to encrypt my site last night... No luck so far. It kept saying it doesn't have access, when I verified that nginx serves to port 443...
- "Upgraded" to nginx over the weekend. Setup SSL to be secure and felt good about myself. Woke up to find PhantomJS can no longer access the site to generate PDFs. Had to remove the ciphers block until I figure out what it's compatible with. FML.
- I have been trying to use the digits by Twitter Api for web and have been contacting them on everything possible and finding any help for a month and have got almost nowhere. I added ssl and did all the stuff they told me to do.... Wtf Twitter, still trying to get this to work
- App of a little social network I'm a member of didn't connect to the server anymore, since the social network changed their SSL-certification and my smartphone is too dumb to accept the new one. So, I pulled the source code of the app from GitHub and added some code dealing with SSL-connection-exception-handling. A warning appears that there were some errors with the SSL-cert, with the question how to proceed and three options: Quit, Ignore for now, Ignore and don't ask me again.
The code to ignore ssl-errors is just for debug-/develop-purposes, but hey, the app with that little "hack" is running only on my phone x) Now, the app is working again on my smartphone \o/
- What is the use of https on localhost? Do I really need to enforce it on a local server even though I'll add an ssl cert after it gets deployed anyway? For example an express server on localhost. Does it need ssl on a local server?
- let rant: (Bool, Bool, String) -> Void = { (isRant, isDev, contents) in print(contents) } rant(false, true, " So, a year ago more or less, I set out to teach myself some server-side programming on the side. Many (MANY) tutorials, Digital Ocean droplets created and destroyed, coffee mugs and FMLs later, I can say 'Hello World' from Node.js - built from source and not running as a sudoer - using express and forever on Ubuntu, behind another Ubuntu server running nginx - also built from source so to add headers-more and naxsi - using all sorts of goodies to enhance security and talking to each other via SSH. Oh, and talking to the world over HTTPS with a grade A on SSL Labs (I know this doesn't mean much to you. Yeah you, rolling your eyes over there. So why don't you just bugger off before even commenting? Haha) Feels good man. ")
- So, some of you know that I'm having struggles manipulating Youtube iframes with jquery or plain javascript. Please note that the same thing can be done via the YouTube API but I personally do not want to rely on the API. So after 2 days of struggling I've officially given up, I feel so fucking angry and sad at the moment I can't even describe it. For some solutions to work I need SSL certificates. The closest I could get was $('iframe#youtubeiFrame')['content']; This leads to the youtubeIframe root #document but I am unable to access that DOM. Next task, to configure another IDE except Eclipse for Demandware. $options = array('Aptana'=>'IDE','IntelliJ'=>'IDE','VSCode'=>'textEditor');
- Why is GitHub's certificate showing up on semver.org?
I can no longer access the site normally because of the browser warning. Who's responsible for this atrocity? I checked with a VPN and without, same result. Can someone confirm?
- Spent the last 4 hours trying to figure out how to A) buy an ssl cert that supports wildcards and B) install that bitch on an ubuntu box with nginx... now github is complaining about the cert chain... :l
- As I wrote a DR doc I suddenly thought that making a backup of our SSL certs is *probably* a good idea. Hello pfx 🔒
- Got paid to follow the wrong instructions on installing an SSL certificate. It's working now but only after a few hours of trying different things
- New AD self-service portal too hard to integrate ssl and can't have users send their passwords in plaintext. Setup apache proxy with ssl in same vpc to encrypt traffic to and from vpc. All good as long as nobody is in my vpc sniffing traffic...
- Me: FE at work, but doing fullstack on my passion projects and somewhat confident on small VPSs - heck, I have a beard, I can do server stuff :) - migrating a WP site that just won't work, copied everything, didn't work, used a migration tool, didn't work, always getting "Connection refused"... must be something with the SSL certificates.. 3 fckn days passed by and nothing, when I stumbled upon a forum post with a similar issue where the guy stated: I tried all the obvious like copying files, db, certificates, enabled ssl on apache... then it hit me, this is a new installation, I didn't enable SSL in apache. sudo a2enmod ssl, restarted apache and BOOM everything is working. Part of me was like how stupid do you have to be - but the other part is like I guess I learn something every day, this is how you migrate a WP site with the domain #IloveIT
- So, watching this video on Youtube about security. He mentions how SSL was designed back when Yahoo was a website with 30 links... And basically for passive websites.... Why do we still use this again?
- Why the fuck does my subdomain work with https but my main domain returns an ssl error. Wouldn't neither work if the ssl was the issue? It's midnight, I want to fucking sleep, not deal with this shit. I'm probably doing something stupid but don't have the fucking experience to recognize what I'm doing wrong
- Is it a good approach to have a master SSL key for all your servers when doing the authentication? I am a Developer, but when you work in a company with two developers and you are the senior one you have to learn a lot of stuff. I am learning more in-depth things about how to secure the servers and network. Now, I am expanding the servers. Splitting the code and database into three different servers (code, Master DB, Slave DB) and configuring Master-Slave databases. My questions are: 1. Is it a good approach to have a master SSL key for all your servers? 2. Is it a good approach to use the same SSL key for the Master database server and the Slave database server? Any other suggestions are welcome. Thank you in advance!
- Ok. I still can't get SSL working on my site so I'm gonna assume it's my fault. Time to go back to a default template, test that, get it to work, and if that works go from there. I've done EVERYTHING my host says to on the dashboard side, short of crying to them. And honestly. Fuck that
- Using boot2docker behind a corporate proxy that fucks with your SSL certs will drive anyone insane!! 👹
- Does anyone know a hosting service that allows installing a 3rd party SSL certificate (Comodo) FOR FREE, WITHOUT buying an overpriced 70$/year additional dedicated IP?
- RubyGems had to have SSL configuration errors right when I'm working on a demo rails project for a firm 🙃🙃🙃 Got it working, but still
- I was scrolling through my past rants and found this gem... I posted it when images were not loading due to expired ssl. Looks like everyone who saw this got tricked!!
Dridex Trojan

Dridex in a Nutshell

Dridex is a famous banking Trojan which appeared around 2011 and is still very active today. This is because of its evolution and its complex architecture, which is based on proxy layers to hide the main command and control servers (C&C). The APT known as TA505 is associated with Dridex, as well as with other malware such as TrickBot and the Locky ransomware. Dridex is known for its unique anti-analysis techniques, which combine API hashing with VEH (Vectored Exception Handling) manipulation. As a consequence, Dridex is able to effectively hide its intentions and requires skillful reverse engineers to accurately dissect it. Once installed, Dridex can download additional files to provide more functionality to the Trojan.

Technical Summary

- Dridex uses API hashing to conceal its imports. It uses CRC32 hashing, plus another layer of XORing with a hard-coded key. It parses the loaded DLLs in memory and their export tables. As a consequence, Dridex can resolve any imported Windows API and then jump to its address.
- Another layer of complication is added with Vectored Exception Handling manipulation. Dridex inserts a lot of int 3 and ret instructions everywhere to make reverse engineering harder. Furthermore, the use of int 3 triggers a custom exception handler planted by the malware. This malicious handler alters the execution flow to effectively jump between APIs.
- Dridex comes with encrypted strings in its .rdata section. These strings are used as API parameters/settings for the malicious impact. Therefore, they must be decrypted to know its intentions. Dridex uses RC4 for the decryption. The first 40 bytes of every data chunk are the key (stored in reverse order), followed by the encrypted data.
- Dridex stores its network configuration in plain text in its .data section. Obviously, it establishes a connection with its C&C for further commands, and also to download additional malware modules.
These modules extend its functionality. Dridex comes with 4 embedded C&C IP addresses.

Technical Analysis

Defeating Anti-Analysis

API Hashing

Dridex is famous for its anti-analysis techniques, which include API hashing. API hashing - in a nutshell - is when a malware hashes the names (strings) of its imports, making it harder to know which APIs it will resolve at run-time. API hashing is common among shellcodes. That's because a tightly crafted shellcode can't make use of the OS loader; it's not a PE file and it must depend on itself to find where DLLs are residing in memory. Once it finds the targeted module, it parses its export table to know where the module provides its exported APIs (the addresses in memory). One way to spot API hashing techniques is to look for a function which takes constant (random-looking) inputs, while its return value is used as a function pointer. We can see that sub_6015C0 matches the description we have just stated. It's called twice to resolve two Windows APIs. Also, we can notice that the 1st parameter is the same during the two calls here. This may indicate that the 1st parameter is likely the hashed DLL name and the 2nd parameter is likely the hashed API name. We can label sub_6015C0 as a potential API resolving routine. Now let's dive into it for more detailed analysis. We can see that it depends on two more functions: sub_607564 and sub_6067C8. In sub_607564, we find that Dridex is parsing the process PEB structure in order to get the loaded modules in the process address space. By using the appropriate structs in IDA Pro, the code looks more readable now. As we can see, Dridex is using the Flink pointer to walk the loaded modules (DLLs) as _LDR_MODULE structs. The BaseDllName of every loaded module is obtained, and properly converted to the right form for further comparison. The BaseDllName is hashed by sub_61D620 and XORed against the 38BA5C7B hard-coded key.
We can determine the type of hashing algorithm using the PEiD tool. Using the Krypto ANALyzer plugin, it was able to identify the hashing algorithm as CRC32, based on the algorithm's constants. After hashing and XORing the BaseDllName of the loaded module, the result is compared against the target hash. Once there is a match, at 0x60769A, the BaseAddress of the wanted module (DLL) is returned. This address is used later for locating the wanted API within the module's export table. This address also points to the IMAGE_DOS_HEADER, aka the MZ header, of the module. All of that is done purely in memory without the need of exposing the malware's imports. We proceed to reverse sub_6067C8. The routine accepts the previously returned DLL BaseAddress as a parameter, along with the second hash. We can make a strong prediction that this function uses those parameters to return the API address to be used by Dridex. As we can see, the malware is parsing the module header in order to locate its export table. The export table of a DLL contains the addresses at which its exported APIs reside in memory. The malware first reads the e_lfanew field at offset 0x3C from the beginning of the module. This field holds the offset at which the NT Headers begin. From there, at offset 0x78 into the NT Headers - i.e. at offset e_lfanew + 0x78 from the beginning of the DLL - the malware can access the Data Directory. The first two fields of this array are the address of the Export Directory and its size. We can use the PEBear tool to visualize all these offsets within the PE header. We use the _IMAGE_EXPORT_DIRECTORY struct with the variable EXPORT_TABLE_Start_Address to make the code more readable. Hence, we can see the malware parsing AddressOfNames, AddressOfNameOrdinals, and AddressOfFunctions to map every exported API's name to its memory address. If the hashed - and XORed - API name matches the 2nd argument of the function, its memory address is returned.
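The hash-and-XOR step described above is simple enough to reproduce outside IDA. A minimal sketch, assuming standard CRC32 (as the Krypto ANALyzer reported) and the 0x38BA5C7B key from the sample; the candidate name list is illustrative, not taken from the binary:

```python
import zlib

XOR_KEY = 0x38BA5C7B  # hard-coded key observed in the sample

def dridex_hash(name: str) -> int:
    """CRC32 of the name, then XORed with the hard-coded key."""
    return (zlib.crc32(name.encode("ascii")) & 0xFFFFFFFF) ^ XOR_KEY

# Build a reverse-lookup table from candidate API names,
# mimicking what the hashdb plugin automates for us.
candidates = ["CreateThread", "InternetReadFile", "VirtualAlloc"]
table = {dridex_hash(n): n for n in candidates}

def resolve(hash_value: int) -> str:
    """Map a hash found in the disassembly back to a name, if known."""
    return table.get(hash_value, "<unknown>")

print(resolve(dridex_hash("InternetReadFile")))  # -> InternetReadFile
```

Scaling the candidate list up to every export of the common Windows DLLs gives the same hash table the hashdb plugin provides out of the box.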
By using this technique, Dridex is able to effectively hide its needed APIs from security solutions. For more details about how to find an API address in memory, check this out. Combining everything from the previous analysis, we now know that Dridex is doing API hashing using CRC32 plus another layer of XORing. We could write a script to build a hash table of the well-known Windows DLLs and their exports, and then search that table using the hashes Dridex uses. As a consequence, we can know which API and DLL Dridex is trying to resolve without any dynamic code analysis. Fortunately, we don't have to create this script. We can use the amazing hashdb IDA plugin from OALabs. It will automate everything for us. We just need to identify the hashing algorithm and the XOR key to make hashdb ready. This announces our victory over the API hashing anti-analysis, and we can easily use the newly added enums to make the malware code more readable. For instance, at 0x5F9E47 we find that CreateThread is being resolved at that particular address.

Vectored Exception Handling

To fully understand the intention of this anti-analysis technique, we need to know how Dridex is utilizing API hashing: the returned API address from sub_6015C0 (labeled as mw_API_Resolver) is not used as a call instruction operand. Rather, at sub_607980, Dridex registers (adds) a new customized exception handler using the RtlAddVectoredExceptionHandler API; the handler receives an _EXCEPTION_POINTERS argument. This customized exception handler adjusts the thread stack and the EIP register in order to divert the process flow to the previously resolved API address (via the ret instruction). After calling the mw_API_Resolver function, EAX contains the address of the resolved API. Dridex then traps the debugger or - more accurately - generates an EXCEPTION_BREAKPOINT using an int 3 instruction.
This exception is passed to the process' vector of exception handlers in order to be properly handled. The previously planted customized exception handler will be the first to process the exception. This malicious handler will execute and alter the process' context if and only if the exception is caused by an int 3 instruction - which is exactly what Dridex wants. The process' context is altered in these steps:

1. Incrementing EIP by 1 in order to make it point to the ret instruction.
2. Mimicking a push EIP+1 instruction, in order to save the address of the instruction after the ret on the stack (manually building a stack frame).
3. Also mimicking a push EAX instruction, so that the resolved API address ends up on top of the stack.

Having achieved these steps, execution resumes exactly at the ret instruction pointed to by the adjusted EIP, which pops the address on top of the stack and jumps to it. This makes the wanted jump to the resolved API happen with no call instruction. Furthermore, after the resolved API's code executes, the flow resumes at the previously saved address from the manually built stack frame (step 2). This makes the flow resume at the instruction after the ret, successfully returning to the normal flow from before the int 3 instruction. Not to forget, this technique makes dynamic code analysis harder, because you will deal with hundreds of debugger traps everywhere in the code. Moreover, inserting ret instructions everywhere in the code tricks the disassemblers when they try to identify functions: some disassemblers use ret instructions to identify the end of a function. This adds another layer of complication as an anti-disassembly technique. To overcome all this, we need a script which parses the code section of the sample in order to fix those complications. We can create a small IDA Python script to search for the opcode pair int 3 / ret and patch it to call EAX.
This means we are looking for the byte sequence CC C3 and patching it to FF D0. The script is below:

    import idautils
    import idaapi
    import ida_idaapi
    import ida_search
    import idc

    def get_text_section():
        for seg in idautils.Segments():
            if idc.get_segm_name(seg) == ".text":
                return [idc.get_segm_start(seg), idc.get_segm_end(seg)]

    def search_N_patch(pattern, patch):
        search_range = get_text_section()
        while True:
            # The value 16 is the default search radix.
            addr = ida_search.find_binary(search_range[0], search_range[1],
                                          pattern, 16, ida_search.SEARCH_DOWN)
            if addr == ida_idaapi.BADADDR or addr >= search_range[1]:
                break
            idc.patch_word(addr, patch)

    pattern = 'cc c3'   # int 3 ; ret
    patch = 0xd0ff      # call eax (little endian)
    search_N_patch(pattern, patch)

PS: this script only alters the IDA database, not the actual binary. To patch the sample so it can be opened in a debugger, use the pefile Python library instead.

Now, using hashdb together with our IDA Python script, we have a much better chance of understanding Dridex's functionality. First, we edit the data types of the mw_API_Resolver arguments to be the hashdb_strings_crc32 enum instead of integers, so that IDA Pro automatically resolves the hashes. Secondly, we use IDA Pro's Xrefs to see which API is being resolved at any particular location.

Strings Decryption

Dridex contains a lot of malicious functionality. From simple host profiling up to DLL hijacking, there is a lot to cover when reversing Dridex. I will not dive deeply into all of its functionality, but rather focus on the interesting parts only. To get the most out of its intentions, you need to decrypt all the embedded strings. They are stored in the .rdata section in chunks, and they are used as parameters to the resolved APIs to perform certain malicious actions. We can use the excellent capa tool from Mandiant to find out if it can detect any encryption algorithms. Fortunately, capa was able to identify that RC4 is being used at sub_61E5D0.
Also, from capa's output, we can detect the operation "key modulo its length" at address 0x61E657. From here, we can trace the Xrefs to sub_61E5D0 to find out where the key is located and what its length is. Taking the last Xref, at sub_607B30, we can trace back the function arguments to find that the key is loaded from a certain offset in the .rdata section. The key length is 40 bytes, and the data to be decrypted starts right after the key. We can therefore deduce that for every chunk of data, the decryption key is the first 40 bytes, followed by the encrypted data. From other threat intel resources, we also know that Dridex stores the decryption key bytes in reverse order.

Let's use CyberChef to manually decrypt the data at address 0x629BC0. The key starts at 0x629BC0, with a length of 40 bytes, in reverse order; the encrypted data starts at 0x629BE8. We can now see the fully decrypted strings clearly. The first two words are "Program Manager". This is why I chose not to reverse all of Dridex's functionality: what matters most is finding out how the decryption happens, after which you can decipher any code snippet. From this point, you can try to decrypt every chunk of data yourself and find out how each one is used for a particular malicious action.

Extracting C&C Configuration

Dridex of course tries to connect to its threat actor. Finding these remote ends is a must, in order to block them and cut the lines between the malware operators and the infected machines. One way to find the C&C servers is to look for where networking functions are called. From the Xrefs to mw_API_Resolver, we can find two important functions responsible for networking functionality: sub_623370 and sub_623820. sub_623820 seems to be used for further download activity, because it resolves the InternetReadFile API.
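Returning to the string decryption for a moment: the chunk layout deduced above (a 40-byte key stored in reverse order, followed by the ciphertext) can be replayed in plain Python. The RC4 routine below is a textbook implementation, not code lifted from the sample:

```python
def rc4(key, data):
    """Plain RC4: key scheduling (KSA) then keystream generation (PRGA).
    RC4 is symmetric, so the same function encrypts and decrypts."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # KSA
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:                          # PRGA
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

def decrypt_chunk(chunk):
    """Dridex chunk layout: 40-byte key (stored reversed), then ciphertext."""
    key = chunk[:40][::-1]
    return rc4(key, chunk[40:])
```

Running decrypt_chunk over each chunk of the .rdata blob should reproduce the strings CyberChef recovered manually, such as "Program Manager".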
Inside sub_623370, we can see Dridex resolving the InternetConnectW API, which accepts the lpszServerName parameter; this parameter identifies the remote end of the connection. Tracing the only Xref to sub_623370, we can spot Dridex parsing a data offset to extract the embedded IPs. This happens at address 0x5F7232, just before the call to sub_623370. The network configuration is not encrypted and starts at offset 0x62B024. The ports can be converted via a simple hex-to-decimal conversion, and for the IPs we can use this small Python script to convert them into a human-readable format:

    import socket
    import struct

    def int2ip(addr):
        return socket.inet_ntoa(struct.pack("!I", addr))

    print(int2ip(0xC02ED2DC))  # first IP

The extracted C&C IPs are below:

Conclusion

The techniques of Dridex are somewhat unique when combined together. We can easily defeat its API hashing once we know the hashing algorithm and the XOR key. The use of VEH makes the reverse engineering process very painful and calls for urgent patching. Dridex has many more capabilities and techniques; I decided to focus on defeating the anti-analysis and on strings decryption. From there, you will be able to identify any of its intentions.
https://cyber-anubis.github.io/malware%20analysis/dridex/
Development/Tutorials/Graphics/Migrate Qt Quick Controls 1

Import

Change
    import QtQuick.Controls 1.4
to
    import QtQuick.Controls 2.8

Icon

    Button {
        iconName: "file-new"
        iconSource: "my-file-new.svg"
    }
to
    Button {
        icon.name: "file-new"
        icon.source: "my-file-new.svg"
    }

ToolTip

    Button {
        tooltip: "Create new file"
    }
to
    Button {
        ToolTip.visible: hovered
        ToolTip.text: "Create new file"
    }

ExclusiveGroup

    ExclusiveGroup { id: filterGroup }
    Button { exclusiveGroup: filterGroup }
    Button { exclusiveGroup: filterGroup }
to
    ButtonGroup { id: filterGroup }
    Button { ButtonGroup.group: filterGroup }
    Button { ButtonGroup.group: filterGroup }

SplitView

You need at least Qt 5.13 and QtQuick.Controls 2.13. On older systems this might not be supported; you can use RowLayout/ColumnLayout instead if you want to lower the system requirements.

SpinBox

The interface in Qt Quick Controls 2 is totally different, especially in how decimals are supported. You have to rewrite the whole part following the new documentation.

TableView

You need at least Qt 5.12 and QtQuick 2.12. Note:
- It is from Qt Quick 2, not Qt Quick Controls 2.
- It doesn't support a column header.
- By default it only supports one column. To support more columns, you have to change your table model in C++.
- The interface is totally different.

Even though we have a TableView, it is far from useful. In most cases it is not even better than a ListView. If possible, use ListView instead.

Examples

This page was last edited on 19 October 2019, at 11:00. Content is available under Creative Commons License SA 4.0 unless otherwise noted.
https://techbase.kde.org/Development/Tutorials/Graphics/Migrate_Qt_Quick_Controls_1
With a few examples from AWS Lambda, Azure Functions and OpenWhisk.

Cloud computing is going through an interesting evolution. It has gone from a platform for deploying virtual machines to planet-scale systems with extensive collections of data storage, analysis and machine learning services. Most recently we have seen the emergence of "cloud native" computing, which in its most basic form involves a design pattern of microservices, where big applications are decomposed into hundreds of basic stateless components that run on clusters managed by tools like Kubernetes, Mesos and Swarm.

Serverless computing is the next step in this evolution. It addresses the following challenge. Suppose I have a small computation that I want to run against some database at the end of each month. Or suppose I want the equivalent of a computational daemon that wakes up and executes a specific task only when certain conditions arise: for example, when an event arrives in a particular stream, when a file in a storage container has been modified, or when a timer goes off. How do I automate these tasks without paying for a continuously running server?

Unfortunately, the traditional cloud computing infrastructure model would require me to allocate computing resources such as virtual machines or a microservice cluster, and my daemon would be a continuously running process. While I can scale my cluster of VMs up and down, I can't scale it to zero without my daemon becoming unresponsive. I only want to pay for my computing WHEN my computation is running.

This is not a totally new idea. Paying only for the compute that we use goes back to early timesharing and persists with compute-hour "allocations" on supercomputers today. And there are cloud services, such as Azure Data Lake Analytics, Amazon Kinesis or the AWS API Gateway, that charge you only for the computation you use or the data you move, and they do not require you to deploy server infrastructure to use them.
However, there is something deeper going on here, and it has to do with triggers and another computational paradigm called "Function-as-a-Service" (FaaS). Unlike the serverless examples above, which depend upon me invoking a specific well-defined service, FaaS allows a cloud user to define their own function, then "register" it with the cloud and specify the exact events that will cause it to wake up and execute. As mentioned above, these event triggers can be tied to changes in the state of a storage account or database, events associated with queues or streams of data from IoT devices, or web API invocations coming from mobile apps. Triggers can even be defined by steps in the execution of a workflow. And, of course, the user only pays when and while the function is executing.

Figure 1. Function, triggers and output concept. (from markoinsights.com)

There have been two conferences that focused on the state of serverless computing. The paper "Status of Serverless Computing and Function-as-a-Service (FaaS) in Industry and Research" by Geoffrey Fox, Vatche Ishakian, Vinod Muthusamy and Aleksander Slominski provides an excellent overview of many of the ideas, concepts and questions surrounding serverless computing that surfaced at these conferences. In that paper they refer to an IBM tutorial that defines serverless FaaS as:

1. short-running, stateless computation
2. event-driven applications
3. scales up and down instantly and automatically
4. based on charge-by-use

Notice that there is a distinction between FaaS and serverless FaaS, and it has to do with item 4 above. A good example of this is Google's App Engine, which was arguably the first FaaS available from a commercial cloud. In its current form, App Engine can run in one of two modes. In its standard mode, your applications run in a sandbox and you are charged only when the app is running.
In the "flexible" mode, you deploy a container and then specify the compute infrastructure needed in terms of CPU power, memory and disk, and you are charged by the hour. You could say that App Engine running in flexible mode is server-lite, but clearly not fully serverless, while standard mode is truly serverless.

What are the serverless FaaS choices?

There are a number of FaaS implementations. Some of these are used for research while others are commercial products. The status report refers to many of these, and the slides for the workshop are online. A good example of the research work is OpenLambda from the University of Wisconsin, first introduced at HotCloud '16. Based on this experience, the Wisconsin team described Pipsqueak, an experiment to reduce the deployment latencies caused by Python library initializations. Ryan Chard described Ripple, which is an excellent example of distributing event trigger management from the source to the cloud. Ripple has been designed and used for several significant science applications, including beamline science (ALS and APS). Another related technology is if-this-then-that (IFTTT), a service for chaining together other services.

Two other open source projects raise an interesting question about what is behind the curtain of serverless. Funktion and Fission are both implementations of FaaS on top of Kubernetes. As we discuss serverless computing, we must remember that there is a "server" somewhere. The basic infrastructure for serverless computing needs to run somewhere as a persistent service, and hence a microservice platform like Kubernetes is a reasonable choice. This relates to the economics of serverless computing, and we return to that at the end of this report.

The most commonly referenced open source FaaS service is Apache OpenWhisk, which was developed by IBM and is available on their Bluemix cloud as a service. The other commercial services include Google Cloud Functions, Microsoft Azure Functions and Amazon Lambda.
At the end of this article we will show some very simple examples of using some of these systems.

When can FaaS replace a mesh of microservices for building an app?

The status paper also makes several other important observations. For example, it notes that while serverless is great for the triggered examples described above, it is not good for long-running or stateful applications like databases, deep learning training, heavy stream analytics, Spark or Hadoop analysis and video streaming. In fact, many large-scale cloud-native applications that have thousands of concurrent users require continuous use of massive networks of microservices, and these will not be based on serverless FaaS. However, there may be many cases where a user-facing application running in the cloud could be easily implemented with serverless FaaS rather than as a big microservice deployment. What we do not know is where the cross-over point lies between a serverless FaaS implementation of an app and a full Kubernetes-based massive microservice deployment. This relates to the economics of FaaS (discussed briefly at the end of this article). The Cloud Native Computing Foundation has a working group on serverless computing that is addressing this topic.

There is one interesting example of using serverless FaaS to do massively parallel computing, and it is called pywren. The lovely paper "Occupy the Cloud: Distributed Computing for the 99%" by Eric Jonas, Qifan Pu, Shivaram Venkataraman, Ion Stoica and Benjamin Recht describes the concepts in pywren, which allow it to scale computation to thousands of concurrent function invocations, achieving 40 teraflops of compute performance. Pywren uses AWS Lambda in a very clever way: it serializes the computational functions, which are then passed to Lambda functions to execute. We will return to pywren in another post.

Computing at the Edge?
Perhaps the hottest topic in the general area of cloud computing is when the computing spills out of the cloud to the edge of the network. There are many reasons for this, but most boil down to latency and bandwidth. A good example is the set of use cases that motivate the Ripple system described above. In order to generate a stream of events from sensors at the edge of the network, one needs very lightweight computing that can monitor the sensors and generate the events. In many cases it is necessary to preprocess the data before sending the message to the cloud where a function will be invoked in response. In some cases, the response must return to the source sooner than a remote invocation can manage because of the latencies involved. The computing at the edge may need to execute the function there, and the actual communication with the cloud may be just a final log event or a trigger for some follow-up action.

Another possibility is that the functions can migrate to the places where they are needed. When you deploy the computing at the edge, you also deploy a list of functions that must be invoked. When the edge process starts up, it could cache some of the functions it needs locally. We expect there will be many variations on these ideas in the future.

A Tiny FaaS Tutorial

By their very nature, FaaS systems are very simple to use: you write short, stateless functions and tie them to triggers. We will take a quick look at three of these: AWS Lambda, Microsoft Azure Functions and IBM Bluemix OpenWhisk.

AWS Lambda Functions

Amazon was the first to introduce serverless functions as a service, and it is very well integrated into their ecosystem of other services. Called Lambda functions, they can be easily created in a number of standard programming languages. Lambda functions can be associated with a variety of trigger events, including changes to the state of a storage account, web service invocations, stream events and even workflow events.
We will illustrate Lambda with a simple example of a function that responds to Kinesis stream events and, for each event, adds an item to a DynamoDB table. Here is the Python code for the function that accomplishes this task:

    from __future__ import print_function
    import base64
    import boto3

    def lambda_handler(event, context):
        dyndb = boto3.resource('dynamodb', region_name='us-west-2')
        table = dyndb.Table("lambdaTable")
        for record in event['Records']:
            # Kinesis data is base64 encoded, so decode here
            payload = base64.b64decode(record["kinesis"]["data"])
            x = eval(str(payload))
            metadata_item = {'PartitionKey': x['PartitionKey'],
                             'RowKey': x['RowKey'],
                             'title': x['text']}
            table.put_item(Item=metadata_item)

There are several critical items that are not explicit here. We need to use the AWS Identity and Access Management (IAM) system to delegate some permissions to our function, and we will need copies of the Amazon Resource Names (ARNs) for this and other objects. First we create an IAM role that allows access to Kinesis streams and DynamoDB: using the IAM portal we created a role called "lambda-kinesis-execution-role" and attached two policies, "AmazonDynamoDBFullAccess" and "AWSLambdaKinesisExecutionRole". We then made a copy of the role's ARN.

The next step is to install this function into the Lambda system. To do that we put the function above into a file called "ProcessKinesisRecords.py" and zipped it. We then uploaded the zipped file to S3 (in us-west-2) in the bucket "dbglambda".
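One detail worth checking before deploying: the handler relies on Kinesis delivering each record's Data blob base64-encoded inside the event, and that wrapping and unwrapping can be exercised locally without AWS. The event fragment below mimics the structure Lambda receives:

```python
import base64

record = "{'PartitionKey':'part2', 'RowKey': '444', 'text':'good day'}"

# Mimic the event structure Lambda receives from Kinesis: the Data
# blob reappears base64-encoded under record["kinesis"]["data"].
event = {"Records": [
    {"kinesis": {"data": base64.b64encode(record.encode("ascii"))}}
]}

payload = base64.b64decode(event["Records"][0]["kinesis"]["data"])
x = eval(payload.decode("ascii"))  # the handler uses eval; json.loads on
                                   # real JSON would be the safer choice
```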
With that, we can create our function with the following boto3 call from our laptop:

    lambdaclient = boto3.client('lambda')
    response = lambdaclient.create_function(
        FunctionName='lambdahandler',
        Runtime='python2.7',
        Role='arn:aws:iam::066xx534:role/lambda-kinesis-execution-role',
        Handler='ProcessKinesisRecords.lambda_handler',
        Code={
            'S3Bucket': 'dbglambda',
            'S3Key': 'ProcessKinesisRecords.zip',
        }
    )

The final item we need is the link between this function and the Kinesis stream service. To do that we went to the Kinesis portal, created a stream called "mylambdastream" and grabbed its ARN. Creating the binding is accomplished with the following:

    response = lambdaclient.create_event_source_mapping(
        EventSourceArn='arn:aws:kinesis:us-west-2:06xxx450734:stream/mylambdastream',
        FunctionName='lambdahandler',
        BatchSize=123,
        StartingPosition='TRIM_HORIZON',
    )

We can verify the properties of the function we have created by looking at the AWS Lambda portal pages. As shown below, our lambdahandler function does indeed have our stream as its trigger.

Figure 2. AWS Lambda function dashboard showing our trigger.

Finally, we can invoke it by pushing an event to the Kinesis stream:

    client = boto3.client('kinesis')
    record = "{'PartitionKey':'part2', 'RowKey': '444', 'text':'good day'}"
    resp = client.put_record(StreamName='mylambdastream',
                             Data=bytearray(record),
                             PartitionKey='a')

Checking the DynamoDB portal will verify that the function has picked up the message from Kinesis and deposited it in the database. The full details are in the notebook "simple-kinesis-lambda-sender.ipynb".

Azure Functions

The Azure Functions portal has a large number of basic templates we can use to build our function, as shown below.

Figure 3. Azure Functions template samples

We have selected one of the Python examples, which creates a simple web service. Bringing this up on the portal, we see the code below:
    import os
    import json

    postreqdata = json.loads(open(os.environ['req']).read())
    response = open(os.environ['res'], 'w')
    response.write("hello world to " + postreqdata['name'])
    response.close()

To get this running, we must go to the "Integrate" page to provide one more detail: we set the authorization level to "anonymous", as shown below.

Figure 4. Setting the authorization level for the trigger to "anonymous".

The client is also simple and shown below:

    import json
    import requests

    data = {'name': 'goober'}
    json_data = json.dumps(data)
    r = requests.post("", data=json_data)
    print(r.status_code, r.reason)
    print(r.text)

Using Azure Functions to pull data from a queue

We now give another version of the function that reads messages from a queue and puts them in a table. There is no Python template for this one yet, so we will use JavaScript. To use this template we must first create a new storage account or use an existing one. Go to the storage account page and you will see:

Figure 5. Creating a new table and queue in a storage account.

We click on Table and create a new table and remember its name. Then we go to Queues, create a new one and remember its name. The storage explorer should now show these items, and clicking on the table name should give us a picture of an empty table. Go back to the portal main page, click "+", look for "Function App" and click create. There is a form to fill in like the one below.

Figure 6. Creating a new function.

We give it a name and allow it to create a new resource group. For the storage, we use the dropdown and look for the storage account name. (It is important that the storage account is in the same location as the function.) We click create and wait for the function to appear on the function portal page. You should see it in your resource groups. Follow that link and you will see that it is running. Go to the functions tab and hit "+". It will ask you to pick one of the templates.
At the top, where it says "language", select JavaScript and pick the template called QueueTrigger. This will load a basic template for the function. Now edit the template so that it looks like the following example. The main difference between this and the template is that we have added an output table and instructions to push three items into the table. The function assumes the items in the queue are of the form

    {'PartitionKey': 'part1', 'RowKey': '73', 'content': 'some data'}

Next we need to tie the trigger to our queue and the output to our table, so click on "Integrate" on the left. You need to fill in the form so that it points at your resources, as illustrated below.

Figure 8. The association between the storage account queue and table service and the variables in our function.

Here we have highlighted "show value" to verify that it has the right storage account. You should see your storage account in the dropdown menu; select it, then add the table name. You need to do the same for the AzureQueueStorage. Once this is done, your function is saved and the system is running, and your function should be instantiated and invoked as soon as you send it queue items. For that we have a simple Python script in a Jupyter notebook. You will need to fill in your account key for the storage account, but then you should be able to step through the rest. The notebook runs a few tests and then sends 20 items to the queue. Using the Azure storage explorer, we see the results in the table as shown below.

Figure 9. The view of the table after running the Jupyter notebook.

OpenWhisk and IBM Bluemix

OpenWhisk is the open source serverless function system developed by IBM that is also supported as a commercial service on their Bluemix cloud. Like the others, it is reasonably easy to use from the command-line tools, but the Bluemix portal provides an even easier solution. Creating a function is as easy as selecting the runtime and downloading a template.
Figure 10 below illustrates a simple function derived from the template for a Python 3 function.

Figure 10. An OpenWhisk function that decodes a dictionary input and returns a dictionary.

Notice that the parameter to the function is a Python dictionary (all OpenWhisk messages are actually JSON objects, which for Python are rendered as dictionaries). While this template can be run as a web service in its present form, it is also possible to connect it to an external trigger. To illustrate this, we connected it to a trigger that monitors activity in a GitHub account. Using the portal, it was easy to create the trigger (called "mygithubtrigger") and bind it to push and delete actions on a repository called dbgannon/whisk. All this required was an access token, which was easily available by logging in to the GitHub portal.

In the case of GitHub triggers, the event returns a massive JSON object that is a complete description of the repository. In the action "myaction" we go two levels down in the dictionary and extract the official repository description, making that the response of the action to the event. When a trigger fires, you need a rule that binds the firing to an action. We bound it to the trivial "myaction" example above. The view of the rule from the portal is below.

Figure 11. The rule view of our association of the trigger to the function.

We next added a new file to the repository. This activated the trigger, and the rule invoked the action. The portal has a nice monitoring facility; the image is below.

Figure 12. The "Monitor" view after the trigger has fired.

Finally, drilling down on the "myaction" log event, we see the description of the GitHub repository we created.

Figure 13. The output of the "myaction" function after a file was added to the GitHub repository.
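The "myaction" code itself only appears in the screenshots, but an OpenWhisk Python action of this shape is simply a main function that takes and returns a dictionary. Here is a sketch of such an action; the nested keys mirror the "two levels down" extraction described above, and the payload is a made-up fragment, not a real GitHub event:

```python
def main(event):
    """OpenWhisk entry point: receives the trigger's JSON payload as a
    dict and must return a dict (the action's response)."""
    # Go two levels down and pull out the repository description.
    description = event.get("repository", {}).get("description", "unknown")
    return {"description": description}

# Simulated invocation with a GitHub-like payload fragment.
result = main({"repository": {"description": "whisk test repo"}})
```

Invoked this way, result is {"description": "whisk test repo"}; when wired to the real trigger, the same extraction runs against the full GitHub event.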
Each of the three systems has its own way of doing that, and if we have time later we will show some examples of this capability. We have also not discussed performance or cost, which depend greatly on the rate at which your triggers fire and the amount of work in each function execution.

The economics of serverless computing are also very interesting. As we pointed out earlier, for a cloud provider to offer a FaaS serverless capability, it must be supported by actual infrastructure. How can the provider afford this if it is only charging by the second? There are two possible answers. First, if your FaaS is serving very large numbers of functions per second, then it will certainly pay for itself. But there is another consideration. Many of the big cloud providers are running vast microservice frameworks that support most, if not all, of their big internal and publicly available applications. Running a FaaS system on top of that is as easy as running any other microservice-based application. One only needs to set the cost per second of function evaluation high enough to cover the cost of the share of the underlying platform that the FaaS system is using.

For Amazon Lambda, "the price depends on the amount of memory you allocate to your function. You are charged $0.00001667 for every GB-second used." This amounts to $0.06 per GB-hour. So for a function that takes 8 GB of memory to execute, that is $0.48 per hour, which is 4.8 times greater than the cost of an 8 GB m4.large EC2 instance. Consequently, a heavily used FaaS system is a financial win, and a lightly used one may have little impact on your infrastructure. Of course, single executions of a FaaS function are limited to five minutes, so you would need 12 concurrent 5-minute executions to reach an hour of execution time. Furthermore, AWS gives you 1 million invocations (at 100 ms) each month for free, or 400,000 GB-seconds per month free. That is 111 GB-hours per month for free.
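The arithmetic behind those figures is easy to check (the roughly $0.10/hour m4.large rate is inferred from the 4.8x ratio quoted above, not an official price):

```python
PRICE_PER_GB_SECOND = 0.00001667       # AWS Lambda price quoted above

price_per_gb_hour = PRICE_PER_GB_SECOND * 3600   # ~$0.06 per GB-hour
cost_8gb_per_hour = 8 * price_per_gb_hour        # ~$0.48/hour for an 8 GB function

free_gb_seconds = 400000               # monthly free tier
free_gb_hours = free_gb_seconds / 3600.0         # ~111 GB-hours per month
```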
This is not a bad deal, and it would seem to indicate that Lambda is not yet a big drain on their infrastructure.

Acknowledgement: Thanks go to Ryan Chard for helpful comments and suggestions on a draft of this post.
https://esciencegroup.com/2017/09/06/
adno=537599-01 Thursday, September 7, 2017 Vol. 133, No. 10 Oregon, WI ConnectOregonWI.com $1 Phone: 835-8276 Fax: 835-8277 Mon., Fri. & Sat. appointment only Tues. & Thurs. 10 a.m.-6 p.m., Wed. 12 p.m.-6 p.m., in time Oregon history comes alive in new photo book SCOTT DE LARUELLE Unified Newspaper Group Back to school 126 pages. Then, once society The group will cele- members were finished brate both the release of with a massive project sift- the book and their 30th ing through items donat- anniversary with a Sept. ed by historian Florice 17 gathering. The book Paulson after her death in will be available for sale 2013, they decided to go Kids from Netherwood Knoll and Prairie View elementary schools were back to school on Sept. 5, the first day of ($25 or $30 by mail). A ahead with the project. school. Some arrived with their parents and the others got off school buses by themselves. School staff welcomed them cemetery tour of Prairie Swenson said during a and gave high fives to each other on their way to school. Mound/St. Marys Cem- recent trip to Georgia and etery will follow, with South Carolina, she saw in Oregon helping Houston A semi truck was load- ed up at J.L. Richards in Oregon Thursday, Aug. 31, with supplies for Ray Leslie to drive down to Houston to help with recovery efforts following Hurricane Har- vey. Leslie met with Uni- versity of Wisconsin-Mad- ison graduate and current Houston Texans defensive lineman JJ Watt, who orga- nized an effort through social media that drew more than $20 million in dona- tions as of Tuesday, accord- ing to a post on his Twitter account. On the web See more about JJ Watts donation drive: twitter.com/JJWatt Get Connected a better sense of the villages end- da shortly after village administrator hit with a big snow event this fall, of-year financial accounts. Trustees Mike Gracz gave an overview of the Bollig said. 
expressed support for the Oregon villages financial situation for the Trustee Jeanne Carpenter agreed Area Food Pantry, but noted they had coming five years. with postponing the decision and said approved donating $10,000 to the I certainly support the food pantry, she felt the board has been trying to Find updates and project in January, when they also but weve heard tonight about all our be a good neighbor. waived consultants fees for the con- costs that are mounting up, Bollig I do feel good that weve given links right away. struction. said. Id like to wait until the end of them $10,000 already, she said. Facebook as Oregon Observer and then LIKE us. Inn with a pool is not yet sion at the August meeting, pitality would most like- Contact Scott Girard at at the public hearing stage, though that proposal had ly operate the hotel. ungreporter@wcinet.com hansonelectronics.net adno=515703-01 when members of the pub- 64 rooms. According to the appli- and follow him on Twitter lic can weigh in before Commissioners were cation from the developer, @sgirard9. commissioners vote. That mostly pleased with the the hotel would employ 10 INTRODUCING Planning in brief CALL NOW 1-608-338-1170 For more than 50 years, Coronado has been the paint of choice the arena will apply for a for those seeking trusted quality and dependable value. A favorite Ice arena gets third permit for any events that among professional painters and homeowners alike. 70% OFF public hearing include music. A proposal to add out- The hearing is for both door volleyball courts and the second and third (final) a mini baseball field to the stage of development pro- north of the Oregon Ice posals in the village. The Village Board can make INSTALLATION Arena is back for a third public hearing this month. a final decision after the The commission asked commissions recommen- OIA manager Ben Cowan dation. to pare down the potential uses of the area, on which on a new bath or shower! 
...his previous proposal...

Letters policy
Unified Newspaper Group encourages lively public debate on issues, but it reserves the right to limit the number of exchanges between individual letter writers to ensure all writers have a chance to have their voices heard. This policy will be printed from time to time in an abbreviated form here and will be posted in its entirety on our websites.

This month, the Senior Center will be hosting its annual volunteer appreciation event. It is our opportunity to say thank you to our generous volunteers for all that they have done in the past year.

In 2016, our volunteers contributed over 6,000 hours of time. We are on track to top that number in 2017. I tend to look at that number as representing the equivalent of three full-time staff people. Considering that the center has only two full-time and five part-time staff members, having volunteers contribute the hours of three more staff positions is an amazing donation of resources, and it has a significant dollar value. In this era of tight budgets, the Senior Center struggles to add any staff hours, let alone three full-time people. Suffice it to say, without our volunteers, we would simply not be able to do everything that we do.

This year, as we prepare to say thank you to our more than 150 active volunteers, I am trying to shift my thinking from how those donated hours benefit the center and focus instead on how those hours benefit the volunteers. We do truly enjoy being on the receiving end of the volunteers' generosity, but it turns out that, as our mothers used to tell us, it is indeed better to give than receive.

On a recent morning at the center, I stopped five volunteers to talk with them about their motivation for volunteering. One was staffing our reception desk, another was helping out in the kitchen, two were volunteering in our adult day program, and the fifth had stopped by to double check on the details of a ride she was scheduled to provide to a senior who needed to get to an appointment.

"Volunteering gives me a warm, happy feeling," one told me. "It makes me feel good to give back."

Another said, "I love the thank-yous I get from the seniors I work with, but I feel like I should be thanking them for all that they teach me."

A common theme raised by volunteers was connecting with people. "Volunteering gives me an opportunity to talk with people that I otherwise wouldn't have the opportunity to talk to," said one volunteer.

"The people are the reason I keep doing this," said another, very active volunteer.

"I just love to put smiles on people's faces," said another.

A 2012 University of Michigan study showed that, especially for seniors, volunteering can decrease one's mortality risk. Simply put, that means if you are a senior, you can lower your chance of dying simply by volunteering at least 100 hours a year.

What is it about volunteering that has such an impact on seniors' health and longevity? It turns out that for seniors, volunteering provides a sense of purpose that can otherwise be lost as people leave the workforce and no longer have the responsibilities of an active parenting role.

"I started volunteering when my job was eliminated and I felt I needed to be involved in something positive," one of the volunteers said of her original motivation to donate her time and energy. It turns out this volunteer was doing just what needed to be done to reduce the depression and isolation that can be the result of a job loss or retirement.

In 2007, The Corporation for National and Community Service published "The Health Benefits of Volunteering," which summarized much of the research that had been done up to that point. It is easy to read and easy to understand, and the conclusions it draws should be very motivating to anyone who is looking for an enjoyable, low-cost way to benefit their health. Among the conclusions in the article is the fact that when people volunteer they not only help their communities, but they also experience better health. Those improved health benefits include higher functional ability, lower rates of depression and greater longevity.

Comments from our volunteers seem to speak to those conclusions. One of the volunteers that I spoke to on that recent morning at the center told me, "I just feel better when I volunteer. I forget about my aches and pains for a while, which is really nice."

Another said, "Volunteering takes me outside of myself. It makes my world bigger so I don't focus so much on myself and my worries."

This year, when we host our volunteer appreciation event at the end of September, in addition to conveying to our volunteers how valuable they are to the center, we'll point out how valuable volunteering can be to them. We will do our best not to neglect to mention any of the dozens of ways in which they contribute, but now we can give the volunteers another reason to feel good about the hours they are investing. As they make life better for those around them, they are making life better for themselves, as well.

There are many ways to volunteer in the Oregon area. If you would like to volunteer but are unsure in which direction to take your time and talents, call us at the Senior Center, and we can put you in touch with a variety of possibilities in the community. Think of it as a prescription for better health.

Rachel Brickner is the director of the Oregon Senior Center.

The Oregon Observer
Thursday, September 7, 2017. Vol. 133, No. 10. USPS No. 411-300
Periodical Postage Paid, Oregon, WI and additional offices. Published weekly on Thursday by the Unified Newspaper Group, A Division of Woodward Communications, Inc. POSTMASTER: Send Address Corrections to The Oregon Observer, PO Box 930427, Verona, WI 53593.
Office Location: 156 N. Main Street, Oregon, WI 53575
Office Hours: 9 a.m. to 3 p.m. Monday and Thursday
Phone: 608-835-6677. FAX: 608-835-0130. E-mail: ungeditor@wcinet.com
Circulation customer service: (800) 355-1892
ConnectOregonWI.com. This newspaper is printed on recycled paper.

General Manager: Lee Borkowski, lborkowski@wcinet.com
Sales Manager: Kathy Neumeister, kathy.neumeister@wcinet.com
Advertising: Dawn Zapp, oregonsales@wcinet.com
Classifieds: Diane Beaman, ungclassified@wcinet.com
Inside Sales: Monica Morgan, insidesales@wcinet.com
Circulation: Carolyn Schultz, ungcirculation@wcinet.com
News: Jim Ferolie, ungeditor@wcinet.com
Sports: Jeremy Jones, ungsportseditor@wcinet.com
Assistant Editor: Scott Girard, ungreporter@wcinet.com
Reporters: Chuck Nowlen, Bill Livick, Anthony Iozzo, Amber Levenhagen, Scott De Laruelle, Helu Wang

Unified Newspaper Group, a division of Woodward Communications, Inc. A dynamic, employee-owned media company. Good People. Real Solutions. Shared Results. Printed by Woodward Printing Services, Platteville. National Newspaper Association. Oregon Observer, Stoughton Courier Hub, Verona Press.

Subscription rates: One year in Dane Co. & Rock Co., $37. One year elsewhere, $45.

Corrections
Due to reporting errors, there were several mistakes in the Aug. 24 story previewing the Brooklyn Labor Day Truck and Tractor Pull. The event was hosted by both the Oregon Sno-Blazers and Brooklyn Sno-Hornets snowmobile clubs, and the pancake breakfast was sponsored by Monona Bank-Brooklyn. The Observer regrets the errors.
Fifth annual Grill for a Cause Sept. 16
SCOTT GIRARD, Unified Newspaper Group

The fifth annual Grill For a Cause event in Brooklyn Sept. 16 will benefit Brown Paws Rescue. The event, which has raised an increasing amount of money each year for different charities, raised $1,400 for Neighbors in Need of Assistance last year. Brown Paws Rescue is a dog adoption organization based in Waunakee. They will have some of their dogs at the event, though adoptions cannot be finalized on site, and the judges for the competition are from the organization.

The event, from 11 a.m. to 6 p.m. at Legion Park, 205 S. 1st St., includes pork chop, chicken and brat dinners for purchase, chicken bingo, a silent auction, raffles, live music and activities for kids like a bounce house. Food will be available for purchase for $5 a plate, which will include a pork chop, chicken breast or brat as well as potato or pasta salad. Live music will be provided by Keaton Unplugged and Back 40 throughout the day. There will also be a kid-specific raffle with no fee, event organizer Milly McCartney told the Observer, and she hopes that and the bounce house can add to the crowd at this year's event. "It'd be nice to try and get more families down there," she said.

The winner of the grilling contest, which anyone interested can sign up for with a $20 fee that includes a t-shirt, will choose their favorite charity for the 2018 event and receive a traveling trophy. Grillers must provide their own beef and can cook it however they want. Spectators can also vote on their favorite grill camp, which grillers can create with thematic decorations or costumes, McCartney explained. Those interested in cooking can contact McCartney at Grill4Cause@gmail.com or 212-1653. The silent auction begins Sept. 13 at Firefly Coffeehouse, 114 N. Main St. It will continue there until Friday afternoon, and run at the event site from 11 a.m. to 3 p.m. on Sept. 16.

If You Go. What: Grill For a Cause. When: 11 a.m. to 6 p.m. Saturday, Sept. 16. Where: Legion Park, 205 S. 1st St., Brooklyn. Info: facebook.com/grill4cause

For information, visit facebook.com/grill4cause. Contact Scott Girard at ungreporter@wcinet.com and follow him on Twitter @sgirard9.

September community ed, rec classes
by Kelly Petrie

Oregon School District Community Education and Recreation will hold public classes for children and adults in September. To register, visit oregonsd.org/community. For information, call 835-4097.

...6:30-8:30 p.m. Thursdays, Sept. 21-Nov. 2 (no class Oct. 12), at Oregon Middle School. Cost is $90 for all six class meetings. ...Sept. 14-Oct. 19 from 5:00-6:00 p.m. at the Oregon Senior Center. Just $45 for all 6 class meetings.

Reiki and Yoga: The Path to Healing combines meditation, yoga poses and Reiki to decrease anxiety, improve flexibility of mind and body and increase energy. For adults and older teens. Taught by Kelly Scholz from 6:15-7:15 p.m. Thursdays, Sept. 14-Oct. 19, at Netherwood Knoll Elementary. Cost is $56 for six class meetings.

Pi-Yo: Adults and older teens will enjoy a workout that combines the mind/body practices of Pilates and yoga. Modifications will challenge all skill levels. Taught by Deborah Gillitzer 4:14-5:05 p.m. Tuesdays, Sept. 12-Oct. 17 and Thursdays, Sept. 14-Oct. 19. ...Sept. 14 to Oct. 19, at the Oregon Senior Center.

Mindfulness for Runners: This workshop will help runners discover how to bring yoga-inspired strengthening, stretching, stability and balance to their running. No prior yoga experience necessary, but the class will include in-class running. Cost is $56 for six class meetings. ...Taught by Kelly Petrie from 6-8 p.m. Monday, September 11, at Prairie View Elementary. Cost is $25.

Kick Boxing is a high intensity cardio workout that burns calories, improves strength and balance and reduces stress. Led by Ida Dempich from 9-10 a.m. Saturdays. Class is $10 and reservation is required. Call 835-4097 by the Thursday before the class to reserve a spot.

Zumba is a mixture of body sculpting movements and fun dance steps that combine to give adults and older teens a great workout. Tuesday classes, taught by Deb Billitzer, run 5:15-6:15 p.m. Sept. 12-Oct. 17. Wednesday classes, taught by April Girga, run 6:30-7:30 p.m. Sept. 13-Oct. 18. Cost is $56 for six classes.

Yoga On and Off the Mat combines yoga practice and aerobic fitness into one experience for adults and older teens. Taught by Kelly Petrie 6-8 p.m. Monday, September 18, at Prairie View Elementary. Cost is $25.

Mindfulness Fundamentals: This series will explore meditation, gentle movement and other practices to help bring mindfulness into daily practice. It is designed for adults and older teens. Taught by Kelly Scholz Thursdays Sept. ...

Flow Yoga leads adults and older teens through fundamental yoga poses in rhythm with their breath. Yoga can help increase flexibility, strength and focus.

Pickleball is a fast, fun sport played on tennis courts. Outdoor play Tuesdays and Thursdays at the Oak St. Tennis Courts beginning at 6 p.m. through October. The game is free and open to teens and older adults.

The Oregon Observer: there are many ways to contact us. For general questions or inquiries, call our office at 835-6677 or email ungeditor@wcinet.com. Our website accepts story ideas, community items, photos and letters to the editor at ConnectOregonWI.com.
Births, engagements and anniversaries can also be sent to the website. Several types of items have specific emails where they can be sent directly. Advertising inquiries: oregonsales@wcinet.com. Business announcements: ungbusiness@wcinet.com. College notes/graduations: ungcollege@wcinet.com. Community news: communityreporter@wcinet.com. Upcoming events: ungcalendar@wcinet.com. Website questions: ungweb@wcinet.com. Any other news tips or questions: ungeditor@wcinet.com.

Coming up

Coloring group: The senior center will offer an adult coloring group at 12:30 p.m. the fourth Thursday of each month. Coloring materials are provided. Just come to relax your mind, tap into your creativity and spend time with others. For information, call 835-5801.

Women's lunch: The Oregon Town and Country Women's Club will host a luncheon, open to area women, at the Stoughton Country Club, 3165 Shadyside Dr., Stoughton, at 12:30 p.m. Tuesday, Sept. 12. Entertainment will be provided by area line dancers. There will also be a 50/50 raffle. Tickets are $10 per person. To make a reservation, call Sue Capelle at 835-9421.

Grill for a Cause: The annual Grill For a Cause fundraiser will be held 11 a.m. to 6 p.m. Saturday, Sept. 16, at Legion Park in Brooklyn. The featured fundraiser this year is Brown Paws Dog Rescue, and volunteers from the organization will judge the contest. Winners of the contest will determine the charity for next year. There will also be a viewers' choice award for best grill camp. Voting will be done throughout the day and winners will be announced prior to the grill competition award. The event will feature live music, Bingo, raffles and food. All proceeds will benefit Brown Paws Dog Rescue. The silent auction begins Sept. 13 at Firefly Coffeehouse, 114 N. Main St., through Sept. 15. It will continue at the Brooklyn Legion Park and ends the day of the fundraiser at 3 p.m. For information, visit brownpawsrescue.com.

Wellness Walks: The Oregon Area Wellness Coalition is sponsoring Wednesday Wellness Walks, which start at the senior center at 9 a.m. Wednesdays. People will be taking a brisk walk for 45 minutes each week, rain or shine, through October. Those interested should bring an ID and water bottle. Coffee and water will be available at the senior center after the walk. For information, call 835-5801.

AARP driving class: The AARP Smart Driver course will be held at the senior center from 11:30 a.m. to 4 p.m. Thursday, Sept. 7. The class is specifically designed for drivers age 50 and older. A light snack will be provided in the afternoon. The class is $15 for AARP members and $20 for non-members. Scholarships are available. For information, and to register, call 835-5801.

Assisted living program: Visit the senior center at 6:30 p.m. Tuesday, Sept. 12, for a program about assisted living care with information provided by Avalon Assisted Living Community, Main Street Quarters, Sienna Crest Assisted Living and BeeHive Homes. The presentation will cover what is independent and assisted living, what is an RCAC and CBRF, and financial options for assisted living care. Refreshments will be served. The program is free but registration is requested. For information, call 835-5801.

Mix-it-up Fridays: The library will host different activities, like art, dance, STEM and more, from 10-10:45 a.m. Fridays starting Sept. 15 through Oct. 14. The events are designed for ages 2-6 and registration is not required. For information, call 835-3656.

Churches

All Saints Lutheran Church, 2951 Chapel Valley Rd., Fitchburg. (608) 276-7729. Interim pastor. SUNDAY: 8:30 a.m. classic service; 10:45 a.m. new song service.

Brooklyn Lutheran Church, 101 Second Street, Brooklyn. (608) 455-3852. Pastor Rebecca Ninke. SUNDAY: 9 a.m. Holy Communion; 10 a.m. Fellowship.

Brooklyn Community United Methodist Church, 201 Church Street, Brooklyn. (608) 455-3344. Pastor George Kaminski. SUNDAY: 9 a.m. Worship (Nov.-April); 10:30 a.m. Worship (May-Oct.).

Community of Life Lutheran Church, PO Box 233, Oregon. (608) 286-3121, office@communityoflife.us. Pastor Jim McCoid. SUNDAY: 10 a.m. Worship at 1111 S. Perry Parkway, Oregon.

Faith Evangelical Lutheran Church, 143 Washington Street, Oregon. (608) 835-3554. Interim pastor. SUNDAY: 9 a.m. Worship; Holy Communion 2nd & last Sundays.

First Presbyterian Church, 408 N. Bergamont Blvd. (north of CC), Oregon. (608) 835-3082, fpcoregonwi.org. Pastor Kathleen Owens. SUNDAY: 10 a.m. Service; 10:15 a.m. Sunday School; 11 a.m. Fellowship; 11:15 a.m. Adult Education.

Fitchburg Memorial UCC, 5705 Lacy Road, Fitchburg. (608) 273-1008. Rev. Sara Thiessen. SUNDAY: 9:30 a.m. Family Worship.

Good Shepherd Lutheran Church ELCA. Central Campus: Raymond Road and Whitney Way. SATURDAY: 5 p.m. Worship. SUNDAY: 8:15, 9:30 and 10:45 a.m. Worship. West Campus: corner of Hwy. PD and Nine Mound Road, Verona. (608) 271-6633. SUNDAY: 9 and 10:15 a.m., 6 p.m. Worship.

Hillcrest Bible Church, 752 E. Netherwood, Oregon. Eric Vander Ploeg, Lead Pastor. (608) 835-7972. SUNDAY: 8:30 a.m. worship at the Hillcrest Campus and 10:15 a.m. worship with Children's ministries, birth-4th grade.

Holy Mother of Consolation Catholic Church, 651 N. Main Street, Oregon. Pastor: Fr. Gary Wankerl. (608) 835-5763. holymotherchurch.weconnect.com. SATURDAY: 5 p.m. Worship. SUNDAY: 8 and 10:15 a.m. Worship.

Peoples United Methodist Church, 103 North Alpine Parkway, Oregon. Pastor Jason Mahnke. (608) 835-3755. Communion is the 1st & 3rd weekend. SATURDAY: 5 p.m. Worship. SUNDAY: 9 a.m. Worship and Sunday school; 10:30 a.m. Worship.

St. John's Lutheran Church, 625 E. Netherwood, Oregon. Pastor Paul Markquart (Lead Pastor). (608) 835-3154. WEDNESDAY: 6 p.m. Worship. SATURDAY: 5 p.m. Worship. SUNDAY: 9 a.m. Worship.

Vineyard Community Church, Oregon Community Bank & Trust, 105 S. Alpine Parkway, Oregon. Bob Groth, Pastor. (608) 513-3435, welcometovineyard.com. SUNDAY: 10 a.m. Worship.

Zwingli United Church of Christ, Paoli, at the intersection of Hwy. 69 & PB. Interim pastor Laura Crow. (608) 845-5641. SUNDAY: 9:30 a.m. Worship.

Community calendar

Thursday, September 7: 6-7:45 p.m., Sew What: snap bags, library, 835-3656. 6:30-7:30 p.m., Lifetree Cafe, Headquarters, 101 Concord Dr.
Sunday, September 10: 1-5 p.m., Musical Jam, Ziggy's, 135 S. Main St., 228-9644.
Tuesday, September 12: 10 a.m., Teetering Toddlers Storytime (12-36 months), library, 835-3656. 11 a.m., Bouncing Babies Storytime (0-18 months), library, 835-3656. 12:30 p.m., Women's lunch ($10), Stoughton Country Club, 3165 Shadyside Dr., 835-9421. 2-6 p.m., Oregon Farmers Market, Dorn True Value Hardware parking lot, 131 W. Richards Rd.
Wednesday, September 13: 10 a.m., Everybody Storytime (ages 0-6), library, 835-3656. 3:30-5:30 p.m., Computer Class: Protecting You and Your PC ($20), senior center, 835-5801.
Thursday, September 14: 1 p.m., Movie matinee: Gifted, senior center, 835-5801. 6:30-7:30 p.m., Lifetree Cafe, Headquarters, 101 Concord Dr.
Friday, September 15: 10-10:45 a.m., Mix-it-up activity (ages 2-6), library, 835-3656. 1 p.m., Movie Matinee: Gifted, senior center, 835-5801.
Saturday, September 16: 9 a.m., Autumnal Equinox Scavenger Hunt, starts at library, 835-3656. 9 a.m. to 3 p.m., Madison Speedway Marketplace, 1122 Sunrise Rd., 575-4097.
Sunday, September 17: 1 p.m., Oregon Area Historical Society 30th Anniversary Celebration, senior center, 835-5801.
Monday, September 18: 6:30-8 p.m., Estate Planning workshop (free), Krause Donovan Estate Law Partners, 116 Spring St., 268-5751.
Tuesday, September 19: 10 a.m., Teetering Toddlers Storytime (12-36 months), library, 835-3656. 11 a.m., Bouncing Babies Storytime (0-18 months), library, 835-3656. 2-6 p.m., Oregon Farmers Market, Dorn True Value Hardware parking lot, 131 W. Richards Rd.

Support groups

Alcoholics Anonymous meeting, First Presbyterian Church, every Monday and every Friday at 7 p.m.
Relationship & Divorce Support Group, State Bank of Cross Plains, every other Monday at 6:30 p.m.
Caregiver Support Group, Oregon Area Senior Center, third Monday of each month at 9 a.m.
Veterans Group, Oregon Area Senior Center, every second Wednesday at 9 a.m.
Weight-Loss Support Group, Oregon Area Senior Center, Monday at 3:30 p.m.
Dementia Caregivers Supper and Support, every fourth Wednesday of every month from 6-7:30 p.m., Sienna Crest, 845 Market St., Suite 1.
Diabetes Support Group, Oregon Area...
Navigating Life Elder Support Group, Peoples United Methodist Church, 103 N. Alpine Pkwy., every first...

Senior center
Monday, September 11 menu: Chicken Salad on Wheat Bun, Carrot Sticks, Marinated Tomatoes, Fruit Cup.
Monday, September 11 activities: Morning: Reflexology, Foot Care. 9:00 CLUB. 10:30 StrongWomen. 11:45 Eyeglass Adjustments.

Community cable listings
Village of Oregon Cable Access TV channels: WOW #983 and ORE #984. Phone: 291-0148. Email: oregoncableaccess@charter.net. Website: ocamedia.com. Facebook: ocamediawi. New programs daily at 1 p.m.
...Senior Center, second Thursday of each month at 1:30 p.m. ...Monday at 7 p.m.

New programs repeat at 4, 7 and 10 p.m. and 1, 4, 7 and 10 a.m.
Thursday, Sept. 7. WOW: Sounds of Summer: Red Hot Horn Dawgs (of Aug. 15). ORE: Friday Night Live: Panther Football vs. Monona Grove (of Sept. 1).
Friday, Sept. 8. WOW: Oregon Community Band (of June 6). ORE: BKE Presents: Alice in Wonderland (of March 3).
Saturday, Sept. 9. WOW: Oregon Community Band (of June 20). ORE: RCI Fine Arts Week (of April 13).
Sunday, Sept. 10. WOW: Community of Life Lutheran Church Service. ORE: OHS Fine Arts Week: Music Composition (of April 12).
Monday, Sept. 11. WOW: Village of Oregon Board Meeting, LIVE 5 p.m. ORE: Oregon School District Board Meeting, LIVE 6:30 p.m.
Tuesday, Sept. 12. WOW: Oregon Community Band (of June 27). ORE: OHS Panther Volleyball vs. Stoughton (of Sept. 7).
Wednesday, Sept. 13. WOW: Oregon Chamber of Commerce Meeting: Bob Lindmeier. ORE: OHS Panther Soccer vs. McFarland (of Sept. 9).
Thursday, Sept. 14. WOW: Village of Oregon Board Meeting (of Sept. 11). ORE: Oregon School District Board Meeting (of Sept. 11).

Senior center
Monday, September 11: Sugar Cookie; VO: Egg Salad on Bun (menu). 1:00 Get Fit. 1:30 Bridge. 3:30 Weight Loss Support.
Tuesday, September 12 menu: *Ham and Swiss Croissant (Low Salt: Turkey Croissant), Kidney Bean Salad, Banana, Lemon Bar. VO: Cheese Sandwich.
Tuesday, September 12 activities: 8:30 Zumba Gold Advanced. 9:30 Wii Bowling. 9:45 Zumba Gold. 10:30 Parkinson's Exercise. 12:30 Sheepshead. 12:30 Shopping at Pick-N-Save. 5:30 StrongWomen. 6:30 Navigating Assisted Living.
Wednesday, September 13 menu: *Roast Pork with Gravy (Chicken Breast with Gravy), Greens with French Dressing, Corn, Fruit Cocktail, Whole Wheat Bread, Vanilla Pudding. VO: Veggie Patty.
Wednesday, September 13 activities: 9:00 CLUB. 9:00 Wednesday Walkers. 1:00 Get Fit. 1:00 Euchre. 3:30 Protecting You and Your PC.
Thursday, September 14 menu: **My Meal, My Way Lunch at Ziggy's Smokehouse (drop in between 11:30 a.m. and 1 p.m.).
Thursday, September 14 activities: Morning: Chair Massage. 8:30 Zumba Gold Advanced. 9:00 Pool Players. 9:00 COA. 9:45 Zumba Gold. 10:30 StrongWomen. 12:30 Shopping at Bill's. 1:00 Cribbage. 1:00 Movie: Gifted. 5:30 StrongWomen.
Friday, September 15 menu: Biscuits and Gravy, Hash Brown Patty, Tomato Juice (Low Salt: 3 Tomato Slices), Mandarin Oranges, Cinnamon Roll. VO: Spinach and Cheese Quiche. SO: Harvest Salad.
Friday, September 15 activities: 9:00 CLUB. 9:00 Gentle Yoga. 9:30 Blood Pressure. 1:00 Get Fit.
*Contains Pork

Look for the Helpers
"Religion that God our Father accepts as pure and faultless is this: to look after orphans and widows in their distress and to keep oneself from being polluted by the world." James 1:27 NIV

Fred Rogers, the creator and host of Mister Rogers' Neighborhood, reported that his mother had once said that whenever something horrible happens, something tragic or catastrophic, to always look for the helpers. They may be on the sidelines, or even behind the scenes, but they will always be there, and this gives us reason for hope. No matter how many times we see these horrific terror attacks which kill innocent people, among them often children, we will see the helpers rushing to the scene to do whatever they can. Natural disasters are the same. People from around the world will offer their time and money, and often their very lives, to help others, and this should give us hope for humanity. Despite our pettiness, our pugnacious tendencies and our downright depravity, most of us want to be decent human beings, and one way we can be decent and good is by helping our fellow human beings in need. You don't have to look far for someone who needs your help. The next time you are tempted to write off humanity as vile and totally depraved, look for the helpers, and consider being one yourself.
Christopher Simon

Sports
Jeremy Jones, sports editor: 845-9559 x226, ungsportseditor@wcinet.com. Anthony Iozzo, assistant sports editor: 845-9559 x237, sportsreporter@wcinet.com. Thursday, September 7, 2017.

A serving of confidence: Panthers look to be more consistent
JEREMY JONES, Sports editor
...Bychowski, Boerigter... with the loss, while the Silver Eagles improved to 3-0, 1-0. The Panthers also moved the ball well on their opening possession...

Girls XC: Panthers take 13th
Continued from page 7
"Clara came to every one of our summer runs. Six a.m. every morning, Monday through Friday, there's Clara Hughes working her tail off. It's nice to see those kids rise and every one else to see what that work does."
Sun Prairie placed its top five within the first nine spots, and all seven of its varsity scorers in the top 21, to win the meet with a gaudy score of 26. Cardinal senior Katie Rose Blachowicz won the race in 18:31. Wisconsin Dells finished... Bastian, who is coming off a strong track and...

...ond runner. He finished 81st in 18:34. Seniors Connor Brickley and Tait Baldus both crossed the finish line in the 19-minute range. Brickley nearly broke 19 minutes, finishing 99th in 19:01. Baldus, meanwhile, crossed the finish line 114th in 19:48. Freshman Brenden Dieter was the Panthers' final varsity scorer, finishing 120th in 20:07. ...was just too balanced, as 52 seconds separated the Regents' top five runners. That is compared to a 1:27 gap between Middleton's first and fifth runners. Madison West finished with 65 points and Middleton had 70. Madison La Follette took third with a 106. Big Eight schools made up four of the top five schools, with Sun Prairie taking third with 170.
...field season after an injury-marred 2016 cross country campaign, (21:56) and Frank (21:57) crossed the line in 61st and 62nd place. Sophomore Bryanna Salazar was the fifth runner, taking 89th place in 23:03. Juniors Julie Bull and Kaity Kliminski both ran on varsity but did not count toward the final score. ...a distant second, 70 points behind, with a 96. Big Eight rival Middleton (103) rounded out the top three schools. Two seconds separated freshman Clara Hughes, senior Bree Bastian and sophomore Zoe Frank. Hughes was the team's second runner, finishing 60th in 21:55.

Juniors Will Oelke and Gabe Karr also competed but did not count toward the final varsity team score. Only Monona Grove (115) breaking things up with its fourth-place finish. Oregon finished 18th out of 20 schools with a team score of 494. As a whole, the Panthers had eight individuals set new personal records for themselves: Ben...

"Verona is always a good test to see where a team stands in the Badger Conference," Haakenson said. "Monona Grove and Monroe both had strong performances and will be challenging to compete against."

Volleyball... Boys soccer...
Oregon traveled to the Balance and Believe Invitational Wednesday at Blackhawk Country Club. The results will be in next week's Observer. The Panthers travel to Pleasant View Golf Course at 11:30 a.m. Saturday for the Middleton invite and travel to rival Stoughton at 3:30 p.m. Wednesday, Sept. 13, at Coachman's Golf Resort.

...orderly conduct and unlawful use of a computerized communication system after he allegedly sent threatening text messages to a 62-year-old Oregon woman. The man also showed up at the...

July 29
11:15 p.m. A 27-year-old woman... the disturbance. ...room. The downstairs neighbor reported hearing profanities yelled by both. The man left before police arrived.

Public notices

...Parcel No. 165-0509-121-1021-1. A copy of the amended General Development Plan and amended Specific Implementation Plan is on file at the office of the Village Clerk. Office hours of the Clerk are 7:30 a.m. to 4:30 p.m., Monday through Friday. Subsequent to the hearing, the Commission intends to deliberate and act upon the request. Any person who has a qualifying disability as defined by the Americans with Disabilities Act that requires the meeting or materials at the meeting to be in an accessible location or format must contact the Village Clerk at (608) 835-3118, 117 Spring Street, Oregon, Wisconsin, at least twenty-four hours prior to the commencement of the meeting so that any necessary arrangements can be made to accommodate each request. Peggy S.K. Haag, Village Clerk. Published: August 31 and September 7, 2017. WNAXLP

OREGON SCHOOL DISTRICT BOARD OF EDUCATION
HELPING STUDENTS ACQUIRE THE SKILLS, KNOWLEDGE, AND ATTITUDES NEEDED TO ACHIEVE THEIR INDIVIDUAL POTENTIAL (FROM OREGON SCHOOL DISTRICT MISSION STATEMENT)
DATE: MONDAY, SEPTEMBER 11, 2017. TIME: 6:30 PM. PLACE: OSD INNOVATION CENTER, OHS, 456 NORTH PERRY PARKWAY
Order of Business
E. DISCUSSION ITEMS
7:10 1. Committee Reports: a. Policy; b. Vision Steering
F. INFORMATION ITEMS
7:15 1. Learning Pathways Update
7:30 2. Back-to-School Update
7:35 3. Gorman Building/Netherwood Elementary School Certified Survey Map
7:45 4. Superintendent's Report
G. CLOSING
7:50 1. Future Agenda
7:55 2. Check Out
H. WORK SESSION
8:00 1. Board/DO May 18 Workshop Activity
I. CLOSED SESSION
9:00 1. Superintendent and Administrative Contracts/Mid-year Evaluations. Consideration of Adjourning to Closed Session on Item I.1 as Provided Under Wisconsin Statutes 19.85 (1) (c)
J. ADJOURNMENT
Go to: meetings/agendas for the most updated version agenda.
Published: September 7, 2017. WNAXLP

NOTICE OF ORDINANCE, VILLAGE OF OREGON, DANE COUNTY
ORDINANCE ADOPTING AMENDMENTS TO COMPREHENSIVE PLAN
PLEASE TAKE NOTICE that on August 7, 2017, the Village of Oregon, Dane County, Wisconsin, adopted Ordinance No. 17-11, entitled "Ordinance to Adopt Amendments to the 2004 Village of Oregon Comprehensive Plan, Relating to Planning and the adoption of a Comprehensive Plan" (the "Ordinance"). The Ordinance adopts an amendment to the Village of Oregon Comprehensive Plan, including amendments to the Future Land Use Map, changing future land use categories for several lo...
Published: September 7, 2017. WNAXLP

115 Cemetery Lots & Monuments
CEMETERY PLOT in Verona/St. Andrew's in Section Q26B. Plots priced $600, asking $375. Will cover cost of transfer. Call 608-609-9965.

143 Notices
ARONIA BERRIES: You pick, Friday, Saturday, Sunday 8-4, or already picked (call ahead, 608-843-7098). 18235 W Emery Rd., Evansville.

402 Help Wanted, General
NOW HIRING: Econoprint is looking for a part-time, take-charge person in our fulfillment/shipping department. We need a quick learner who is self-motivated and takes initiative. We have flexible daytime hours M-F within a window of 9:00 a.m.-3:00 p.m., approximately 3-5 hours per day. No experience necessary, but basic computer knowledge and accuracy are a must. Responsibilities include picking, packing and shipping fulfillment orders, inventory management and professional communications, both written and verbal. Econoprint is also looking for an on-call courier to fill in as needed, to make deliveries in Madison and the surrounding...
PHONE SALES associates needed, Champion. No cold calls; commissions paid daily. For more information call 920-234-0203.

449 Driver, Shipping & Warehousing
FEED MILL attendant/driver. Full-time positions, M-F 7:30 a.m.-4 p.m. Good benefits package. Warehouse, general labor and deliveries. CDL required. Email resume to David@middletoncoop.com or mail to Middleton Coop, C/O David, PO Box 620348, Middleton, WI 53562-0348.
TRUCK DRIVER/MERCHANDISER: ...

548 Home Improvement
A&B ENTERPRISES: Light construction, remodeling. No job too small. 608-835-7791.
HALLINAN PAINTING, WALLPAPERING: Great summer rates. 35+ years professional. Interior-exterior. Free estimates. References/insured. Arthur Hallinan, 608-455-3377.

646 Fireplaces, Furnaces/Wood, Fuel
SEASONED SPLIT OAK, hardwood. Volume discount. Will deliver. 608-609-1181.

652 Garage Sales
EVANSVILLE: 351 S Madison St., 9/7-9/8 7 a.m.-6 p.m., 9/9 7 a.m.-noon. All household items from top to bottom. TVs, freezer, entertainment center, glassware, clothes, bedding, bedframe, tables, storage cabinets, etc. All items must go!
OREGON: 130 Cell Ct., Thurs. 9/7, ...

GREENWOOD APARTMENTS: Apartments for seniors 55+, currently has 1- and 2-bedroom units available starting at $795 per month; includes heat, water and sewer. Located at 139 Wolf St., Oregon, WI 53575. 608-835-6717.
HEATED, CLEAN shop space, sub-leasing 3-year term, $1,650 a month. 4,700 sq. ft., 2 large overhead doors, utilities not included, Oregon area. Call Mike for details, 608-259-6294. Sublease to start 10-1-17.
OREGON: 2-bedroom in quiet, well-kept building. Convenient location.
Includes all CLEANING HELP wanted for an appre- ing areas.The position requires lifting of Looking for a person to drive and stock 4pm-7pm, Fri-9/8 7am-5pm, Sat-9/9 appliances, A/C, blinds, private parking, ciative 2 person household. 608-513- RECOVER PAINTING Offers carpentry, boxes, interacting with customers and our products on shelves in the grocery drywall, deck restoration and all forms of 7am-2pm. Man-cave collectibles, Harley laundry, storage. $200 security deposit. 2893. a good driving record. Apply in person stores we deliver to. Grocery store expe- painting Recover urges you to join in the Davidson Hot Wheels, Cats OK. $690/month. 608-219-6677 ENTRY LEVEL Service Technician posi- or send your resume and cover letter to rience helpful. 35-40 hours per week. fight against cancer, as a portion of every Diecast Nascar Models, Household, tion available. Full/part-time, no expe- jobs@econoprint.com M-F with few Saturdays's during holiday job is donated to cancer research. Free antiques, Dickens Village Army uniforms STOUGHTON TOWNHOUSE rience necessary, will train on the job. weeks. No CDL required. Call or email estimates, fully insured, over 20 years of and stuff, vacuum cleaner, lots of Harley 2 Bedroom, 1.5 Bath All appliances PART-TIME MERCHANDISER with experience. Call 608-270-0440. T-shirts including W/D FF Laundry C/A Send inquiries to: Service Technician, PO Smart Source, placing ads in stores Ter- Darrell at L&L Foods 608-514-4148 or Box 617 Monroe, WI 53566 dmoen@landfoods.com STOUGHTON-ATT: SPORTS Fans Nel- Basement Attached garage. $920 ritory includes Madison South, Stough- 554 Landscaping, Lawn, son Cards Sale 3166 County Rd A. Sept Month No pets. No smoking. EXPERIENCED AG Mechanic need- ton, Cottage Grove, Monona and sur- 452 General Tree & Garden Work 835-8806 ed. Full-time position, overtime after 40 rounding area. Flexible hours, reliable 7-8, 3pm-6pm, Sept 9, 8am-2pm. Lots of hours. Excellent benefit package. 
Send transportation needed, XP Windows or OFFICE CLEANING in Stoughton Mon- POWERWASHING HOMES, businesses, other misc. Partial list on Craigslist VERONA 2 Bedroom Apartment $820. inquiries to Service Technician, PO Box above computer. Please contact Kathy at: Fri 5pm-9pm. Visit our website: www. sheds, free estimates! Fast and efficient. VERONA. 412 Rita Ave. Wed-Sat Available Now and Sept 1 Small 24 unit 617, Monore WI 53566 kjlasarge@charter.net capitalcityclean.com or call our office: Also deck staining. GreenGro Design. 7:30am-5pm. Household, collectibles, building. Includes heat, hot water, water 608-831-8850 608-669-7879. hutch, clothing, Monster High collection, & sewer, off-street parking, fully carpeted, SNOW PLOWING tools, tablesaw, jointer, fishing tackle. dishwasher and coin operated laundry 516 Cleaning Services Residential & Commercial Service Technician Wanted and storage in basement. Convenient to TORNADO CLEANING SERVICES Fully Insured. 696 Wanted To Buy Madison's west side. Call KC at 608-273- 608-873-7038 or 608-669-0025 0228 to view your new home. LLC- Your hometown Residential Clean- WE BUY Junk Cars and Trucks. Honey Wagon Services Inc. is looking for a full-time ing Company. 608-873-0333 or garth@ 602 Antiques & Collectibles We sell used parts. 720 Apartments garthewing.com Monday thru Friday 8am-5:30pm. service technician. Qualifications to include a current, Newville Auto Salvage, 279 Hwy 59 ROSEWOOD APARTMENTS for Seniors COLUMBUS ANTIQUE MALL & valid Class B CDL drivers license with tanker endorsement or CHRISTOPHER COLUMBUS MUSEUM Edgerton, 608-884-3114 55+. 1 & 2 bedroom units available ability to obtain, customer service skills, problem solving "Wisconsin's Largest Antique Mall"! 705 Rentals starting at $795 per month. Includes Customer Appreciation Week heat, water and sewer. Professionally skills and a willingness to learn. We offer great pay, health and dental insurance, and 401K. 
B & R PUMPING 20% DISCOUNT Sept 4-10 Enter daily 8am-4pm 78,000 SF GARAGE PARKING/STORAGE- Ore- managed. Located at 300 Silverado Drive, Stoughton, WI gon. One stall garage space with opener Please mail a resume to SERVICE LLC 200 Dealers in 400 Booths for $90/mo. on S Perry Pkwy. Great for 53589 608-877-9388 Third floor furniture, locked cases storage or an extra vehicle. Call 608-255- Dave Johnson 750 Storage Spaces For Rent adno=537172-01 Columbus, WI 53925 P.O. Box 139 (608) 835-8195 920-623-1992 DANE COUNTYS MARKETPLACE. ALL SEASONS SELF STORAGE We recommend septic Road Reconstruction Hwy 60 & 16 in City The Oregon Observer Classifieds. Call 10X10 10X15 10X20 10X30 Stoughton, WI 53589 pumping every two years 873-6671 or 835-6677. Security Lights-24/7 access BRAND NEW OREGON/BROOKLYN Increase Your sales opportunitiesreach over 1.2 million households! Credit Cards Accepted Advertise in our Wisconsin Advertising Network System. FREE TWO-DAY VACATION CALL (608)444-2900 For information call 835-6677. Chula Vista Resort & Waterpark, WI Dells AGRICULTURAL/FARMINGSERVICES Paid CDL Training Be a truck driver in Wisconsin. Hired on the GOT LAND? Our Hunters will Pay Top $$$ To hunt your land. first day. Its a job, not a school. You get paid while you learn. Sun Valley Apartments PAR Concrete, Inc. Call for a Free info packet & Quote. 1-866-309-1507- (CNOW) 3620 Breckenridge Ct #8, Fitchburg, WI 53713 CampLeasing.com (CNOW) 608-271-6851 liveatsunvalley.com Driveways MISCELLANEOUS Large 1, 2, &3 bedroom apartments. Nicely decorated and Floors HELP WANTED- TRUCK DRIVER Stop OVERPAYING for your prescriptions! SAVE! Call our li- censed Canadian and International pharmacy, compare prices priced just right. New kitchen cabinets and counter tops. New Patios CDL A or B drivers needed to transfer vehicles to and from vari- bathroom vanities and countertops. Beautiful park-like setting. 
adno=535689-01 ous customer locations throughout U.S.-No forced dispatch- and get $25.00 OFF your first prescription! CALL 1-866-936- Sidewalks Weekly Special: 2 Bdrm 2 bath $895 We specialize in connecting the dots and reducing deadhead. 8380 Promo Code CDC201725 (CNOW) Decorative Concrete adno=509470-01 adno=538425-01 at and attach letter of interest and UNION ROAD STORAGE resume to your application by September 11, 2017. Minority candidates are strongly encouraged to apply. 10x10 - 10x15 10x20 - 12x30 1350 S. Fish Hatchery Road, Oregon (608) 835-0551 24 / 7 Access Security Lights & Cameras Credit Cards Accepted 608-835-0082 1128 Union Road Oregon, WI adno=537704-01 adno=430106-01 by the Oregon Police Department. Some evening hours changed because of holiday work sched- required. The application and draft job description are ules. Call now to place your ad, 873-6671 Mail or email resum to: available on the Village website:, or 835-6677. Carnes Company P.O. Box 930040, Verona, WI 53593 hr@carnes.com and at the Village Clerks Office, Village of Oregon, 117 Spring Street, Oregon, WI 53575. For full consideration return a completed Village application, letter of interest and resume to Lisa Novinska at the same address or by email to lnovinska@vil.oregon.wi.us no later than adno=537870-01 Material Handlers - BUCK NAKED TM UNDERWEAR! adno=537279-01 70% 0% OFF O groups in the present day, At right, construction (but) its just different; its a workers peer out of open product of our times. Then, windows on the Nether- people were out on their wood Building in this 1898 front porches. photograph. The men were Swenson said as mem- Italian workers who stopped bers searched through pho- in Oregon to do the project Installation of New Windows! tos for what to include, the on their way home to Chica- go from building projects in New orders only. Minimum purchase required. Does not include material costs.* process kind of brought back little stories in our Madison. 
Through the years, many businesses were locat- 60 Months R i $500 iin upgrades Receive rad mind of a particular place ed in the building, which still with your window purchase!* or person. When she and O% Interest* her daughter looked at an stands today. 1880s layout of the village, Of course, the nature history, too. said. It will be interest- t h ey w e r e a s t o u n d e d of those business was so Woodworth encouraged ing to get the response and *Visit for full offer details how many businesses there much different than today, Oregon residents or for- feedback from people. We om were. she said. I hope (the book) mer residents to share certainly hope they enjoy it adno=538257-01 will make people think their familys files and we enjoyed doing it. about their history, and photographs with the soci- maybe want to come and ety for future projects. Email Unified Newspaper The Fitchburg & Oregon Area Senior Centers are proud see the historical society We dont necessarily Group reporter Scott De co-sponsors of: here and the museum and have to keep the originals, Laruelle at scott. Aging Mastery Program dig a little bit from their we can copy them, she delaruelle@wcinet.com.
https://www.scribd.com/document/358175276/OO0907
In this article you will learn how to install Python packages using the pip package installer. Beyond the standard library, there is a huge range of third-party libraries available for doing even more useful or obscure things with Python. To install these extra packages, pip is the preferred package manager; it has shipped with the standard installation of Python from python.org since Python 3.4 and is included by default.

To use pip, you will need to be able to use a terminal or command prompt. This can be a bit scary for newcomers, so I'm going to walk you through it. These instructions are for Windows 10. On Mac or Linux, you need to find a program called terminal instead of PowerShell, but the rest is basically the same. For example, let's install matplotlib and do some simple graphing:

- Go to the start menu
- Begin typing powe... until you see the PowerShell application listed
- Click on the item to open PowerShell
- Once PowerShell is open, type pip install matplotlib and press Enter
- That's it!

If all went well you will now be able to use matplotlib in your code, as in this example:

import matplotlib.pyplot as plt

plt.plot([1, 2, 3, 4])
plt.ylabel('some numbers')
plt.show()

Cool eh? Please note these instructions assume you installed Python from python.org, version 3.4 or above, and that you selected the default options during installation. If there are problems then more yak shaving will be required than we will cover here, but do feel free to ask for help in the comments. Also, you may have already unknowingly installed matplotlib, in which case you will get a helpful message to that effect from PowerShell when you try to reinstall it.

In a recent article about using APIs with Python there was a section where we added some sound, and this required a third-party package called playsound. If you didn't know how to install packages before, go back now and see if you can get the full version of the program (complete with meow) working.

You should now hopefully be able to install whatever packages you want to extend Python's functionality. Here's an article describing some of the more popular ones. Happy coding, and if you get stuck write a comment and I'll see if I can help.

PS You might also be interested to know that you can run Python in PowerShell simply by typing python and pressing Enter (assuming you installed Python using the standard installer from python.org, which adds python to your system path). You can also run files with python my_file_name.py if the file is in the current location; otherwise either navigate to the folder containing the file or type the full path.
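One quick way to tell whether a package is already installed, before running pip at all, is to ask Python itself. The following is a small illustrative sketch using only the standard library; the helper name is mine, not part of pip:

```python
import importlib.util

def is_installed(package_name):
    """Return True if the package can be imported in this environment."""
    return importlib.util.find_spec(package_name) is not None

# json ships with Python, so this is always True
print(is_installed("json"))

# a third-party package reports False until you install it with pip
print(is_installed("matplotlib"))
```

If the check returns False, `python -m pip install <package>` is the most reliable way to invoke pip, since it guarantees you are using the pip that belongs to that particular Python installation.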
https://compucademy.net/installing-python-packages-with-pip/
Re: fat binary for Mac OSX 10.5.X
2009-04-30 23:12:55 GMT

Thanks Yang, and Toby. Both methods work fine. I ended up using the single-pass method. I had to make the following changes:

Changed the contents of include/curl/curlbuild.h to:

#ifdef __LP64__
#include "curlbuild64.h"
#else
#include "curlbuild32.h"
#endif

where curlbuild64.h and curlbuild32.h were created by running configure separately for the x86_64 and i386 architectures.

Also, applied the following patch:

diff -Naur curl-7.19.4-old/lib/config.h curl-7.19.4/lib/config.h
--- curl-7.19.4-old/lib/config.h	2009-04-30 15:37:28.000000000 -0700
+++ curl-7.19.4/lib/config.h	2009-04-30 15:20:10.000000000 -0700
@@ -848,19 +848,35 @@
 #define SIZEOF_INT 4

 /* The size of `long', as computed by sizeof. */
+#ifdef __LP64__
 #define SIZEOF_LONG 8
+#else /* !__LP64__ */
+#define SIZEOF_LONG 4
+#endif /* __LP64__ */

 /* The size of `off_t', as computed by sizeof. */
 #define SIZEOF_OFF_T 8

 /* The size of `size_t', as computed by sizeof. */
+#ifdef __LP64__

(Continue reading)
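The patch above works because, under the LP64 data model used by 64-bit builds, `long` and pointers are 8 bytes, while 32-bit (ILP32) builds use 4 bytes. The same distinction the `__LP64__` guard keys off can be observed from a running interpreter; here is a sketch in Python rather than C:

```python
import struct
import sys

# Size of a C pointer in this interpreter, in bits:
# 32 on an i386 build, 64 on an x86_64 build.
pointer_bits = struct.calcsize("P") * 8

# sys.maxsize gives an equivalent signal: it exceeds 2**32
# only on a 64-bit build.
is_64bit = sys.maxsize > 2**32

print(pointer_bits, is_64bit)
```

This is only a way to check which model your build uses; the curl fix itself still has to happen in the C headers, as shown in the patch.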
http://blog.gmane.org/gmane.comp.web.curl.library/month=20090501
Hoss Man updated SOLR-1725:
---------------------------

Attachment: SOLR-1725.patch

bq. Are those two consecutive openReader calls redundant or needed?

that was a very bad bug, thank you for catching that.

bq. Andrzej - does ScriptFile now address your needs with it being ResourceLoader savvy?

I'm not sure it does - I think you were asking for scripts to be loaded from the factory configuration directly? Just checking if we're still missing a use case Andrzej had in mind.

bq. So at the very least, the .js scripts should all be fully fleshed out to what would work for real.

good catch .. i thought i already did that.

bq. But I really think we should default to no-op on all methods that don't exist when tried to invoke. Is that so bad?

I think it would be horrific. Consider a small typo in your example...

{code}
def processadd(cmd)
  doc = cmd.solrDoc
  doc.addField('foo_s', 'bar')
  $logger.info("Added field to #{doc}")
end
{code}
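The objection being made here, that a no-op default silently swallows typos like `processadd` for `processAdd`, can be illustrated outside Solr. This is a minimal Python sketch; the class names are made up for illustration, and `__getattr__` stands in for the proposed "default to no-op" behaviour:

```python
class StrictProcessor:
    """Unknown method names fail loudly, as normal."""
    def processAdd(self, cmd):
        return "field added"

class LenientProcessor:
    """Simulates 'default to no-op on all methods that don't exist'."""
    def processAdd(self, cmd):
        return "field added"

    def __getattr__(self, name):
        # any unknown attribute silently becomes a do-nothing method
        return lambda *args, **kwargs: None

strict, lenient = StrictProcessor(), LenientProcessor()

# The typo'd name fails loudly on the strict object...
try:
    strict.processadd("doc")
    typo_caught = False
except AttributeError:
    typo_caught = True

# ...but is silently ignored on the lenient one: no error, and no field added.
typo_result = lenient.processadd("doc")
```

With the lenient behaviour the script author gets no signal at all that their handler never ran, which is exactly the "horrific" debugging scenario described above.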
http://mail-archives.apache.org/mod_mbox/lucene-dev/201207.mbox/%3C1164076899.676.1341277798386.JavaMail.jiratomcat@issues-vm%3E
ffmpeg that way, hopefully without flicker? Here again is how Ted Burke does it in C under Linux.

So ruling out APIs, sticking with: load video, a simple GUI where you set sliders, press run to launch a subprocess, get a raw image, and put it on screen. But you need a filter for that, or a custom-made ffmpeg filter, like -filter_complex on the command line with your math. Is it doable? Ask direct questions like that on the ffmpeg forums or stackoverflow.com, reveal your math, and you might get lucky like that guy.

What about a different setup, for example using OpenCV to fetch and fix values, with the math done in your language, handling the pixel values there? If you insist that VapourSynth is not for you and too difficult: is there an OpenCV module and code to handle OpenCV in PureBasic? You do not have to use OpenCV as a GUI, just for RGB pixel handling. I found it here; it has 54 pages, so perhaps it is doable.

Last edited by _Al_; 7th Dec 2018 at 13:39.

I downloaded and installed VapourSynth and Python 3.7.1.
This code:

Code:
from vapoursynth import core
import vapoursynth as vs

file = r'C:\Archive\Python\C0058.mp4'
clip = core.lsmas.LWLibavSource(file)

# checking what color space the file is, if you do not know
if clip.format.color_family == vs.ColorFamily.YUV:
    space = 'YUV'  # there is also RGB, GRAY, YCOCG and COMPAT

value = 10
Y_expr = 'x {value} -'.format(value=value)
U_expr = ''
V_expr = ''
clip = core.std.Expr(clip, expr=[Y_expr, U_expr, V_expr])

# clip is YUV, so you need to get it on screen as RGB 8bit, assuming input is BT709:
if space == 'YUV':
    rgb = core.resize.Bicubic(clip, matrix_in_s='709', format=vs.RGB24, range='limited')

# to get a frame from your rgb, for example the first frame (0, python indexes from 0),
# but it could be any frame:
rgbFrame = rgb.get_frame(0)

# now you have an RGB24 image that most gui's can put on screen, you just need to know
# what array and how arranged it has to be, so it is gui specific further
# to get it out thru vspipe.exe (encoding, piping it somewhere) you specify output:
clip.set_output()

gives this error:

==================== RESTART: C:\Archive\Python\test2.py ====================
Traceback (most recent call last):
  File "C:\Archive\Python\test2.py", line 1, in <module>
    from vapoursynth import core
ImportError: cannot import name 'core' from 'vapoursynth' (unknown location)
>>>

More filters and gewgaws are all well and fine, but if vapoursynth et al. are such hot shit then why don't they have built-in legalizers already?

That workflow where you convert to RGB, do filtering in RGB and look at scopes, hardclip, then export YUV422 is potentially problematic. You don't have to use it if you don't want to. Fair warning: expect many more error messages and hair pulling with debugging trying to get it to work. It's seriously painful at first. It's the same when someone first uses anything, like ffmpeg for the first time. I had to revisit avisynth a few times when first learning it. It wasn't a pleasant experience.
Vapoursynth becomes slightly easier if you know avisynth. At least you have some programming background, so it shouldn't be difficult to get it working. Then there is the hassle of collecting plugins, dlls and scripts - not fun if you're new to it. But in the end, worth the hassle x10. Very powerful video manipulation stuff that you can't do with other programs.

If you still want to pursue this, did you follow the getting started instructions, and install the correct vapoursynth version and matching python version? Did you try typing this in the python command line, as per the instructions? And what messages did it print? Was it the same unknown location?

Code:
from vapoursynth import core
print(core.version())

More filters and gewgaws are all well and fine, but if vapoursynth et al. are such hot shit then why don't they have built-in legalizers already?

And obviously, there are many other types of AV manipulations besides "legalization". Good luck with the flicker and GUI, I hope it works out.

Last edited by poisondeathray; 7th Dec 2018 at 15:41.

You cannot install Python and Vapoursynth properly, and you don't even say what you did - portable version or installation, a very important step - or if Python works at all, or if it is just vapoursynth. Nothing, you said nothing, just a copy/paste of the error. Some complaining, give, give me. What do you want to hear? Are you serious? You were not even held by my script at all, because you do not have it right in the first place, yet you give lectures about the correct approach in discussions?

I am doing one pass to one file at 4:2:0 -> RGB, then a second pass at RGB -> 4:2:2.
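For what it's worth, the "legalizer" being asked about is, at its core, just a clamp of each luma sample to the broadcast-legal 16-235 range (for 8-bit video). Here is a plain-Python sketch of that per-pixel math, independent of VapourSynth or ffmpeg; the function name and sample values are mine:

```python
def legalize_plane(pixels, lo=16, hi=235):
    """Clamp 8-bit luma samples to the broadcast-legal range."""
    return [min(max(p, lo), hi) for p in pixels]

samples = [0, 5, 16, 128, 235, 240, 255]
print(legalize_plane(samples))  # -> [16, 16, 16, 128, 235, 235, 235]
```

In VapourSynth the same clamp can be expressed with an RPN expression along the lines of `core.std.Expr(clip, ['x 16 max 235 min', '', ''])`, which avoids looping over pixels in Python.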
https://forum.videohelp.com/threads/391111-C-C-Graphics/page2?s=a6e8b5a4ac17c181b11c6e68801582b3
Context

A few days ago, I had to extract data from an Oracle Database using SQL and then PIVOT a long set of values in a column into multiple columns. The syntax to do this requires you to write every single value you want as a column - not once, but twice, to avoid the default quotes. See reference.

There are a few options, like pivoting using PIVOT XML, or more recently building a JSON column using JSON_ARRAYAGG and JSON_OBJECT to get dynamic columns or any value as an attribute, but it is still not so straightforward.

First try

Let's try with a Python notebook (I use the VS Code notebook). First import pandas, plus numpy, which is used below:

import pandas as pd
import numpy as np

We will use a random free dataset:

# source file from
df = pd.read_csv('./vgsales.csv')
df

Using the pivot_table function, we will pivot the values of the Genre column so that every value in the dataset becomes a column.

pivot_df = df.pivot_table(index=['Year', 'Publisher'], columns=['Genre'], values=['Global_Sales'], aggfunc=np.sum, fill_value=0)
pivot_df

Now let's reset_index() to "try" flattening the indexes.

mi_pivot_df = pivot_df.reset_index()
mi_pivot_df

OK, not sure about that, but let's try to export to an Excel file, with index=False to avoid showing the index column on the left.

pivot_df.to_excel('./global_sales_by_publishers_genres.xlsx', index=False)

So, it's not implemented. It does work when you remove the index=False part; however, it will then show the index column on the left and the two column indexes, which I don't want to be there.

Second try

I searched for a solution to flatten it as much as possible, but no answer was complete enough and free of errors. I collected them all and, by trial and error, got to a working solution. It also works after using the pivot function:

flat_index_pivot_df = pivot_df.copy()
flat_index_pivot_df.columns = flat_index_pivot_df.columns.droplevel(0)
flat_index_pivot_df.reset_index(inplace=True)
flat_index_pivot_df.columns.name = None
flat_index_pivot_df

Now you can get a clean Excel sheet, free of MultiIndex.
flat_index_pivot_df.to_excel('./global_sales_by_publishers_genres.xlsx', index=False)

Do you know another solution?

Discussion (1)

Nice article. I wrote a small utility for pandas called sidetable which flattens multi-indexes and gives some additional options. It might be useful to you and your readers. You can see it here - github.com/chris1610/sidetable#fla...
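The droplevel step is the heart of the fix: after pivot_table, pandas stores the column labels as two-level tuples like ('Global_Sales', 'Action'), and dropping level 0 keeps only the genre names. Here is a pandas-free sketch of that operation; the tuples are sample values in the spirit of the article's dataset, not taken from it:

```python
# Two-level column labels, shaped like what pivot_table produces
multi_columns = [
    ("Global_Sales", "Action"),
    ("Global_Sales", "Puzzle"),
    ("Global_Sales", "Sports"),
]

# Equivalent of df.columns.droplevel(0): keep only the second level
flat_columns = [second for _first, second in multi_columns]
print(flat_columns)  # -> ['Action', 'Puzzle', 'Sports']
```

This also shows why the approach only makes sense when the first level carries a single repeated value (here 'Global_Sales'); with several value columns, dropping level 0 could produce duplicate column names.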
https://dev.to/rolangom/flatten-pandas-dataframe-multiindex-40al
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Frans Pop wrote:
> On Monday 15 January 2007 15:34, you wrote:
>>> Yes, I know, I am aware of this issue and was aware of it from the
>>> start. The main problem is that the interfaces file does not allow
>>> comments at the end of lines with useful information, so doing
>>> something like I did for the /etc/resolv.conf file is not possible.
>
> IMO the interfaces file should be created in the d-i environment just as
> is done in netcfg. For me it makes sense to keep these two as much in
> parallel as possible.

Ok.

>>> I was thinking of setting netcfg's templates to choose "Do not
>>> configure the network at this time" and make it skip directly to the
>>> hostname question. I did some tests this morning, but they are
>>> inconclusive yet. Still I am confident I can do this.
>> I have tested a few combinations of netcfg templates setting, but no
>> luck yet. Still I don't understand why this didn't help:
I guess I could call people to test the new ppp-udeb version and setup in qemu a server and a client (although I have failed in the past). > + [ > + > + -- Frans Pop <fjp@debian.org> Wed, 17 Jan 2007 16:57:46 +0100 > + > ppp (2.4.4rel-4.1) unstable; urgency=low > > * Non-maintainer upload with maintainer's consent. > diff -urN ppp-2.4.4rel.orig/debian/control ppp-2.4.4rel/debian/control > diff -urN ppp-2.4.4rel.orig/debian/ppp-udeb.postinst ppp-2.4.4rel/debian/ppp-udeb.postinst > --- ppp-2.4.4rel.orig/debian/ppp-udeb.postinst 2007-01-17 16:41:27.000000000 +0100 > +++ ppp-2.4.4rel/debian/ppp-udeb.postinst 2007-01-17 16:41:01.000000000 +0100 > @@ -74,26 +74,47 @@ > } > > reset_if_needed() { > -# bring down the pppoe connection made before, if that's the case > -PIDF=/var/run/ppp-provider.pid > -if [ -e $PIDF ] > -then > - PID=$(cat $PIDF) > - log "found pid file $PIDF which reffers to process $PID; searching the pppd process" > - if [ -z "$(ps | grep "^\s*$PID" | sed "s/^\s*$PID\s.*$/$PID/")" ] > - then > - log "$PID not found; removing pid file" > - else > - log "$PID found; killing it and removing pid file" > - kill $PID || true > + # Bring down an earlier pppoe connection, if there is one > + PIDF=/var/run/ppp-provider.pid > + if [ -e $PIDF ]; then > + PID=$(cat $PIDF) > + log "found PID file $PIDF which refers to process $PID; searching for the pppd process" > + if [ -z "$(ps | grep "^\s*$PID" | sed "s/^\s*$PID\s.*$/$PID/")" ]; then > + log "$PID not found; removing pid file" > + else > + log "$PID found; killing it and removing pid file" > + kill $PID || true > + fi > + rm -f $PIDF > fi > - rm -f $PIDF > -fi > > -# bring down previously rised interface > -[ "$PPPOE" = "_" ] || ifconfig "$PPPOE" down && db_set ppp/interface "_" || true > + # Bring down previously raised interface > + [ "$PPPOE" = "_" ] || ifconfig "$PPPOE" down && db_set ppp/interface "_" || true > } > > +valid_hostname() { > + if [ $(echo -n "$1" | wc -c) -lt 2 ] || > + [ $(echo -n "$1" | wc -c) 
-gt 63 ] || > + [ "$(echo -n "$1" | sed 's/[^-\.[:alnum:]]//g')" != "$1" ] || > + [ "$(echo -n "$1" | grep "\(^-\|-$\)")" ]; then > + return 1 > + fi > + return 0 > +} > + > +# Sanity check: we rely on some netcfg functionality but cannot depend on it; > +# netcfg should always be present, but bail out if it is not > +if [ ! -e /bin/netcfg ]; then > + fail "required package netcfg is not installed" > + exit 1 > +fi > + > +# Bring up the loopback interface > +if [ -z "$(ip link show lo up)" ]; then > + ip link set lo up > + ip addr flush dev lo > + ip addr add 127.0.0.1/8 dev lo > +fi > > if [ -z "$INTERFACES" ]; then > fail "no Ethernet interfaces detected" > @@ -104,9 +125,7 @@ > > reset_if_needed > > -# test each of the interfaces for a concentrator, > -# then stop when one is found. > - > +# Test each of the interfaces for a concentrator; stop when one is found > for IFACE in $INTERFACES; do > if ppp_concentrator_on $IFACE; then > log "setting pppoe connection on $IFACE" > @@ -131,23 +150,71 @@ > fi > >.) > db_get ppp/username > > db_input high ppp/password || true > -db_go || true > +db_go || exit 30 > db_get ppp/password > > -# just to be sure that the answers will not be cached if the script > -# is run a second time > +# Clear answers in case the script is run a second time > db_unregister ppp/password > db_unregister ppp/username > > > +# Ask for the hostname to use for the system (using the netcfg template!) > +while true; do > + db_input high netcfg/get_hostname > + db_go || exit 30 > + db_get netcfg/get_hostname > + + if valid_hostname "$HOSTNAME"; then > + break > + fi > + db_input high netcfg/invalid_hostname > + db_fset netcfg/get_hostname seen false > +done This looks ok. > +# FIXME: lo snippet should not be ppp-udeb's job > +cat > /etc/network/interfaces <<EOF > +# This file describes the network interfaces available on your system > +# and how to activate them. For more information, see interfaces(5). 
> + > +# The loopback network interface > +auto lo > +iface lo inet loopback > + > +# PPPoE connection > +auto provider > +iface provider inet ppp > + pre-up /sbin/ifconfig $ETH up (Just checking) In order for this to work, the interface name should be preserved. This happens, AFAIK, even for interfaces which are not configured, right? > + provider provider > +EOF > + > +# Set hostname > +if [ "$HOSTNAME" ]; then > + echo "$HOSTNAME" >/etc/hostname > +fi > + > +# Create a basic /etc/hosts file > +cat > /etc/hosts <<EOF > +127.0.0.1 localhost <<EOF > /etc/ppp/peers/provider > -# kernel space PPPoE driver example configuration > +# kernel space PPPoE driver configuration > # > # See the manual page pppd(8) for information on all the options. > > @@ -207,9 +274,19 @@ > log-output -t depmod > log-output -t ppp-udeb modprobe pppoe > > -log-output -t ppp-udeb pppd call provider || true > - > -log-output apt-install ppp || true > +RET=0 > +log-output -t ppp-udeb pppd call provider || RET=$? > +if [ $RET -eq 19 ]; then > + fail "wrong login info detected" > + db_input critical ppp/wrong_login || true > + db_go || true > + exit 1 > +elif [ $RET -ne 0 ]; then > + fail "unhandled error detected" > + db_input critical ppp/unhandled || true > + db_go || true > + exit 1 > +fi > > #DEBHELPER# > > diff -urN ppp-2.4.4rel.orig/debian/ppp-udeb.todo ppp-2.4.4rel/debian/ppp-udeb.todo > --- ppp-2.4.4rel.orig/debian/ppp-udeb.todo 2007-01-17 16:41:27.000000000 +0100 > +++ ppp-2.4.4rel/debian/ppp-udeb.todo 2007-01-17 16:41:01.000000000 +0100 > @@ -1,4 +1,3 @@ > -- wrong authentication must be treated, not ignored > - how to blend well with netcfg? 
> - the new patch proposes assigning menu order 17 to ppp-udeb, so > is ran before netcfg > diff -urN ppp-2.4.4rel.orig/debian/rules ppp-2.4.4rel/debian/rules > --- ppp-2.4.4rel.orig/debian/rules 2007-01-17 16:41:27.000000000 +0100 > +++ ppp-2.4.4rel/debian/rules 2007-01-17 16:41:01.000000000 +0100 > @@ -138,7 +138,7 @@ > > ifdef BUILD_UDEB > dh_installdirs -p ppp-udeb etc/ppp/peers/ usr/sbin/ \ > - usr/lib/pppd/$(PPPDDIR)/ usr/lib/finish-install.d/ > + usr/lib/pppd/$(PPPDDIR)/ usr/lib/post-base-installer.d/ > grep '^[a-zA-Z0-9]' extra/options > $D-udeb/etc/ppp/options > # cp $B/chat/chat $D-udeb/usr/sbin/ > cp $B/pppd-udeb/pppd $D-udeb/usr/sbin/ > @@ -150,8 +150,8 @@ > cp extra/ppp-udeb.ip-up \ > $D-udeb/etc/ppp/ip-up > chmod 0755 $D-udeb/etc/ppp/ip-up > - install -m755 extra/50config-target-ppp \ > - $D-udeb/usr/lib/finish-install.d/ > + install -m 755 extra/config-target-ppp \ > + $D-udeb/usr/lib/post-base-installer.d/30ppp;. > dh_installdebconf > endif > > diff -urN ppp-2.4.4rel.orig/extra/50config-target-ppp ppp-2.4.4rel/extra/50config-target-ppp > --- ppp-2.4.4rel.orig/extra/50config-target-ppp 2007-01-17 16:41:27.000000000 +0100 > +++ ppp-2.4.4rel/extra/50config-target-ppp 1970-01-01 01:00:00.000000000 +0100 > @@ -1,6 +0,0 @@ > -#!/bin/sh -e > -# Copy the ppp configuration to the target system. > - > -mkdir -p /target/etc/ppp > -cp /etc/ppp/*-secrets /target/etc/ppp/ > -cp -a /etc/ppp/peers /target/etc/ppp/ > diff -urN ppp-2.4.4rel.orig/extra/config-target-ppp ppp-2.4.4rel/extra/config-target-ppp > --- ppp-2.4.4rel.orig/extra/config-target-ppp 1970-01-01 01:00:00.000000000 +0100 > +++ ppp-2.4.4rel/extra/config-target-ppp 2007-01-17 16:57:33.000000000 +0100 > @@ -0,0 +1,16 @@ > +#!/bin/sh -e > +# Configure ppp for the target system > +# Note: netcfg takes care of general networking configuration files > + > +# We can only do this after ppp has been installed to ensure correct permissions > +apt-install ppp || true > + > +if [ ! 
-d /target/etc/ppp/peers ]; then > + logger -t ppp-udeb "Error: directory /target/etc/ppp/peers does not exist" > + logger -t ppp-udeb "There may have been an error installing ppp" > + exit 1 > +fi > + > +# We copy over already existing files, so permissions are preserved > +cp /etc/ppp/*-secrets /target/etc/ppp/ > +cp /etc/ppp/peers/provider /target/etc/ppp/peers/ Not sure if copying the whole /etc/ppp/peers/ directory isn't a better idea (in the style of what is done with the configuration files from d-i). Having that in mind, aren't the configuration files already copied in the target *with* the correct permissions when this script is ran? - -- Regards, EddyP ============================================= "Imagination is more important than knowledge" A.Einstein -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) iD8DBQFFrz7aY8Chqv3NRNoRAhWFAJ9LLPlA9gEauG1VtU8z4x8P/L/gVwCfQA9V Yzqp62xzYw4OszohIHyrCks= =ItDP -----END PGP SIGNATURE-----
https://lists.debian.org/debian-boot/2007/01/msg00775.html
Linq component

The Linq component of the driver is an implementation of the LINQ IQueryProvider and IQueryable<T> interfaces that allows you to write CQL queries in Linq and read the results using your object model. When you execute a Linq statement, the component translates language-integrated queries into CQL and sends them to the cluster for execution. When the cluster returns the results, the LINQ component translates them back into objects that you can work with in C#. Note that Linq query execution involves expression evaluation, which adds overhead each time a Linq query is executed.

1.- Add a using statement to your class:

using Dse.Data.Linq;

2.- Retrieve an ISession instance in the usual way and reuse that session within all the classes in your client application.

3.- Get an IQueryable instance using the Table<T> constructor:

var users = new Table<User>(session);

New Table<T> (IQueryable) instances can be created each time they are needed, as short-lived instances, as long as you are reusing the same ISession instance and mapping configuration.

Example

public class User
{
    public Guid UserId { get; set; }
    public string Name { get; set; }
    public string Group { get; set; }
}

// Get a list of users from DSE using a Linq query
IEnumerable<User> adminUsers =
    (from user in users
     where user.Group == "admin"
     select user).Execute();

You can also write your queries using lambda syntax:

IEnumerable<User> adminUsers = users
    .Where(u => u.Group == "admin")
    .Execute();

The Linq component creates new instances of your classes using their parameterless constructor.

Configuring mappings

In many scenarios, you need more control over how your class maps to a CQL table.
You have two ways of configuring mappings for Linq:

- decorate your classes with attributes
- define mappings in code using the fluent interface

An example using the fluent interface:

MappingConfiguration.Global.Define(
    new Map<User>()
        .TableName("users")
        .PartitionKey(u => u.UserId)
        .Column(u => u.UserId, cm => cm.WithName("id")));

You can also create a class to group all your mapping definitions.

public class MyMappings : Mappings
{
    public MyMappings()
    {
        // Define mappings in the constructor of your class
        // that inherits from Mappings
        For<User>()
            .TableName("users")
            .PartitionKey(u => u.UserId)
            .Column(u => u.UserId, cm => cm.WithName("id"));
        For<Comment>()
            .TableName("comments");
    }
}

Then, you can assign the mappings class in your configuration.

MappingConfiguration.Global.Define<MyMappings>();

You should map one C# class per table. The Linq component of the driver uses the configuration defined when the Table<T> instance is created to determine which keyspace and table it maps to, falling back to MappingConfiguration.Global when not specified.

Linq API examples

The simple query example is great, but the Linq component has a lot of other methods for doing Inserts, Updates, Deletes, and even Create table. When you include Linq operations such as Where(), OrderBy(), OrderByDescending(), First(), Count(), and Take(), the component translates them into the most efficient CQL query possible, retrieving as little data as possible. For example, the following query only retrieves the username from the cluster to fill in a lazy list of strings on the client side.
IEnumerable<string> userNames =
    (from user in users
     where user.Group == "admin"
     select user.Name).Execute();

Some other examples:

// First row or null using a query
User adminUser =
    (from user in users
     where user.Group == "admin"
     select user).FirstOrDefault().Execute();

// First row or null using lambda syntax
User user = users.Where(u => u.UserId == "john")
    .FirstOrDefault()
    .Execute();

// Use Take() to limit your result sets server side
var userAdmins =
    (from user in users
     where user.Group == "admin"
     select user.Name).Take(100).Execute();

// Use Select() to project to a new form server side
var userCoordinates =
    (from user in users
     where user.Group == "admin"
     select Tuple.Create(user.X, user.Y)).Execute();

// Delete
users.Where(u => u.UserId == "john")
    .Delete()
    .Execute();

// Delete If (Cassandra 2.1+)
users.Where(u => u.UserId == "john")
    .DeleteIf(u => u.LastAccess == value)
    .Execute();

// Update
users.Where(u => u.UserId == "john")
    .Select(u => new User { LastAccess = TimeUuid.NewId() })
    .Update()
    .Execute();
https://docs.datastax.com/en/developer/csharp-driver-dse/2.8/features/components/linq/
Power of Java MemoryMapped File

In JDK 1.4 an interesting feature of memory mapped files was added to Java, which allows mapping any file into OS memory for efficient reading. A memory mapped file can be used to develop an IPC type of solution. This article is an experiment with memory mapped files to create IPC.

Some details about Memory Mapped File, definition from WIKI.

Sample Program

Below we have two Java programs, one is a writer and the other is a reader. The writer is the producer and tries to write to the memory mapped file; the reader is the consumer and it reads messages from the memory mapped file. This is just a sample program to show you the idea; it doesn't handle many edge cases, but it is good enough to build something on top of a memory mapped file.

MemoryMapWriter

import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MemoryMapWriter {

    public static void main(String[] args) throws FileNotFoundException, IOException, InterruptedException {
        File f = new File("c:/tmp/mapped.txt");
        f.delete();
        FileChannel fc = new RandomAccessFile(f, "rw").getChannel();
        long bufferSize = 8 * 1000;
        MappedByteBuffer mem = fc.map(FileChannel.MapMode.READ_WRITE, 0, bufferSize);
        int start = 0;
        long counter = 1;
        long HUNDREDK = 100000;
        long startT = System.currentTimeMillis();
        long noOfMessage = HUNDREDK * 10 * 10;
        for (;;) {
            if (!mem.hasRemaining()) {
                start += mem.position();
                mem = fc.map(FileChannel.MapMode.READ_WRITE, start, bufferSize);
            }
            mem.putLong(counter);
            counter++;
            if (counter > noOfMessage)
                break;
        }
        long endT = System.currentTimeMillis();
        long tot = endT - startT;
        System.out.println(String.format("No Of Message %s , Time(ms) %s ", noOfMessage, tot));
    }
}

MemoryMapReader

import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MemoryMapReader {

    /**
     * @param args
     * @throws IOException
     * @throws FileNotFoundException
     * @throws InterruptedException
     */
    public static void main(String[] args) throws FileNotFoundException, IOException, InterruptedException {
        FileChannel fc = new RandomAccessFile(new File("c:/tmp/mapped.txt"), "rw").getChannel();
        long bufferSize = 8 * 1000;
        MappedByteBuffer mem = fc.map(FileChannel.MapMode.READ_ONLY, 0, bufferSize);
        long oldSize = fc.size();
        long currentPos = 0;
        long xx = currentPos;
        long startTime = System.currentTimeMillis();
        long lastValue = -1;
        for (;;) {
            while (mem.hasRemaining()) {
                lastValue = mem.getLong();
                currentPos += 8;
            }
            if (currentPos < oldSize) {
                xx = xx + mem.position();
                mem = fc.map(FileChannel.MapMode.READ_ONLY, xx, bufferSize);
                continue;
            } else {
                long end = System.currentTimeMillis();
                long tot = end - startTime;
                System.out.println(String.format("Last Value Read %s , Time(ms) %s ", lastValue, tot));
                System.out.println("Waiting for message");
                while (true) {
                    long newSize = fc.size();
                    if (newSize > oldSize) {
                        oldSize = newSize;
                        xx = xx + mem.position();
                        mem = fc.map(FileChannel.MapMode.READ_ONLY, xx, oldSize - xx);
                        System.out.println("Got some data");
                        break;
                    }
                }
            }
        }
    }
}

Observation

Using a memory mapped file can be a very good option for developing inter-process communication; throughput is also reasonably good for both producer and consumer.

Performance stats from running producer and consumer together (each message is one long number):

Producer – 10 million messages – 16(s)
Consumer – 10 million messages – 0.6(s)

A very simple message is used to show you the idea, but it can be any type of complex message; when there is a complex data structure, serialization can add overhead. There are many techniques to get over that overhead. More in the next blog.

Ashkrit, Thanks again for a great article. The example is straightforward and very clear. A couple of questions: 1. Are you assuming that MappedByteBuffer is always direct?
I read elsewhere that it can be non-direct also. However, looking at the source code of MappedByteBuffer, it clearly says that it's direct-buffer backed. A bit confused on that. Could you please shed some light on it? 2. You have spoken about IPC in this article. By giving reading/writing examples in separate files, are you alluding to IPC? What use cases do you see in practical implementation? How will the other process know not to write the same sections of a file as the other process? Is IPC somehow process-safe? 3. How could you inject the usage of Unsafe for better efficiency here? Would it further enhance performance? Thanks again for this article.

Hi Ashley, Thanks for the interest in my blog.

1 – Yes, in Java MappedByteBuffer is always direct; there is only one implementation (DirectByteBuffer) available in Java for MappedByteBuffer. I did cross-check in JDK 8, but nothing has changed on this. So it is always direct.

2 – Yes, in the article it is a simple example of IPC. There are many use cases for IPC, for instance if you have some producer/consumer type of problem or some kind of request dispatch to a service. Reliable persistence is one of the very common use cases for a memory mapped file, and then you can have some consumer that handles those messages. ZeroMQ peer-to-peer messaging on the same box is built using memory mapped files. Having multiple writers for a memory mapped file can be very tricky because you have to manage everything. Reads and writes from memory mapped files follow the same rules as the Java memory model, so it is not difficult to build a system that does not have to worry about when changes will be visible to other consumers.

I have not explored the option of injecting Unsafe into a memory mapped file, but it will be interesting if it is possible to do so.

Hi Ashkrit, Thanks a lot for a nice intro article on memory-mapped files in Java. I have the following question for you and a bit of an observation. If you start with an empty file, and run the reader code only, it will execute the

FileChannel fc = new RandomAccessFile(new File("c:/tmp/mapped.txt"), "rw").getChannel();
long bufferSize = 8 * 1000;
MappedByteBuffer mem = fc.map(FileChannel.MapMode.READ_ONLY, 0, bufferSize);

which will allocate 1024 bytes, therefore making the buffer filled with 0 bytes. We will read the first 1024 empty bytes until hitting the point when we wait for a new message, which is incorrect in my opinion. Also, when we await a new message, it might be the case that the writer maps another 1024 bytes, but does not write anything there yet. The reader would check the new size, which would be 2048 (as the size is what is currently allocated, NOT what is written by the writer) and it will continue in the same manner, reading empty zeros. Is the above the intended behaviour or can we somehow synchronize the reader until some data from the writer is actually written?

Hi Maciej, Your observation is correct regarding the reader; it is a sample program that I used in the blog to demonstrate how to use a memory mapped file for IPC. There are better ways to do the same thing; ideally an index file is used to check whether data is available or not. Writes to a memory mapped file use the Java memory model, so it is easy to get write guarantees. I think I should write a blog on this soon. I am not sure if you have looked into Chronicle. Chronicle is based on memory mapped files and it has some interesting implementations of IPC.
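The writer/reader pair from the article can also be condensed into a single self-checking round trip. The sketch below is not the article's code: the class name MappedRoundTrip and the use of a temp file are my own, and it maps the whole region once instead of remapping in chunks.

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class MappedRoundTrip {

    // Write `count` longs (1..count) into a memory-mapped file, then map the
    // same file again read-only and return the last value read back.
    static long roundTrip(Path file, long count) throws Exception {
        long bytes = count * 8;
        try (FileChannel fc = new RandomAccessFile(file.toFile(), "rw").getChannel()) {
            MappedByteBuffer out = fc.map(FileChannel.MapMode.READ_WRITE, 0, bytes);
            for (long i = 1; i <= count; i++) {
                out.putLong(i);
            }
        }
        try (FileChannel fc = new RandomAccessFile(file.toFile(), "r").getChannel()) {
            MappedByteBuffer in = fc.map(FileChannel.MapMode.READ_ONLY, 0, bytes);
            long last = -1;
            while (in.hasRemaining()) {
                last = in.getLong();
            }
            return last;
        }
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("mapped", ".bin");
        System.out.println(roundTrip(f, 1000)); // prints 1000
        Files.deleteIfExists(f);
    }
}
```

Running main writes 1,000 longs through one mapping, remaps the file read-only, and prints the last value read back, demonstrating that writes made through a MappedByteBuffer are visible to a later mapping of the same file.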
http://www.javacodegeeks.com/2013/05/power-of-java-memorymapped-file.html
Of Eggs and Omelets

My life has been one long series of experiments. I tried to be a know-it-all jerk in high school and found that this tended to cut into my social life (drastically), so I stopped (or am still stopping, depending on how well you know me : ). I tried to be a "ground floor" employee for a small company, but that stopped being fun when the repetition and bureaucracy hit and it became clear that any profits that might someday happen would be kept at the owner level. I tried being an employee for a large company, but when my projects started getting axed based on political winds that, as a low-level grunt, I had no control over, I went looking elsewhere. Then I found DevelopMentor. The combination of working on lots of different things with lots of smart people was something that I loved more than anything I've since found. But even that didn't stop my experimentation. After teaching and consulting for a while, I decided that instead of just talking about building software, I would take one of my ideas and build a software development team. There I worked to build the best software development team that I've ever had the pleasure to work with and learned a *ton* along the way. However, the fact that the product itself was a commercial failure didn't make for lasting employment. So, late last year, I started another whole series of experiments, this time aimed at making myself solvent as an independent in a down economy. I've been doing all kinds of crazy things to see if I could use them to help build my brand, my business and my customer base. Some of them have been critically acclaimed and a few have even been commercially viable, at least to the point that I can continue to pay my mortgage (many of my friends weren't so lucky). I have been successful in that I've gotten to do a lot of cool things and work with a lot of smart people.
However, what I’ve found is that to remain independent, I have to spend a lot more time doing promotion and marketing, which goes against every fiber in my being. The problem is, although I’ve experimented a bunch, especially as related to this year’s DevCons, I don’t have any natural marketing aptitude and I don’t know where to turn for help. So, I fall back on my old standby — experimentation. Over several years, I’ve experimented with several sales/marketing/PR folks and organizations and so far, haven’t found what I’m looking for as far as “mentors” go (to be fair, my standard for comparison includes Don Box, Tim Ewald and John Robbins, so I’m not surprised I haven’t found someone to meet that bar). Even if I did meet that person, or may have already met him/her, I don’t know that I’d recognize it. I just have no standard for comparison. With software, it either works or it doesn’t. With marketing, did it succeed or fail because of the quality of the product or the quality of the marketing? The whole thing is too damn squishy and it drives me nuts! My most recent set of experiments is to let a friend of mine, a long-time marketing guy that I’ve known for years, run rough-shod over my newsletter subscribers, asking them all kinds of questions and for their help in various ways, offering my money and my time as incentives. Will it work? I have no idea. Is it risky? You bet. I’ve lost a dozen or more subscribers and who knows what some people think of me now that I’ve let marketing ideas into what I do (and I’d like to apologize again to the folks that I drove away or offended in this most recent campaign). Is it worth the risk? Yes it is, but not for the promised return (I still think my marketing friend was on happy pills the day he quoted his marketing targets). What makes it worth the risk is that I’ve identified a weakness in myself and I’m willing to perform the experiment and make the mistakes to see if I can come out better on the other side. 
Even if I fail, I’ve learned something. Eventually, I’ll get that omelet made, no matter how many eggs need breaking. And to those of you who suffer with me through my experiments, thank you very much. You have no idea how much you mean to me.
https://sellsbrothers.com/12568
How to Upload Files to Multiple Locations Simultaneously with Joystick

June 3rd, 2022

What You Will Learn in This Tutorial

How to upload files to multiple destinations using Joystick's uploader feature.

Before we get started, we need to install one dependency, uuid:

Terminal

cd app && npm i uuid

We'll use this to generate an arbitrary UUID that we can pass along with our upload to demonstrate passing data with your upload. After that's installed, you can start up your server:

Terminal

joystick start

After this, your app should be running and we're ready to get started.

Setting up an Amazon S3 bucket

For this tutorial, one of the two locations we upload our files to will be Amazon S3 (the other will be to a folder locally within the app). For S3, we'll need to make sure we have a few things:

- An Amazon Web Services account.
- An Amazon IAM user to provide credentials for accessing the bucket.
- An Amazon S3 bucket.

If you already have access to these, you can skip to the "Wiring up an uploader on the server" section below.

If you don't have these, first, head over to Amazon Web Services and create a new account. Once you're signed up, make sure you've completed any steps to add your billing information and then head over to the IAM Security Credentials page.

From the left-hand menu, click on the "Users" option under the "Access management" subheading. In the top-right corner of this page, click the blue "Add users" button. On the next page, in the "User name" box, type in a username for your IAM (Identity Access Management) user and under "Select AWS access type" tick the box next to "Access key - Programmatic access." After these are set, click "Next: Permissions" at the bottom-right corner of the page. On the next screen, click the third box labeled "Attach existing policies directly" and then in the search box next to "Filter policies" in the middle of the page, type in "s3full" to filter the list to the AmazonS3FullAccess option.
Tick the box next to this item and then click the "Next: Tags" button at the bottom-right of the page. The "tags" page can be skipped as well as the one after it (unless you're familiar with these and would like to complete them). After these, your IAM user's credentials will be revealed. Note: IAM credentials are like GOLD for thieves. Do not under any circumstances put these into a public Github repository or give them to someone you do not know/trust. It is very easy to leak these keys and find a surprise bill from Amazon at the end of the month with charges you didn't accrue (I speak from experience). It's best to store these credentials in a secure location like 1Password, LastPass, or another password management tool you trust. Once you have your credentials set up, head back to the "Users" list we started from above and click on the user you just created to reveal the "Summary" page. From here, you will want to copy the long "User ARN" string just beneath the page heading. We'll use this next to set up your bucket. Once you have this copied, in the search box at the very top of the page (to the right of the "AWS" logo) type in s3 and select the first option that appears underneath "Services" in the search results. On the next page, click the orange "Create bucket" button in the top-right corner of the page. From this page, we need to fill out the following fields: - For "Bucket name," enter a unique name (bucket names must be unique to the region that you select for the second option) that describes what your bucket will hold. - For "AWS Region" select the region that's either closest to the majority of your users, or, closest to yourself. - Under "Object Ownership," select the "ACLs enabled" box. Even though this isn't recommended, we'll need this in order to customize permissions on a per-uploader basis in your app. - For "Block Public Access..." this option is up to you. 
If your bucket will NOT store sensitive files or files that you'd like to keep private, you can untick this box (and check the "I acknowledge" warning that appears when you do). For the bucket in use for the rest of the tutorial, we've unticked this box to allow for public objects. After those are set, you can skip the other settings and click on "Create bucket" at the bottom of the page. Once your bucket is created, locate it in the list of buckets and click on it to reveal it in the dashboard. From here, locate the "Permissions" tab at the top of the page and on this tab, locate and click the "Edit" button in the "Bucket policy" block. In the box that pops up, you will want to paste in the following statement, replacing the <bucket-name> placeholder with the name of the bucket you just created and <user arn you copied> with the "User ARN" we copied above. Example Amazon S3 Bucket Policy { "Id": "Policy1654277614273", "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1654277612532", "Action": "s3:*", "Effect": "Allow", "Resource": "arn:aws:s3:::<bucket-name>/*", "Principal": { "AWS": [ "<user arn you copied>" ] } } ] } After this is customized for your bucket and user, scroll down and click the orange "Save changes" button. Once this is set, what we just accomplished was allowing the IAM user credentials we just created to have full access to the bucket we just created. This will come into play when we configure our uploader next and set the "ACL" ("access control list" in AWS-speak) we hinted at above. Wiring up an uploader on the server In order to support uploading files in a Joystick app, we need to define an uploader on the server in our /index.server.js file. 
Let's take a look at the basic setup and walk through it: /index.server.js import node from "@joystick.js/node"; import api from "./api"; node.app({ api, uploaders: { photos: { providers: ['local', 's3'], local: { path: 'uploads', }, s3: { region: 'us-east-1', accessKeyId: joystick?.settings?.private?.aws?.accessKeyId, secretAccessKey: joystick?.settings?.private?.aws?.secretAccessKey, bucket: 'cheatcode-tutorials', acl: 'public-read', }, mimeTypes: ['image/jpeg', 'image/png', 'image/svg+xml', 'image/webp'], maxSizeInMegabytes: 5, fileName: ({ input, fileName, mimeType }) => { // NOTE: Return the full path and file name that you want the file to be stored in // relative to the provider. return `photos/${input?.photoId}_${fileName}`; }, }, }, routes: { ... }, }); This is everything we need to support multiple-location uploads. First, up top, we're calling to the node.app() function imported from the @joystick.js/node package that starts up our server for us (using Express.js behind the scenes). To that function, we can pass options on an object to customize the behavior of our app. Here, the uploaders option takes an object where each property defines one of the uploaders that we want to support in our app (here, we're defining an uploader called photos). To that property, we pass the object or "definition" for our uploader. At the top, we pass a providers array of strings to specify where we want our upload to go (Joystick automatically routes the upload of a file to these providers). Here, we can specify one or more providers that will receive an upload. In this case, we want to upload to two locations: our local machine and Amazon S3. Based on the providers that we pass, next, we need to define configuration for those specific providers. For local, we pass an object with a single object path which specifies the local path (relative to the root of our application) where our files will be stored. For s3, things are a bit more involved. 
Here, we need to specify a few different properties: regionwhich is the AWS region shortcode for the region where our bucket is located. accessKeyIdwhich is the "Access Key ID" you generated alongside your IAM user earlier. secretAccessKeywhich is the "Secret Access Key" you generated alongside your IAM user earlier. bucketwhich is the name of the bucket where you want your files to be stored. aclwhich is the "access control list" or catch-all permission you want to apply to all files uploaded via this uploader. For our example, we're using public-readwhich means files are read-only for public users. Note: for the accessKeyId and secretAccessKey values here, notice that we're pulling these values from joystick?.settings?.private?.aws. In a Joystick app, you can specify settings for each environment in your app in the settings.<env>.json file at the root of your app (where <env> is some environment supported by your app). Here, because we're in the development environment, we expect these values to be defined in our settings.development.json file. Here's an updated version of this file (you will need to fill in your accessKeyId and secretAccessKey that you obtained from AWS earlier): /settings.development.json { "config": { "databases": [ { "provider": "mongodb", "users": true, "options": {} } ], "i18n": { "defaultLanguage": "en-US" }, "middleware": {}, "email": { "from": "", "smtp": { "host": "", "port": 587, "username": "", "password": "" } } }, "global": {}, "public": {}, "private": { "aws": { "accessKeyId": "", "secretAccessKey": "" } } } A settings file in Joystick supports four root properties: config, global, public, and private. Here, we utilize the private object which is only accessible on the server to store our AWS credentials (we DO NOT want to put these in global or public as they will be exposed to the browser if we do). Back in our uploader definition, after s3, we have some generic settings specific to the uploader. 
These include:

- mimeTypes, which is an array of strings specifying the MIME types supported by this uploader (e.g., we only pass image MIME types here to avoid things like videos, documents, or audio files being uploaded).
- maxSizeInMegabytes, the maximum file size (in megabytes) allowed for this uploader. Files over this limit will be rejected by the uploader.
- fileName, a function which gives us the opportunity to customize the path/file name for the file we're uploading. This function receives an object containing the fileName, fileSize, fileExtension, and mimeType for the uploaded file as well as the input we pass from the client (more on this later). Here, we return a path which nests uploads in a folder photos and prefixes the fileName of the uploaded file with the photoId passed via the input object.

That's it! With this, now we have an uploader ready to go on the server. Let's jump over to the client and see how we actually upload files.

Calling to an uploader on the client

Fortunately, calling an uploader from the client is quite simple: we just need to call a single function, upload, from the @joystick.js/ui package (the same one we use to define our components). To make our work a bit easier here, we're going to reuse the existing /ui/pages/index/index.js file that was already created for us when we ran joystick create app earlier. Let's replace the existing contents of that with what's below and step through it:

/ui/pages/index/index.js

import ui, { upload } from "@joystick.js/ui";
import { v4 as uuid } from "uuid";

const Index = ui.component({
  state: {
    uploads: [],
    progress: 0,
  },
  events: {
    'change input[type="file"]': (event, component) => {
      component.setState({ uploads: [] });

      upload('photos', {
        files: event.target.files,
        input: {
          photoId: uuid(),
        },
        onProgress: (progress, provider) => {
          component.setState({ progress });
        },
      }).then((uploads) => {
        component.setState({ uploads });
      }).catch((error) => {
        console.warn(error);
      });
    },
  },
  css: `
    .progress-bar {
      width: 100%;
      height: 10px;
      margin: 20px 0;
      background: #eee;
      border-radius: 5px;
      overflow: hidden;
    }

    .progress-bar .progress {
      height: 100%;
      background: #55c57a;
    }
  `,
  render: ({ state, when, each }) => {
    return `
      <div>
        <input type="file" />
        ${when(state.progress > 0, `
          <div class="progress-bar">
            <div class="progress" style="width:${state.progress}%;"></div>
          </div>
        `)}
        ${when(state.uploads?.length > 0, `
          <ul>
            ${each(state.uploads, (upload) => {
              return `<li>${upload.provider}: ${upload.url ? `<a href="${upload.url}">${upload.url}</a>` : upload.error}</li>`;
            })}
          </ul>
        `)}
      </div>
    `;
  },
});

export default Index;

Starting down at the render function, here, we specify some HTML that we want to render for our component. The important part here is the <input type="file" /> tag, which is how we'll select files to upload from our computer. Beneath this, we use the when render function (this is the name used for the special "contextual" functions passed to a component's render function in Joystick) to say "when the value of state.progress is greater than 0, render this HTML." "This HTML," here, is the markup for a progress bar which will fill as our upload completes. To simulate the fill, we've added an inline style attribute which sets the CSS width property dynamically on the inner <div class="progress"></div> element to the value of state.progress concatenated with a % percentage symbol (Joystick automatically provides us the upload completion percentage as a float/decimal value).

Beneath this, again using the when() function, if we see that state.uploads has a length greater than 0 (meaning we've uploaded a file and have received a response from all of our providers), we want to render a <ul></ul> tag which lists out the providers and URLs returned by those providers for our files. Here, we utilize the each() render function, which, like the name implies, helps us to render some HTML for each item in an array. Here, for each expected object inside of state.uploads, we return an <li></li> tag which tells us the provider for the specific uploads (e.g., local or s3) along with the URL returned by the provider.

Just above this, utilizing the css option on our components, we pass some simple styling for our progress bar (feel free to copy this and tweak it for your own app). The important part here is the events block just above css.
Here, we define the JavaScript DOM event listeners we want to listen for within our component (i.e., Joystick automatically scopes the event listeners defined here to this component). To events, we pass an object whose properties are defined as a string combining two values with a space in the middle: the type of DOM event we want to listen for and the element we want to listen for the event on (<event> <element>). In this case, we want to listen for a change event on our <input type="file" /> element. When this occurs, it means that our user has selected a file they want to upload; a perfect time to trigger the upload of that file.

To this property, we pass the function that Joystick will call when this event is detected on our file input. Inside, first, we call component.setState() to empty out our state.uploads value, assuming that we may run our uploader multiple times and don't want to mix up the response URLs. Next, we call the upload() function we've imported from @joystick.js/ui up above. This function is almost identical to the get() and set() functions in Joystick that are used for calling API endpoints defined as getters and setters in your Joystick app. It takes two arguments:

- The name of the uploader that we defined on the server that will handle this upload (e.g., here, we pass 'photos' as that's the name we used for our uploader on the server).
- An options object which provides the files we want to upload, any miscellaneous input data we want to pass along, and an onProgress function which is called whenever the progress of our upload changes.

For files here, we're just passing event.target.files, which contains the browser File array provided on the change event for a file input (this is required as it tells Joystick which files we're trying to upload). For input, just for the sake of demonstration, we pass an object with a single property photoId set to a call to uuid().
This is a function from the uuid package that we installed earlier (see the import at the top of this file) that generates a random UUID value. While this isn't necessary, it demonstrates how to get extra data passed alongside our upload for use with the fileName() function in our uploader definition.

For onProgress, whenever Joystick receives a progress event from the server, it calls the function we pass here with two arguments: progress, the completion percentage of the upload, and provider, the name of the provider that the progress belongs to. For example, because we're uploading to local and s3, we would expect this to get called with some progress percentage and either local or s3 for the provider value. This allows us to track progress on a per-provider basis if we wish.

Finally, because we expect upload() to return a JavaScript Promise, we've added a .then() and a .catch() callback on the end. If our upload completes without any issues, the .then() callback will fire, receiving an array of objects describing the upload result for each provider (i.e., one object for local, one object for s3, etc.). Because we're rendering our list of uploads down in our render() function, here, we just take the raw array and set it on state.uploads (remember, this is what we reference in our render() function).

To be clear: at the very top of our options object passed to ui.component(), we've provided a state object which sets some defaults for our two state values: uploads as an empty array [] and progress as 0.

That should do it! Now, if we select an image file from our computer and upload it, we should see our progress bar fill and a list of URLs rendered to the screen after it completes.

Wrapping up

In this tutorial, we learned how to add uploads to a Joystick app.
We learned how to define an uploader on the server, specifying multiple providers/destinations, passing config for each provider, and how to customize the allowed mimeTypes, fileSize, and fileName for the file we're uploading. On the client, we learned how to call our uploader, handling both the progress for the upload as well as the resulting URLs after our upload completes.
https://cheatcode.co/tutorials/how-to-upload-files-to-multiple-locations-simultaneously-with-joystick
Last post, I created a model that would predict what effect changing the taxes on cigarettes would have on health care expenditures. That model was fun to build, but even though it is a relatively simple model, it is difficult to get a feel for what is going on. Humans tend to be fairly visual creatures, so visualizing results is fairly important. In this post, what I want to do is create an interactive visualization of the model. Specifically, I want to make something that a user could interact with and thus play with what the model is telling us about the world.

The tool that I chose to use was bokeh. Bokeh is not just a photography technique. It is a really neat little python package that lets you visualize data. But more importantly, bokeh also lets you create dynamic web based visualizations. The end result of this experiment in dynamic visualizations is a lightweight web app that is hosted on heroku for free. This post is the second of three. In this post, I'm going to go over how to build the web app, and in the next I'll go over how to deploy it on heroku.

Building a Bokeh App

So we'll be using three data sources for this app: the two we used in the last post, and the resulting trace from the last post that I pickled for later use. What we are going to do is create a plot of the tax rate for a selected state, the corresponding health care expenditures for that state, and a histogram of the distribution of simulated savings in health care expenditures that comes from the model that we built in the last post.

So how do we go about doing this? Assuming that you are working in the same environment as the last post, and that you still have all of the variables and datasets available to you, we'll just jump into building the web app. First import what we need into the python environment.
from bokeh.io import output_file, show
from bokeh.layouts import widgetbox, column, row
from bokeh.models.widgets import RadioGroup, Button, Dropdown, Select
from bokeh.models import Range1d
from bokeh.plotting import figure, curdoc
import pandas as pd
import pickle
import re
import numpy as np

With this stuff in the environment, let's make some plots. We'll need three plots. We'll also define some helper variables that we will make use of when we need to update the plot.

p1 = figure(x_range=[2005, 2009], y_range=(0, 1), title='Tax Rate Per Year')
p2 = figure(x_range=[2005, 2009], y_range=(df['Data_Value'].min(), df['Data_Value'].max()), title='Expenditures on Cigarette Related Healthcare')
p3 = figure(title='Distribution of Savings (in Millions of $) Per 1% Increase in Tax Rate per Year')

lines = []
lines2 = []
hists = []
i = 0
line_dict = {}
height_dict = {}
x_start = {}
x_end = {}

Now that the basics are out of the way, we need to build a plot for each of the three graphs, and we need to make sure that we can access them individually. So we'll build the plots inside of a for-loop and keep track of which state we are working with using a dictionary. Note that for the histograms we are randomly pulling from the trace with replacement.

for state in df['LocationAbbr'].unique():
    line_dict[state] = i
    temp = df[df['LocationAbbr'] == state]
    amt = temp.iloc[4, 10]
    hist, edges = np.histogram(np.random.choice((-trace) * amt, 1000), density=True)
    x_start[state] = edges[0]
    x_end[state] = edges[-1]
    height_dict[state] = np.max(hist)
    lines.append(p1.line(temp['Year'], temp['tax']))
    lines2.append(p2.line(temp['Year'], temp['Data_Value']))
    hists.append(p3.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:],
                         fill_color="#036564", line_color="#033649"))
    i += 1

The next thing that we'll do is kind of weird. We're going to make all of the lines and fills in the figures invisible.
for line in lines:
    line.glyph.line_alpha = 0
for line in lines2:
    line.glyph.line_alpha = 0
for hist in hists:
    hist.glyph.fill_alpha = 0
    hist.glyph.line_alpha = 0

The penultimate thing that we need to do is to define a callback function that will turn on the appropriate lines when we make a selection for which state we want to look at. This function turns off the old lines and turns on the new lines for each of the figures. That is all that it does.

def callback(attr, old, new):
    p3.x_range.start = x_start[new]
    p3.x_range.end = x_end[new]
    p3.y_range.start = 0
    p3.y_range.end = height_dict[new]
    lines[line_dict[old]].glyph.line_alpha = 0
    lines[line_dict[new]].glyph.line_alpha = 1
    lines2[line_dict[old]].glyph.line_alpha = 0
    lines2[line_dict[new]].glyph.line_alpha = 1
    hists[line_dict[old]].glyph.line_alpha = 0
    hists[line_dict[new]].glyph.line_alpha = 1
    hists[line_dict[old]].glyph.fill_alpha = 0
    hists[line_dict[new]].glyph.fill_alpha = 1

With the callback function in place, we just need to give ourselves a way to pick which state we want to look at. We'll do this through a dropdown menu embedded in the app. Now the tricky thing here is to get all the available options into the dropdown. We'll do it by just grabbing the unique values from the dataset. We also need to add each of the elements to the page.

dropdown = Select(title="State", value='CA', options=list(df['LocationAbbr'].unique()))
dropdown.on_change('value', callback)
curdoc().add_root(column(dropdown, row(p1, p2), row(p3)))

That's it! To see your charts locally, save the file and run:

bokeh serve --show myapp.py

I learned something interesting from doing this. The bigger states, like California and New York, are less certain about the effects that increasing the tax rate will have on health care expenditures than small states like Idaho or Utah. The small states almost always have a small effect. But the bigger states seem like they can have small effects or really big effects.
It is interesting because unless you run the simulations and compare them across different states, you would miss this potentially valuable insight. That is the power of these dynamic visualizations. You get to see things that summary statistics like averages could miss.
https://barnesanalytics.com/creating-an-interactive-visualization-with-bokeh
All numbers in this list need to be removed (the original is 88,779 lines long): ...

You can do it line by line to avoid using more memory as the files get larger, and use regular expression replacement with re.sub to match numbers in different formats:

import re

with open('infile.txt', 'rt') as infile:
    with open('outfile.txt', 'wt') as outfile:
        for line in infile:
            line_without_numbers = re.sub(r'[0-9]+(\.[0-9]+)?', '', line).strip()
            outfile.write(line_without_numbers + '\n')

I've also run .strip() on the string to remove the leading/trailing padding spaces left where the numbers were removed (the newline is added back explicitly when writing).
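As a quick sanity check of the pattern (the sample lines here are hypothetical, not taken from the original file), the substitution behaves like this:

```python
import re

# Hypothetical sample lines; the pattern strips integers and decimals.
lines = ["12345 apple", "3.14 pie", "no numbers here"]
cleaned = [re.sub(r'[0-9]+(\.[0-9]+)?', '', line).strip() for line in lines]
print(cleaned)  # ['apple', 'pie', 'no numbers here']
```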
https://codedump.io/share/xJMSfigNLe9L/1/how-do-i-remove-all-numbers
Welcome to part 11 of the intermediate Python programming tutorial series. In this part, we're going to talk more about the built-in library: multiprocessing. In the previous multiprocessing tutorial, we showed how you can spawn processes. If these processes are fine to act on their own, without communicating with each other or back to the main program, then this is fine. These processes can also share a common database, or something like that, to work together, but many times it will make more sense to use multiprocessing to do some processing, and then return results back to the main program. That's what we're going to cover here.

To begin, we're going to import Pool:

from multiprocessing import Pool

Pool allows us to create a pool of worker processes.

Let's say we want to run a function over each item in an iterable. Let's just do:

def job(num):
    return num * 2

Simple enough, now let's set up the processes:

if __name__ == '__main__':
    p = Pool(processes=20)
    data = p.map(job, [i for i in range(20)])
    p.close()
    print(data)

In the above case, what we're going to do is first set up the Pool object, which will have 20 processes that we'll allow to do some work. Next, we're going to map the job function to a list of parameters ([i for i in range(20)]). When done, we close the pool, and then we print the result. In this case, we get:

[0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38]

If you raise the range and the number of processes into the hundreds, you can watch your CPU max out and see the processes if you like. ~500 for me seems to do the trick.
https://pythonprogramming.net/values-from-multiprocessing-intermediate-python-tutorial/
Creating files: Use any text editor, as long as it can save into straight ASCII text. Something with automatic indentation (e.g. a Python mode) like emacs or vi is suggested but not required.

Name your Python modules <valid_identifier>.py, e.g. 'file1.py' or 'file_1.py', but not 'file-1.py'. You have to be able to import them :)

Name your Python scripts any valid UNIX/Windows filename. You should avoid '/' and '\' as they are special characters.

Running files: 'python filename' should always work for valid Python. 'python -i filename' will run your code and then leave you at the interpreter prompt. 'import filename' will work if 'filename.py' exists (and is valid Python, yada yada).

Code inside of 'if __name__ == "__main__":' blocks will only execute when you run the file with 'python filename' and not on import; this is useful if you want to be able to import a module without having it do something, but also want to be able to execute it from the command line as a script or a test. More on this later, with doctest and unittest.
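To make the 'if __name__ == "__main__":' point concrete, here is a minimal sketch (the file name and function are made up for illustration):

```python
# file: greet.py (hypothetical)
def greet(name):
    return "Hello, " + name

if __name__ == "__main__":
    # This line runs under 'python greet.py', but not on 'import greet'.
    print(greet("world"))
```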
http://msu-cse491-fall-2008.wikidot.com/creatingandrunningpythonscripts
NAME

remove - delete a name and possibly the file it refers to

SYNOPSIS

#include <stdio.h>

int remove(const char *pathname);

DESCRIPTION

remove() deletes a name from the filesystem. It calls unlink(2) for files, and rmdir(2) for directories.

RETURN VALUE

On success, zero is returned. On error, -1 is returned, and errno is set appropriately.

ERRORS

- EFAULT - pathname points outside your accessible address space.
- EACCES - Write access to the directory containing pathname was not allowed, or one of the directories in pathname did not allow search permission.

CONFORMING TO

ANSI C, SVID, AT&T, POSIX, X/OPEN, BSD 4.3

BUGS

Infelicities in the protocol underlying NFS can cause the unexpected disappearance of files which are still being used.

NOTE

Under libc4 and libc5, remove was an alias for unlink (and hence would not remove directories).

SEE ALSO

unlink(2), rename(2), open(2), rmdir(2), mknod(2), mkfifo(3), link(2), rm(1), unlink(8).
https://manpages.debian.org/testing/manpages-pt-dev/remove.3.pt.html
Hi Rich,

Thanks for using our products. I have tested the scenario and was unable to notice any problem. You can check the following documentation link for details and a code snippet as per your requirement: How to - Create MultiLayer PDF document. Please do let us know if you need any further assistance.

Thanks & Regards,

The problem is that the text box sizing seems to be incorrect. I set text on the page with the code above … Just to prove the problem, I can set the width, height, X, and Y all to 300. Notice in the attached image the actual SIZE of the created text box in comparison to its X and Y. The size is getting overridden by something else (is the font size changing the bounding box?). Also, changing the font size changes the bounding size of the text area, even though I have set FixedWidth and FixedHeight.

Is there a better way to do a hidden text layer to make the PDF searchable? I need to be able to create the text in the location on the image so the word is highlighted (I have the coordinates from the OCR location).

Hi Rich,

Sorry for the delay in reply. As I understand your problem, you are facing the issue that when you change the font size, the text area size also increases. I tested this issue and I am able to notice this problem. Please confirm whether my understanding of your issue is correct or whether you are facing some other issue. It would be better if you could share your complete code and the generated PDF file with us. This will help us identify and rectify the issue soon.

Also, regarding priority support, I think you can use the Priority Support account which has been purchased by your company and register your issue directly in the Priority Support forum as well. Following is the link for the Priority Support forum.

Sorry for the inconvenience,
https://forum.aspose.com/t/how-to-create-a-hidden-text-layer-behind-an-image/100714
hi all

environment: win10, java8, sikuli 1.1.0

I wrote some .py files that I want to run through the command line. Here is a part of the code:

1: self.fileCommon
2: paste("cd " + self.cmdPath)
3: wait(2)
4: type(Key.ENTER)
5: wait(2)
6: paste("python " + self.srmMany + " " + b + " " + a + " " + " E29")
7: wait(2)
8: type(Key.ENTER)
9: wait(7)
10: paste("exit")
11: wait(2)
12: type(Key.ENTER)
13: wait(2)

When I run it, paste doesn't work sometimes. Sometimes it doesn't execute line 2, and the script continues to run just like nothing happened. But sometimes line 10 doesn't work... Could you please help me to solve it? Thanks so much!!!

Question information: Language: English; Status: Solved (2018-07-04); Last query: 2018-07-04; Last reply: 2018-01-30.

thanks so much, but

1: self.fileCommon here is ok, only the "paste" function doesn't work sometimes....

------- self.fileCommon ----
This is the method of encapsulation.

def openFile(self, path):
    wait(1)
    type("r", Key.WIN)
    wait(1)
    if exists(   # the image arguments in this method were lost in the source
        wait(2)
        wait(2)
        wait(5)
    else:
        wait(1)
        wait(2)
        wait(5)

do you have other ideas? thanks a lot

> 1: self.fileCommon here is ok, only the "paste" function doesn't work sometimes....

... looks so, but might not be in the sense I mentioned: you have a dumb wait(5) after triggering the run box from the start menu. An intelligent wait would be to wait for something related to the opened stuff, whatever it is, that signals that its GUI is ready to accept a paste. If this for some reason sometimes takes more than 5 seconds, paste would fail in this case.

thx a lot

I guess .openFile("cmd") (line 1: self.fileCommon) opens some GUI, where the paste() should fill in. You have to somehow wait for the GUI to be ready, e.g. insert a dumb wait(1) or a more intelligent wait(someImage, 10), that if found signals the GUI to be ready.
https://answers.launchpad.net/sikuli/+question/663835
.NET Common Windows Forms Controls

In this tutorial we will learn about common Windows Forms controls in Visual Basic .NET 2005. In part 1 of this article, we will cover the control hierarchy and the following controls: Label, LinkLabel, TextBox, RichTextBox, PictureBox, GroupBox, Panel, Button, CheckBox, RadioButton, ListBox, CheckedListBox, and ComboBox.

Control Hierarchy

The base class for all Windows controls is located in the System.Windows.Forms namespace. These controls are built into the .NET framework and form the basis for derived controls. These controls have a distinct hierarchy of their own. For example, the hierarchy of the RichTextBox control is given below:

Object
  MarshalByRefObject
    Component
      Control
        TextBoxBase
          RichTextBox

We shall take a quick look at some of the properties of the control:

Some of the methods of the control are listed below:

Some of the events of the control are listed below:

The above list is not exhaustive. However, a designer would find it profitable to get acquainted with the properties and methods of the different types of controls that he uses. Derived controls are customized controls that use the .NET framework controls as a base to build upon.

Label

The Label control is used in a number of applications to indicate the nature of the input required, the name of a control, or just to convey a message to the user. The AutoSize property of this control is true by default, but can be set dynamically. Since this control cannot receive focus, it can also be used to create access keys for other controls. Let us see a demo for this activity:

- Draw the label first, and then draw the other control. Otherwise, draw the controls in any order and set the System.Windows.Forms.Control.TabIndex property of the label to one less than that of the other control.
- Set the label's System.Windows.Forms.Label.UseMnemonic property to true.
- Use an ampersand (&) in the label's System.Windows.Forms.Label.Text property to assign the access key for the label. For more information, see Creating Access Keys for Windows Forms Controls.
- The following code will accomplish the task. Press F5 and see the demo in action.

The output of the above code is shown in the screenshot pasted below:

Link Labels

This control is similar to the Label control. In addition to all the properties, methods, and events of the Label control, the LinkLabel control has properties for hyperlinks and link colors. The LinkLabel.LinkArea property sets the area of the text that activates the link. The LinkLabel.LinkColor, LinkLabel.VisitedLinkColor, and LinkLabel.ActiveLinkColor properties set the colors of the link. The LinkLabel.LinkClicked event determines what happens when the link text is selected. Link labels are labels that support hyperlinks. Some of the important properties and events are given below:

The following code snippet demonstrates the usage of the link label.

TextBox

The hierarchy of the TextBox control is given below:

Object
  MarshalByRefObject
    Component
      Control
        TextBoxBase
          TextBox

This control is used mainly to collect user input and display text. The look and feel and the behavior can be controlled by manipulating the values of its properties, invoking some of its methods, and handling some of the events that this control is enabled to handle. Some of the methods, properties, and events of the text box are given below:

RichTextBox

The RichTextBox control is used for displaying, entering, and manipulating text with formatting. Rich Text Format (RTF) supports many formats. The following code sample shows the usage of rich text boxes and how we can load RTF documents into them. We shall take a look at some of the methods, properties, and events of the RichTextBox:

The events defined for this control are the same as those defined for the TextBox control.
PictureBox

With the Windows Forms System.Windows.Forms.PictureBox control, you can load and display a picture on a form at design time by setting the System.Windows.Forms.PictureBox.Image property to a valid picture. Acceptable file types can be any of the following:

To display a picture at design time:

1. Drag and drop a PictureBox control on a form from the ToolBox.
2. On the Properties window, select the Image property, then click the ellipsis button to display the Open dialog box.
3. If you are looking for a specific file type (for example, .gif files), select it in the Files of type box.
4. Select the file you want to display.

To clear the picture at design time:

On the Properties window, select the Image property and right-click the small thumbnail image that appears to the left of the name of the image object. Choose Reset.

You can also add a picture to the control at run time and change its properties at run time to perform activities like stretching the image. The following lines of code demonstrate tasks like adding a picture to the control at run time and setting the value of PictureBox1.SizeMode to PictureBoxSizeMode.StretchImage. Press F5 to execute and see the effect.

The first screenshot shows the image added to the program at run time using a Bitmap object. The second screenshot shows the image added to the picture control using the FromFile method. The SizeMode of this control is set to stretch, causing the image to be stretched.

GroupBox

Controls can be grouped inside a GroupBox, and at design time all the controls can be moved easily; when you move the single System.Windows.Forms.GroupBox control, all its contained controls move, too. The group box's caption is defined by the GroupBox.Text property.

Panel

All the features of the GroupBox are applicable to the Panel control also. Additionally, the Panel control can have a scroll bar, while the GroupBox displays a caption.
Buttons

Buttons provide the most common way of creating and handling events in code. The control exposes common properties and methods that help us use it.

CheckBoxes

Check boxes are controls that you click to select and click again to deselect. This control is ideally used when getting user input for yes-or-no answers. Open a new project, and on Form1 add three check boxes, a label, and a command button. Type the following code in the code-behind form:

The following screenshots illustrate the behavior of the program:

The screenshot shown below shows the initial screen. Note the label message that says that you have not clicked on any of the check boxes:

The following screenshot shows that checkbox two is clicked:

The following screenshot shows that the checkbox is unchecked and the label message has changed:

RadioButtons

Radio buttons are also check boxes, and the difference lies in the following areas:

1. They are round, as against the checkboxes, which are square.
2. They are used mostly in groups.

While checkboxes are used individually, radio buttons are used in groups. In the case of radio buttons, if you check button 3 after clicking button 1, button 1 is deselected automatically. Let us see an illustration:

The following code gives the demo for the radio button:

The following screenshot shows the output:

ListBox

A list box is a useful tool that helps you to display a list of several items from which you can select one or more. A scroll bar is automatically added to the list box when the number of items in the list box increases.

1. To a new form, add a listbox from the toolbox.
2. Locate the Items property in the property sheet.
3. Click on the Collections button on the other side.
4. A new dialog box is opened.
5. Add the items to this dialog box.
6. You must type one item per line.
7. On closing the dialog box, the items added to the listbox are available.
8.
Enter the following lines of code in the code window:

Public Class Form1
    Private Sub ListBox1_SelectedIndexChanged(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles ListBox1.SelectedIndexChanged
        Label1.Text = "Item " & ListBox1.SelectedIndex + 1 & " is Selected"
    End Sub
End Class

Now if you execute the program by pressing F5, you will see the following output:

You can also add items to the listbox at runtime by using the following line of code:

ListBox1.Items.Add("Item10")

In the same way you can also remove items from the listbox:

ListBox1.Items.Remove(ListBox1.SelectedItem)

Checked ListBox

The checked listbox is derived from the standard listbox control. In this case you can see a check box at the side of each item in the list. To check an item, the user has to double-click a checkbox by default, but you can also do it with a single click if you set the CheckOnClick property to true.

A screenshot of the application executed is given below:

ComboBox

This control is a frequently used control. It is made up of two parts. The top part is a text box that allows the user to enter data. The other part is a list box that allows the user to select from the items listed. You can allow the user to type in an item or select an item from the list.
http://www.exforsys.com/tutorials/vb.net-2005/common-windows-forms-controls-section-1.html
#include <zb_zcl_price.h>

BlockThreshold Sub-Payload Format

The Block Thresholds represent the threshold values applicable to an individual block period and, where applicable, to a particular tier. The Tier/NumberOfBlockThresholds field is an 8-bit bitmap. The format of the bitmap is decided by Bit0 of the Sub-Payload Control field.

If Bit0 of the Sub-Payload Control field is 0, then the least significant nibble represents a value specifying the number of thresholds to follow in the command, and the most significant nibble represents the tier that the subsequent block threshold values apply to.

If Bit0 of the Sub-Payload Control field is 1, then the most significant nibble is unused and should be set to 0.

Valid values for the NumberOfBlockThresholds sub-field are 0 to 15, reflecting no block in use (0) to block 15 (15). Valid values for the Tiers sub-field are from 0 to 15, reflecting no tier to tier fifteen.
https://developer.nordicsemi.com/nRF_Connect_SDK/doc/zboss/3.8.0.1/structzb__zcl__price__block__threshold__sub__payload__s.html
The QScrollArea class provides a scrolling view onto another widget. More...

#include <QScrollArea>

Inherits QAbstractScrollArea.

The scroll area's widget is set with setWidget(); it can be retrieved with widget(). The view can be made resizable with the setWidgetResizable() function.

widgetResizable : bool

This property holds whether the scroll area should resize the view widget.

If this property is set to false (the default), the view honors the size of its widget. Regardless of this property, you can programmatically resize the widget using widget()->resize(), and the scroll area will automatically adjust itself to the new size. If this property is set to true, the view will automatically resize the widget in order to avoid scroll bars where they can be avoided, or to take advantage of extra space.

Access functions:

Constructs a scroll area with the given parent, and with no widget; see setWidget().

Destroys the scroll area.

Sets the scroll area's widget to w. w becomes a child of the scroll area, and will be destroyed when the scroll area is deleted or when a new view widget is set.

See also widget().

Removes the view widget from the scroll area, and passes ownership of the widget to the caller.

Returns the scroll area's widget, or 0 if there is none.

See also setWidget().
http://doc.trolltech.com/4.0/qscrollarea.html
For most administrators, their jobs take their Perl programming into realms of maintaining user accounts and managing servers. More and more administrators are finding it important to write scripts that interact with databases, however, whether for Web-based Common Gateway Interface (CGI) scripts or for querying administrative databases for reports. Many Perl modules and extensions support databases such as the variations of xDBM_File and libraries for SQL Server, Sybase, Oracle, MySQL, Dbase, and various others. A couple of Win32-specific extensions, however, marry the Win32 platform with generic database services such as ODBC. This chapter provides an overview on using the Win32::ODBC extension. All databases have their own unique way of doing things. When a programmer wants to access database services from within his code, he has to learn how to use that particular database. Really this is not so difficult because most of the major database vendors document their systems and provide libraries that a coder can link to. This is all fine and good as long as you always use one particular database. The moment you need to access a different type of database, however, not only will you have to learn an entire set of commands and procedures, but you will also need to change your scripts so that they can interface with this new database. Usually this means a total rewrite of the code. Now imagine that you want to write a database application that would work with any database that your client may have. A perfect example is that you want to write a shopping cart CGI script for the Web. Because you do not know which database your client may have implemented, you would have to write several variations of the same script to handle all the possible databases. If this is not more than you want to contend with, just imagine what it would be like to support all those scripts. Testing them would be just as horrific because you would have to install each and every database system. 
Wouldn't it be nice if all databases conformed to just one standard so that if you programmed your scripts to utilize it, then one script would work across all database systems? This is where ODBC (Open DataBase Connectivity) comes in. ODBC is an initiative that the database industry has come to accept as the standard interface. Now many people believe that ODBC is a Microsoft product, but they are incorrect in believing so. Microsoft did champion the API and was one of the first companies to have working implementations of it. The actual interface standard was designed, however, by a consortium of organizations such as X/Open SQL Access Group, ANSI, ISO, and several vendors such as IBM, Novell, Oracle, Sybase, Digital, and Lotus, among others. The standard was designed to be a platform-independent specification, and it has been implemented on the Win32, UNIX, Macintosh, and OS/2 platforms, just to name a few. ODBC has become so widely accepted that some vendors like IBM, Informix, and Watcom have designed their database products' native programming interface based on ODBC.

When a program uses ODBC, it just makes calls into functions defined by the ODBC API. When a Perl script makes a call in to the ODBC API, it is typically calling in to a piece of software known as the ODBC Manager. This manager is a dynamic link library (DLL) that decides how to handle the call. Sometimes the Manager will perform some task, such as listing all available Data Source Names (DSNs), and return to the calling script. Other times the ODBC Manager will have to load a specific ODBC driver and request that driver (usually another DLL) to perform the task, such as connecting to the database and executing a query.

Each level of software that the ODBC Manager has to pass through to accomplish a task is known as a tier. ODBC has different tiered models that describe how the basic infrastructure works.
Each model is premised on how many tiers exist between the ODBC Manager and the actual database. There could, in theory, be an unlimited number of tiers, but the impracticality of administrating and configuring so many tiers renders only three common models: the one-, two-, and three-tier models.

The one-tier model consists of only one step (or tier). The client application talks to the ODBC Manager, which asks the ODBC driver to perform the task. (This is the first tier.) The driver opens the database file. This is a very simple model in which the ODBC driver performs the work of database lookup and manipulation itself. Examples of single-tier ODBC drivers include Microsoft's Access, FoxPro, and Excel ODBC drivers.

In the two-tier model, just as in the one-tier model, the client application talks to the ODBC Manager, which talks to the ODBC driver. (This is the first tier.) Then the ODBC driver will talk to another process (usually on another machine via a network), which will perform the actual database lookup and manipulation. (This is the second tier.) Examples of this are IBM's DB2 and MS's SQL Server.

Just like the first two models, in the three-tier model the client application talks to the ODBC Manager that talks to the ODBC driver (tier one). The ODBC driver then talks to another process (usually running on another machine) which acts as a gateway (tier two) and relays the request to the database process (tier three) which can be a database server or a mainframe such as Microsoft's SNA server.

These different tiers are not so important to the programmer as they are to the administrator who needs to configure machines with ODBC installed. However, understanding the basic infrastructure helps in making decisions such as how to retrieve data from the driver so that network traffic is held to a minimum, as discussed later in the section titled "Fetching a Rowset." To truly separate the client application from the database, there are Data Source Names (DSNs).
DSNs are sets of information that describe to the ODBC Manager what ODBC driver to use, how the driver should connect to the database, what userid is needed to log on to the database, and so forth. This information is collectively referred to by using a simple name, a Data Source Name. I may set up a DSN that tells the ODBC Manager that I will be using a Microsoft SQL Server on computer \\DBServer, for example, and that it will be accessed via named pipes using a userid of 'JoeUser'. All this information I will call "Data." When writing an application, I will just make an ODBC connection to the DSN called "Data" and ODBC will take care of the rest of the work. My application needs only to know that I connect to "Data," and that is it. I can later move the database to an Oracle server, and all I need to do is change the ODBC driver (and, of course, configure it) that the "Data" DSN uses. The actual Perl script never needs to be altered. This is the beauty of ODBC.

ODBC is an application programming interface (API), not a language. This means that when you are using ODBC you are using a set of functions that have been predefined by the groups who created the ODBC specification. The ODBC specification also defines what kinds of errors can occur and what constants exist. So ODBC is just a set of rules and functions. Now, when you use ODBC you need to interact somehow with the database. ODBC provides a set of rules and functions on how to access the database but not how to manipulate data in the database. To do this you need to learn about a data querying language known as SQL.

The language that ODBC uses to request the database engine to perform some task is called SQL (pronounced SEE-kwel), an acronym for Structured Query Language. SQL was designed by IBM and was later standardized as a formal database query language. This is the language that ODBC uses to interact with a database.
A full discussion on SQL is beyond the scope of this book, but this chapter covers the general concepts that most people use. Before discussing how to use SQL, first you need to understand some very simple but basic SQL concepts such as delimiters and wildcards.

Note SQL is not case sensitive, so preserving case is not imperative; however, conventionally, SQL keywords are all in caps. So, if you were to issue a query like this:

SELECT * FROM Foo

it would yield the same results as this:

SelECt * froM Foo

When you need to specify a separation of things, you use a delimiter. Typically, a delimiter is an object (such as a character) that symbolizes a logical separation. We all know about these, but some may not know them by name. When you refer to a path such as c:\perl\lib\win32\odbc.pm, for example, the backslashes (\) delimit the directory names with the last backslash delimiting the file name. The colon (:) delimits the drive letter from the directory names. In other words, each backslash indicates a new directory; backslashes separate directory names or they delimit the directory names.

Many coders will delimit things to make them easier to handle. You may want to save a log file of time, status, and the name of the user who ran a script, for example. You could separate them by colons so that later you can parse them out. If you have saved to a file the line joel:896469382:success, when you read it back you can parse it out using Example 7.1.

Example 7.1 Parsing data using delimiters

open( FILE, "< test.dat" ) || die "Failed to open: ($!)";
while( <FILE> )
{
    my( @List ) = split( ":", $_ );
    print "User=$List[0]\nDate=" . localtime($List[1]) . "\nStatus=$List[2]\n";
}
close( FILE );

When it comes to SQL, delimiters are quite important. SQL uses delimiters to identify literals. A literal is just a value that you provide. If you are storing a user's name of 'Frankie' into a database, for example, 'Frankie' is a literal.
Perl would refer to it as a value (as in, "you assigned the value 'Frankie' to the variable $UserName"). In the SQL query in Example 7.3, the number 937 and the string 'Noam Chomsky' are both considered literals.

When you delimit a literal, you must understand that there are actually two separate delimiters: one for the beginning (known as the literal prefix), and one for the end of the literal (the literal suffix). If that isn't enough to remember, consider this: Each data type has its own set of delimiters! Therefore you use different delimiters when specifying a numeric literal, time literal, text literal, currency literal, and the list goes on. The good news is that it is not too difficult to discover the delimiters that a particular ODBC driver expects you to use. By using the GetTypeInfo() method, you can discover what delimiters, if any, are required by your ODBC driver for a particular data type (as in Example 7.2). For more on how to use this method, refer to the "Retrieving Data Type Information" section.

Example 7.2 Determining literal delimiters

# This assumes that $db is a valid Win32::ODBC object.
$String = "Your data";
$DataType = $db->SQL_CHAR();
if( $db->GetTypeInfo( $DataType ) )
{
    if( $db->FetchRow() )
    {
        %Type = $db->DataHash();
    }
}
$Field = "$Type{LITERAL_PREFIX}$String$Type{LITERAL_SUFFIX}";

The even better news is that most databases follow a simple rule of thumb: Text literals are delimited by single quotation marks (both the prefix and suffix), and all other literals are delimited by null strings (in other words, they do not require delimiters). So you can usually specify a SQL query such as the one in Example 7.3 where the text literal is delimited by single quotation marks and the numeric literal has null or empty string delimiters (nothing delimiting it).
Example 7.3 Delimiting a text literal in SQL

SELECT * FROM Foo WHERE Name = 'Noam Chomsky' AND ID > 937

The use of delimiters when specifying literals can cause problems when you need to use a delimiter in the literal itself. If the prefix and suffix delimiters for a text literal are the single quotation mark ('), for example, it causes a problem when the literal text has a single quotation mark or apostrophe within itself. The problem is that because the text literal has an apostrophe (a single quotation mark) it looks to the database engine as if you are delimiting only a part of your literal, leading the database engine to consider the remaining part of the literal as a SQL keyword. This usually leads to a SQL error. In Example 7.4, the text literal is 'Zoka's Coffee Shop', and the SQL statement is this:

SELECT * FROM Foo WHERE Company like 'Zoka's Coffee Shop'

Notice the three text delimiters, one of which is meant to be the apostrophe in "Zoka's". This will be parsed out so that the database will think you are looking for a company name of "Zoka", and that you are using a SQL keyword of "s Coffee Shop", and then starting another text literal without a delimiter. This will result in an error.

Example 7.4 Specifying a text literal with a character that is a delimiter. (This would cause a SQL error.)

$CompanyName = "Zoka's Coffee Shop";
$Sql = "SELECT * FROM Foo WHERE Company like '$CompanyName' ";

The way around this is to escape the apostrophe. This is performed by prepending the apostrophe with an escape character. There is no difference between this and how Perl uses the backslash escape character when printing a new line ("\n") or some other special character. The SQL escape character is the single quotation mark. "But wait a moment," you may be thinking, "the single quotation mark is typically a text literal delimiter!"
Well, although that is true, if SQL finds a single quotation mark in between delimiters and the single quotation mark is followed by a valid escapable character, SQL will consider it indeed to be an escape character. What this means is that when you are using a single quotation mark in your text literals, you must escape it with another single quotation mark. The most practical way of doing this is to search through your strings and replace all single quotation marks with two single quotation marks (not a double quotation mark; there is a big difference) as in Example 7.5.

Example 7.5 How to replace single quotation marks with escaped single quotation marks

$TextLiteral =~ s/'/''/g;

If you wanted to correct the code in Example 7.4, you could add a function that performs the replacement for you, as in Example 7.6.

Example 7.6 Escaping apostrophes

$CompanyName = Escape( "Zoka's Coffee Shop" );
$Sql = "SELECT * FROM Foo WHERE Company = '$CompanyName' ";

sub Escape
{
    my( $String ) = @_;
    $String =~ s/'/''/g;
    return $String;
}

Tip Not all ODBC drivers use single quotation marks to delimit text literals. Some may use other characters, and some may go as far as to use different delimiters for the beginning and ending of a literal. The way you find out which characters to use is by calling the GetTypeInfo() method, which is discussed in the "Retrieving Data Type Information" section.

SQL allows for wildcards when you are querying text literals. The two common wildcards are as follows:

* Underscore (_). Matches any one character.
* Percent sign (%). Matches any number (zero or more) of characters.

These wildcards are used only with the LIKE predicate (discussed later in this chapter in the section titled "SELECT Statement"). For now you need to be aware that wildcards are supported by most databases (but not all). You can use the GetInfo() method to determine whether your ODBC driver supports wildcards (see Example 7.7).
If you need to specify a percent sign or an underscore in your text literal and not have it interpreted as a wildcard, you will have to escape it first, just like you have to escape apostrophes. The difference between escaping apostrophes and escaping wildcards is twofold:

* Not all ODBC drivers support escaping wildcards (believe it or not).
* Usually, the escape character is the backslash (\), but you can change it to be something else.

The first of the differences is pretty much self-explanatory; not all ODBC drivers allow support for escaping wildcards. You can check whether your ODBC driver supports wildcards and escapable wildcards by using the GetInfo() method, as in Example 7.7.

Example 7.7 Determining the wildcard escape character

# This assumes that $db is a valid Win32::ODBC object.
if( "Y" eq $db->GetInfo( $db->SQL_LIKE_ESCAPE_CLAUSE() ) )
{
    $EscapeChar = $db->GetInfo( $db->SQL_SEARCH_PATTERN_ESCAPE() );
    print "Escaping wildcards is supported.\n";
    print "The wildcard escape character is: $EscapeChar\n";
}

If your ODBC driver supports the use of escaped wildcards, you can set the escape character to be whatever you want it to be (you don't have to settle for what the ODBC driver uses by default) by using a LIKE predicate escape character sequence. This process is described later in the chapter in the section on escape clauses.

At times, you may need to use special characters in a query. Assume, for example, that you have a table name that has a space in its name, such as "Accounts Receivable." If you were to use the table name in a SQL query, the parser would think you are identifying two separate tables: Accounts and Receivable. This is because the space is a special character in SQL. Ideally you would rename your table to "Accounts_Receivable" or something else that does not have such special characters. Because this is not an ideal world all the time, ODBC provides a way around this called the identifier quote character.
Most databases use a double quotation mark (") as the identifier quote character, but because some do not, you may want to query your ODBC driver to discover which character you should use. This is done using the GetInfo() method, which is discussed later in the "Managing ODBC Options" section. When you need to use special characters in an identifier, surround the identifier with the identifier quote character. Example 7.8 shows a SQL query making use of the identifier quote characters for the table name "Accounts Receivable."

Example 7.8 Using identifier quote characters

SELECT * FROM "Accounts Receivable"

When you construct a SQL statement, you are using a standardized language that has been around for a while. There are dozens of good books available that explore SQL and can provide insight on how to optimize your queries and take full advantage of SQL's powerful features. This section is just provided for the sake of completeness because SQL is the language that ODBC uses.

When you use SQL, you construct what is called a statement. A statement is a type of command that can have many details. If you want to get all the rows from a database table that meet some criteria (such as a value of 25 in the Age column), for example, you use a SELECT statement. A SELECT statement retrieves a set of data rows and columns (known as a dataset) from the database. It is quite simple to use. The basic structure of a statement is:

SELECT [ALL | DISTINCT] columnname [, columnname] ...
FROM tablename
[WHERE condition]
[ORDER BY columnname [ASC|DESC] [, columnname [ASC|DESC]] ...]

where:

columnname is the name of a column in the table.
tablename is the name of a table in the database.
condition is a condition that determines whether to include a row.

A SELECT statement will retrieve a dataset consisting of the specified columns from the specified table. If an asterisk (*) is specified rather than a column list, all columns are retrieved.
Every row in the table that meets the criteria of the WHERE clause will be included in the dataset. If no WHERE clause is specified, all rows will be included in the dataset. The rows in the dataset will be sorted by the column names listed in the order listed if you specify an ORDER BY clause. If you use ORDER BY LastName, FirstName, the resulting dataset will be sorted in ascending order (the default) first by LastName and then by FirstName. For each column specified, you can use the ASC or DESC keyword to indicate sorting the column by ascending or descending order, respectively. If neither keyword is specified, the column will be sorted however the previous column was. If no keyword is specified for the entire ORDER BY clause, ASC will be assumed. It is interesting to note that instead of specifying a column name for the ORDER BY clause, you can refer to a column number (for example, ORDER BY 5 ASC, LastName DESC, 4, 3).

Example 7.9 retrieves a dataset consisting of all the fields (the asterisk indicates all fields) from all the records contained in the table called 'Foo'.

Example 7.9 Simple SELECT query

SELECT * FROM Foo

The WHERE predicate specifies a search condition. Only rows that meet the criteria set by the WHERE clause are retrieved as part of the dataset. WHERE clauses consist of conditional statements that equate to either true or false such as "Age > 21". Table 7.1 lists the valid conditions.

Table 7.1 Valid conditional predicates

Operator    Description

Usually, the conditional statement includes the name of a column. When a column is referred to, you are referring to the value in the column. In Example 7.10, a search condition of Age > 21 is used. Every row that has a value of greater than 21 in the Age column will satisfy the condition; therefore, it will be returned in the dataset.

Example 7.10 Specifying a WHERE clause

SELECT * FROM Foo WHERE Age > 21

Example 7.11 makes use of the WHERE predicate to indicate a condition of the search.
The query will retrieve a dataset consisting of the Firstname, Lastname, and City fields in the Users table only if the user's first name begins with the letter 'C' (the '%' is a wildcard that indicates that you don't care what comes after the 'C'; refer to the section on wildcards). The dataset will then be sorted by Lastname (by default the order will be ascending).

Example 7.11 Complex SELECT query

SELECT Firstname, Lastname, City FROM Users WHERE Firstname like 'C%' ORDER BY Lastname

Multiple conditions can be used in a WHERE clause by using Boolean conjunctions such as AND and OR. Example 7.12 will return a dataset with rows whose Age column has a value of greater than 21 and whose LastName column begins with either the letters 'C' or 'R'.

Example 7.12 Using multiple conditions in a SELECT statement

SELECT * FROM Foo WHERE Age > 21 AND (LastName like 'C%' OR LastName like 'R%')

For another SELECT statement consider Example 7.13. This query will return a dataset consisting of all fields only if the City field is 'Seattle' and the Zip code is not equal to 98103. The list will then be sorted in descending order (starting with Z's and ending with A's) by last name.

Example 7.13 SELECT statement with sorting and a condition

SELECT * FROM Users WHERE City like 'Seattle' AND Zip <> 98103 ORDER BY Lastname DESC

INSERT Statement

An INSERT statement adds a row to a table in a database. The structure of an INSERT statement is as follows:

INSERT INTO tablename [(columnname1[, columnname2] ... )] VALUES (value1[, value2] ... )

where:

tablename is the name of a table in the database.
columnname is the name of the column to receive a value.
value is the value to be placed into the column.

The INSERT statement is used when you need to add a record to a table. The trick here is that when you specify a list of column names, you must list the values for those columns in the same order as the columns.
The list of column names is optional and if left out, the first value specified will be stored into the first column, the second value will be stored into the second column, and so on. Example 7.14 adds a row into the table Foo, assigns the value 'Dave' to the Name column, 'Seattle' to the City column, and 98103 to the Zip column.

Example 7.14 A simple INSERT statement

INSERT INTO Foo (Name, City, Zip) VALUES ('Dave', 'Seattle', 98103)

Just to demonstrate how you can use the INSERT statement without providing any column names, look at Example 7.15.

Example 7.15 Using the INSERT statement without any column names

INSERT INTO Foo VALUES ('Zoka''s Coffee Shop', 'Double Latte', 2.20)

This example assumes that the first and second columns are text data types and the third is either a floating type or a currency type (notice that the example escapes the apostrophe in the first value).

Note It is rather important to understand that although the SQL language is fairly standardized, not all data sources implement it in the same way. To MS SQL Server, the INTO keyword for an INSERT statement is optional, for example; however, MS Access requires it. It is most practical to never use data source-specific shortcuts or leave out optional keywords unless you are sure that your script will always be used with a particular data source.

Not only can you select and insert data into a table, but you can also change the values in a given row of a table. The capability to update a row is tremendously powerful and is performed using an UPDATE statement. The statement's syntax is similar to that of the INSERT statement:

UPDATE tablename SET columnname1=value1 [, columnname2=value2] ... [WHERE condition]

where:

tablename is the name of a table in the database.
columnname is the name of the column to receive a value.
value is the value the column should be set to.
condition is a search condition as defined in the SELECT statement section.

This statement can be used to modify the existing rows of data.
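Since a statement is just text, an UPDATE is typically assembled from Perl variables before being submitted. The following is a minimal sketch, not part of Win32::ODBC itself; the table and column names are illustrative, and it reuses the apostrophe-escaping Escape() function from Example 7.6:

```perl
# Sketch: building an UPDATE statement as a Perl string.
# The table (Foo) and columns are illustrative; Escape() is the
# apostrophe-escaping function from Example 7.6.
sub Escape
{
    my( $String ) = @_;
    $String =~ s/'/''/g;
    return $String;
}

my $NewCity = "Seattle";
my $Name    = "O'Malley";
my $Sql = "UPDATE Foo SET City = '" . Escape( $NewCity )
        . "' WHERE Name = '" . Escape( $Name ) . "'";
print "$Sql\n";    # UPDATE Foo SET City = 'Seattle' WHERE Name = 'O''Malley'
```

The resulting string would then be handed to a Win32::ODBC object's Sql() method, exactly as with any other statement.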
Example 7.16 uses an UPDATE statement to change the Department column for all rows in the Users table. If a row's Department column contains the value Personnel it is changed to the more politically correct Human Resources.

Example 7.16 Using the UPDATE statement

UPDATE Users SET Department = 'Human Resources' WHERE Department = 'Personnel'

To remove a row from a table is not difficult at all. This is done with a DELETE statement:

DELETE [FROM] tablename [WHERE condition]

where:

tablename is the name of a table in the database from which rows will be deleted.
condition is a search condition as defined in the SELECT statement section.

The first FROM keyword is optional; some data sources may require it. The tablename (following the optional first FROM keyword) is the table that will be affected by the statement. If the search condition specified is TRUE for a row, the row is deleted from tablename. In Example 7.17, all rows from the table Students will be deleted as long as the row has a value greater than 4 in the Year column and the SchoolName column contains the value Michigan State University. If no search condition is specified, all rows from tablename are deleted, as in Example 7.18.

Example 7.17 Using the DELETE statement

DELETE FROM Students WHERE Year > 4 AND SchoolName = 'Michigan State University'

Example 7.18 Deleting all rows in a table

DELETE FROM Students

Each and every data source has its own way of dealing with specific data types. The DATE data type format for Oracle 6 is "Aug 20, 1997", for example, but IBM's DB2 is "1997-08-20". This becomes quite a problem when creating a SQL statement where you have no idea what the actual database engine will be. Because ODBC attempts to abstract the particulars of different databases, the ODBC standard has adopted a technique to contend with this problem. This technique is called an escape sequence.
When an ODBC driver finds an escape clause in a SQL statement, it converts it to whatever format it needs to be to suit the particular database the statement will be sent to. The escape clause for a date is {d 'yyyy-mm-dd'}. So to create an ODBC SQL statement that will be properly interpreted by all databases, you could use the following:

SELECT * FROM Foo WHERE Date = {d '1997-08-20'}

The time/date-based escape sequences are as follows:

* Date: {d 'yyyy-mm-dd'}
* Time: {t 'hh:mm:ss'}
* Timestamp: {ts 'yyyy-mm-dd hh:mm:ss'}

If you need to use an outer join, you can use the outer join escape sequence:

{oj outer_join}

where the outer join consists of:

tablename {LEFT | RIGHT | FULL} OUTER JOIN {tablename | outer_join} ON search_condition

In Example 7.19, the outer join specifies all the fields from every record of the Machine table and every record from the Users table in which the field MachineName (from the Users table) matches Name (from the Machine table) as long as the field Processor (from the Machine table) is greater than 486.

Example 7.19 An outer join escape clause

SELECT * FROM {oj Machine LEFT OUTER JOIN Users ON Machine.Name = Users.MachineName} WHERE Machine.Processor > 486

Because this query uses an outer join as an escape sequence, you can be guaranteed that it will work on any ODBC driver that supports outer joins. Even if the particular driver uses a nonstandard syntax for outer joins, the ODBC driver will convert the escape sequence into the correct syntax before executing it. Before you actually make use of an outer join, you may want to check to make sure that your ODBC driver supports them. Example 7.20 shows a simple line that will check for outer join support. If the variable $CanUseOuterJoins is 1, outer joins are supported.

Example 7.20 Discovering whether an ODBC driver supports outer joins

# This assumes that $db is a valid Win32::ODBC object.
$CanUseOuterJoins = $db->GetInfo( $db->SQL_OJ_CAPABILITIES() ) & $db->SQL_OJ_FULL();

Scalar functions such as time and date, character, and numeric functions can be implemented by means of escape sequences:

{fn function}

where function is a scalar function supported by the ODBC driver. Example 7.21 compares the Date column (which is of a timestamp data type) with the current date.

Example 7.21 A SQL statement that uses a scalar function

SELECT * FROM Foo WHERE Date = {fn curdate()}

For a full list of scalar functions, refer to Appendix B. Before using a scalar function, you should see whether it is supported by the ODBC driver. You can use Example 7.22 to discover this. If the variable $IsSupported is 1, the curdate() scalar function is supported. The value returned from GetInfo( $db->SQL_TIMEDATE_FUNCTIONS() ) is a bitmask for all the supported time and date functions.

Example 7.22 Checking whether the ODBC driver supports the curdate() function

$IsSupported = $db->GetInfo( $db->SQL_TIMEDATE_FUNCTIONS() ) & $db->SQL_FN_TD_CURDATE();

Stored procedures can be called by means of escape sequences. The syntax is as follows:

{[?=]call procedurename[([parameter][,parameter]...)]}

If a return value is expected, you need to specify the preceding ?=, as in the following:

{? = call MyFunction('value')}

Otherwise, a call can be constructed like this:

{call MyFunction('value')}

Note Win32::ODBC does not support parameter binding, so using the '?' as a "passed in" parameter to a stored procedure is not supported and will probably result in an error. The only exception to this is the return value of a stored procedure. For example, if a SQL statement was submitted like this:

{? = call MyStoredProc( 'value1', value2 ) }

the value returned by the procedure would be stored in a dataset containing one row. Future versions of this extension will support parameter binding.
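These escape clauses lend themselves to being generated from Perl rather than typed by hand. As one illustration, the OdbcDate() helper below is hypothetical (it is not part of Win32::ODBC); it formats a Perl time value as a {d 'yyyy-mm-dd'} date escape clause:

```perl
# Hypothetical helper (not part of Win32::ODBC): format a Perl
# time value as an ODBC date escape clause, {d 'yyyy-mm-dd'}.
sub OdbcDate
{
    my( $Time ) = @_;
    my( $mday, $mon, $year ) = ( localtime( $Time ) )[ 3 .. 5 ];
    return sprintf( "{d '%04d-%02d-%02d'}", $year + 1900, $mon + 1, $mday );
}

my $Sql = "SELECT * FROM Foo WHERE Date = " . OdbcDate( time() );
print "$Sql\n";
```

Because the driver converts the escape clause into whatever format its database expects, the same generated statement works unchanged whether the data source is, say, Oracle or DB2.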
Even though it is not common to do so (because you can always use the default escape character), you can change the character used to escape a wildcard by using the LIKE predicate escape character sequence:

{escape 'character'}

where 'character' is any valid SQL character. Example 7.23 demonstrates this.

Example 7.23 Specifying an escape character

SELECT * FROM Foo WHERE Discount LIKE '28#%' {escape '#'}

This example will return all columns from the table Foo that have "28%" in the Discount column. Notice that by specifying the LIKE predicate escape character sequence, the pound character (#) is used to escape the wildcard. This way, the wildcard is deemed a regular character and not interpreted as a wildcard. Because the resulting condition will not include any wildcards, the condition could have been this:

WHERE Discount = '28%'

This particular example demonstrates the use of the LIKE predicate escape clause, however, so we are using the LIKE predicate.

The ODBC API does its work in three steps: allocating an environment handle, allocating a connection handle (and connecting to the data source), and allocating a statement handle for the query. Because all database interaction uses a statement handle, all three steps must be performed, in order, before any database interaction can occur. It is possible to have multiple statement handles per connection. This is how there can be many queries simultaneously. Win32::ODBC attempts to simplify this process. When you create a new Win32::ODBC object, all interaction with the data source goes through this object. If you need to connect to many data sources at once, you just create many Win32::ODBC objects. For those users who need to create multiple queries to one data source (the equivalent of having multiple statement handles), the capability to clone an object has been implemented. Cloned objects share the same ODBC connection but have separate statement handles.
The use of Win32::ODBC is very straightforward in most instances; there are basically five steps: loading the extension, connecting to a data source, submitting a query, processing the results, and closing the connection.

Because Win32::ODBC is a Perl extension, it must be loaded by means of the use command. Typically this is placed in the beginning of your script, as in Example 7.24.

Example 7.24 Loading the ODBC extension

use Win32::ODBC;

After your Perl application loads the Win32::ODBC extension, it needs to initiate a conversation with a database. This is performed by creating an ODBC connection object using the new command:

$db = new Win32::ODBC( $DSN [, $Option1, $Option2, ... ] );

The first parameter ($DSN) is the data source name. This parameter can be either a DSN or a DSN string. This is described later in this section. The optional additional parameters are connection options that can also be set by using the SetConnectOption() method, which is discussed in the "Connection Options" section. Some options, however, must be set before the actual connection is made to the ODBC driver and database. Therefore, you can specify them in the new command.

Example 7.25 assumes a few things: It assumes that you have already created a DSN called "MyDSN", and that it is configured correctly. The example also assumes that the current user has permissions to access the DSN. This is not so much a problem for Windows 95 users as it is for Windows NT users.

Example 7.25 Connecting to a DSN

$db = new Win32::ODBC( "MyDSN" );

If all went well, you will have an object, $db, which you will use later. Otherwise something went wrong, and the attempt to connect to the database failed. If something indeed did go wrong, the object will be empty. A simple test, as in Example 7.26, can be used to determine success or failure in connecting to the database. If the connection fails, the script will die printing out the error. Error processing is discussed later in the chapter in the section titled, strangely enough, "Error Processing."

Example 7.26 Testing if a connection to a database succeeded

if( !
$db) { die "Error connecting: " . Win32::ODBC::Error() . "\n"; } You can override your DSN configuration by specifying particular configuration elements in the DSN name when creating a new ODBC connection object. To do this, your DSN name must consist of driver-specific keywords and their new values in the following form: keyword=value; You can specify as many keywords as you like and in any order you like, but the "DSN" keyword must be included (and should be the first one). This keyword indicates which DSN you are using and the other keyword/value pairs that will override the DSN's configuration. Table 7.2 provides a list of standard keywords. A data source may allow additional keywords (for example, MS Access defines the keyword DBQ to represent the path to the database file) to be used. Table 7.2 Standard DSN keywords Keyword Description Note If ODBC 3.0 or higher is being used, there are two keywords that cannot be used together. They are DSN and FILEDSN. If they both appear in a connection string, only the first one is used. Additionally, ODBC 3.0 allows for a DSN-less connection string in which no DSN or FILEDSN keyword is used. A DRIVER keyword must be defined, however, in addition to any other keywords necessary to complete the connection. Suppose a DSN exists called "OZ" that points to an Access database. If you want to use that DSN but specify a userid to log on to the database (in Access, a keyword of UID) and password (keyword of PWD), you could use the line: $db = new Win32::ODBC( "DSN=OZ;UID=dorothy;PWD=noplacelikehome" ); A connection would be made to the "OZ" DSN, overriding the default userid and password. Other than the practical limitations of memory, you have no real limit as to how many database connections you can have. Although some tricks can help speed things up, you should conserve on memory use and make the most of an ODBC connection (but more on that later).
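To avoid typos when assembling the keyword/value connection strings described above by hand, the string can be built from a hash. This is a sketch of my own (the sub name is not part of Win32::ODBC); it joins the pairs with semicolons, always placing the DSN keyword first as recommended above:

```perl
# Hypothetical helper: build a "DSN=...;keyword=value;..."
# connection string from a hash of overrides. The DSN keyword
# is always emitted first, as the text above recommends.
sub build_dsn_string {
    my( $dsn, %attribs ) = @_;
    my $string = "DSN=$dsn";
    foreach my $keyword ( sort( keys( %attribs ) ) ) {
        $string .= ";$keyword=$attribs{$keyword}";
    }
    return $string;
}

my $Conn = build_dsn_string( "OZ", UID => "dorothy",
                                   PWD => "noplacelikehome" );
# $Conn is "DSN=OZ;PWD=noplacelikehome;UID=dorothy" and could be
# passed directly to new Win32::ODBC( $Conn ).
```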
After you have your ODBC object(s), you are ready to begin querying your database. Tip The Win32::ODBC extension's functions also map to another namespace called ODBC. It is not necessary to always use the full Win32::ODBC namespace when calling functions or constants. For example, the following two function calls are the same:
@Error = Win32::ODBC::Error(); @Error = ODBC::Error();
Likewise, when you create a new ODBC object, you can use either of the following: $db = new Win32::ODBC( "MyDSN" ); $db = new ODBC( "MyDSN" ); Anywhere that you may need to use the full namespace of the extension, you instead use the abbreviated version: ODBC. Now that you have an ODBC connection object, you can begin to interact with your database. You need to submit a SQL query of some sort. This is where you use the Sql() method: $db->Sql( $SqlStatement ); The first and only parameter is a SQL statement. This can be any valid SQL statement. The Sql() method will return a nonzero integer corresponding to a SQL error number if it is unsuccessful. Note It is very important to note that the Sql() method is the only method that returns a nonzero integer upon failure. The reason for this is to keep backward compatibility with the original version of Win32::ODBC, then called NT::ODBC, which was written by Dan DeMaggio. Originally, NT::ODBC used the return value of the sql() method (then the method was all lowercase) to indicate the error number. This technique of error checking has been made obsolete with the introduction of the Error() method. For the sake of backward compatibility, however, the return values have not changed. Suppose you have a database called OZ with a table called Characters. The table consists of the fields (also known as columns) in Table 7.3. Table 7.3 The table called Characters in a fictitious database called OZ Field Name Data Type Suppose also that you want to query the database and find out who is in OZ. Example 7.27 will connect to the database and submit the query.
Example 7.27 Submitting the SQL query 01. use Win32::ODBC; 02. if( ! ($db = new Win32::ODBC( "OZ" )) ) 03. { 04. die "Error connecting: " . Win32::ODBC::Error() . "\n"; 05. } 06. if( $db->Sql( "SELECT * FROM Characters" ) ) 07. { 08. print "Error submitting SQL statement: " . $db->Error() . "\n"; 09. } 10. else 11. { 12. ... process data ... 13. } The SQL statement "SELECT * FROM Characters" in line 6 will, if successful, return a dataset containing all fields from all rows of the database. By passing this statement into the Sql() method, you are requesting the database to perform this query. If the Sql() method returns a nonzero result, there was an error and the error is printed. If the query was successful (a return value of zero), you will need to move onward and process the results. After you have a dataset that has been prepared by a SQL statement, you are ready to process the data. The way this is achieved is by moving, row by row, through the dataset and extracting the data from columns in each row. The first thing you must do is tell the ODBC connection that you want to move to the next available row. Because you just performed the query, you are not even looking at a row yet; therefore, you need to move to the next available row, which in this case will be the first row. The method used to move from row to row is FetchRow(): ( $Result, @RowResults ) = $db->FetchRow( [$Row[, $Mode]] ); Before explaining the optional parameters, it is important to understand that the parameters are usually not used. They refer to the extended capabilities of the ODBC function SQLExtendedFetch(). This is explained later in the section titled "Advanced Features of Win32::ODBC." The first optional parameter ($Row) is the row number that you want to move to. For more details refer to the aforementioned section on advanced Win32::ODBC features. The second optional parameter ($Mode) is the mode in which the cursor will be moved.
If this parameter is not specified, SQL_FETCH_RELATIVE mode is assumed. For more details, refer to the aforementioned section on advanced Win32::ODBC features. The FetchRow() method will return a value of one if it successfully moved to the next row. If it returns a zero, it usually means that no more data is left (that is, you have reached the end of your dataset) or that an error has occurred. Another value is returned, but is typically not of any use unless you use the advanced features of FetchRow(). For more information, refer to the section titled "Advanced Row Fetching" later in the chapter. Example 7.28 shows a typical usage of the FetchRow() method. The loop continues to execute as long as you can advance to the next row. After the last row has been obtained, $db->FetchRow() returns a FALSE value that causes the loop to terminate. Example 7.28 Fetching rows while( $db->FetchRow() ) { ...process data... } Note As of version 970208, FetchRow() makes use of the SQLExtendedFetch() ODBC function. Unfortunately not all ODBC drivers support this and therefore fail when FetchRow() is called. If this happens, you can either use another ODBC driver, such as a newer version or one from another vendor, or you can use an older version of Win32::ODBC. Versions of the extension before 970208 use the regular SQLFetch() ODBC function. Future releases of Win32::ODBC will support both fetch methods and use whichever the ODBC driver supports. After a row has been fetched, the data will need to be retrieved from it. There are two methods for doing this: the Data() method and the DataHash() method. The Data() method returns an array of values: @Data = $db->Data( [$ColumnName1[, $ColumnName2 ... ]] ); A list of parameters can be passed in. Each parameter is a column name of which you are retrieving data. The Data() method returns an array of values corresponding, in order, to the column names that were passed into the method.
If nothing is passed into the method, all column values are returned in the order that they appear in the row. Example 7.29 shows how the Data() method can be used. Example 7.29 Using the Data() method
if( $db->FetchRow() ) { @Data = $db->Data( "LastName", "FirstName" ); print "First name is $Data[1]\n"; print "Last name is $Data[0]\n"; }
Notice how the first element in the @Data array ($Data[0]) is the value of the first column name specified. The order of the column names passed in determines the order they are stored in the array. The second and more practical way to retrieve data is to use the DataHash() method. The DataHash() method is the preferred method when retrieving data because it associates the column name with the column's data. The DataHash() method returns either an undef if the method failed or a hash with column names as keys and the hash's values consisting of the column's data: %Data = $db->DataHash( [$ColumnName1[, $ColumnName2 ... ]] ); A list of parameters can be passed in. Each parameter is a column name of which you are retrieving data. If no parameters are passed in, the data for all columns will be retrieved. If you were to query the table previously described in Table 7.3 using the code in Example 7.30, you may get output that looks like this: 1) Dorothy is from Kansas and wants to go home. 2) The Scarecrow is from Oz and wants a brain. 3) The Wicked Witch is from The West and wants the ruby slippers. This output would continue until all rows have been processed. Example 7.30 Extracting data using the DataHash() method 01. use Win32::ODBC; 02. if( ! ($db = new Win32::ODBC( "OZ" )) ) 03. { 04. die "Error connecting: " . Win32::ODBC::Error() . "\n"; 05. } 06. if( ! $db->Sql( "SELECT * FROM Characters" ) ) 07. { 08. while( $db->FetchRow() ) 09. { 10. 11. my(%Data) = $db->DataHash(); 12. $iRow++; 13. print "$iRow) $Data{Name} is from $Data{Origin} and ", "wants $Data{Goal}.\n"; 14. } 15. } 16.
$db->Close(); Tip When using the DataHash() method, it is best that you undef the hash used to hold the data before the method call. Because the DataHash() method is typically called from within a while loop that fetches a row and retrieves data, it is possible that the hash may retain values from a previous call to DataHash(). Up to now, you have opened the database, submitted a query, and retrieved and processed the results. Now you need to finish by closing the database. The proper way to perform this is by calling the Close() method. This method tells the ODBC Manager to properly shut down the ODBC connection. The ODBC Manager will, in turn, tell the driver to clean up after itself (removing temporary files, closing network connections, flushing buffers, and so on) and the ODBC connection object will be destroyed so that it cannot be used any longer. The syntax for the Close() method is this: $db->Close(); The Close() method has no return value. Example 7.30 in the preceding section shows a full working example of how you may use the Win32::ODBC module, including the Close() method. Win32::ODBC supports a rich set of features, most of which are never used (yet still exist). Some of these require knowledge of ODBC that is beyond the scope of this book. You can find several books that do discuss this topic. For almost every method and function in Win32::ODBC, a set of constants are needed. These constants represent numeric values that may change as the ODBC API changes over time, so it is important that you use the constant names and not the values they represent. One of the most questioned aspects of the Win32::ODBC extension is how to make use of the constants.
Only a small group of constants are actually exported from the ODBC module, so to use most of the constants you need to specify a namespace function such as: Win32::ODBC::SQL_COLUMN_TABLE_NAME() Or, if you have an ODBC connection object, you can access the constant value as if it were a member of the object, as in: $db->SQL_DATA_SOURCE_NAME() Because Win32::ODBC creates a synonymous ODBC namespace and maps it to Win32::ODBC, you could use: ODBC::SQL_CURSOR_COMMIT_BEHAVIOR() Notice that the preceding example just uses the ODBC namespace rather than the Win32::ODBC namespace. Both are valid, but the ODBC form is a bit shorter. The reason why all constants are not exported into the main namespace is because the ODBC API defines more than 650 constants, each of which is important to have. The decision was made to not export all the constants because it would bloat memory use and clutter the main namespace with an entire list of constants that will most likely not be used. You could always edit the ODBC.pm file and export constants that you would like to export; but then again, it is not so difficult to just make use of one of the formats listed earlier. The ODBC API supports metadata functions such as cataloging. Win32::ODBC supports access to such information with two methods: $db->Catalog( $Qualifier, $Owner, $Name, $Type ); $db->TableList( $Qualifier, $Owner, $Name, $Type ); Both of these are really the same method, so they can be used interchangeably. The only difference is in how the results are obtained. The first parameter ($Qualifier) represents the source the database will use. In Access, for example, the $Qualifier would be the database file (such as c:\data\mydatabase.mdb); whereas in MS SQL Server, it would be the database name. The second parameter ($Owner) is the owner of the table. Some database engines can put security on tables either granting or denying access to particular users.
This value would indicate a particular owner of a table-that is to say, the user who either created the table or to whom the table has been given. The third parameter ($Name) is the name of the table. The fourth parameter ($Type) is the table type. This can be any number of database-specific types or one of the following values: "TABLE", "VIEW", "SYSTEM TABLE", "GLOBAL TEMPORARY", "LOCAL TEMPORARY", "ALIAS", or "SYNONYM". A call to the Catalog() or TableList() method can include search wildcards for any of the parameters. Passing in "USER%" for the third parameter will result in retrieving all the tables with names that begin with USER. The difference between these two methods is that the Catalog() method returns a result set that you need to process with FetchRow() and Data() or DataHash(). On the other hand, TableList() returns an array of table names that meet the criteria you specified. Basically, TableList() is a quick way of getting a list of tables, and Catalog() is a way of getting much more information. If the Catalog() method is successful, it returns a TRUE and results in a dataset with five columns: TABLE_QUALIFIER, TABLE_OWNER, TABLE_NAME, TABLE_TYPE, and REMARKS. There may also be additional columns that are specific to the data source. Each row will represent a different table. You can use the normal FetchRow() and DataHash() methods to retrieve this data. If the method fails, a FALSE is returned. Note The resulting dataset generated by a call to the Catalog() method will be different if ODBC 3.0 or higher is used. In this case, TABLE_QUALIFIER becomes TABLE_CAT and TABLE_OWNER becomes TABLE_SCHEM. This is due to a change in the ODBC specification for version 3.0. If the TableList() method is successful, it will return an array of table names. If all parameters are empty strings except the fourth parameter (table type), which is only a percent sign (%), the resulting dataset will contain all valid table types. You can use this when you need to discover which table types a data source can use.
If the $Owner parameter is a single percent sign and the qualifier and name parameters are empty strings, the resulting dataset will contain the list of valid owners that the data source recognizes. If the $Qualifier is only a percent sign and the $Owner and $Name parameters are empty strings, the resulting dataset will contain a list of valid qualifiers. Example 7.31 describes how you can use both of these methods. Example 7.31 Using the TableList() and Catalog() methods 01. use Win32::ODBC; 02. $db = new ODBC("MyDSN" ) || die "Error connecting: " . ODBC::Error(); 03. $TableType = "'TABLE','VIEW','SYSTEM TABLE'," . "'GLOBAL TEMPORARY','LOCAL TEMPORARY'," . "'ALIAS','SYNONYM'"; 04. @Tables = $db->TableList(); 05. print "List of tables:\n"; 06. map {$iCount++; print "$iCount) $_\n";} @Tables; 07. 08. if( $db->Catalog( "", "", "%", $TableType ) ) 09. { 10. while( $db->FetchRow() ) 11. { 12. my( %Data); 13. %Data = $db->DataHash(); 14. print "$Data{TABLE_NAME}\t$Data{TABLE_TYPE}\n"; 15. } 16. } 17. $db->Close(); 18. Each column (or field-whichever way you prefer to say it) can be of a different data type. One column may be a text data type (for a user's name, for example) and another may be a numeric data type representing a user's age. At times, a programmer may need to learn about a column-things such as what data type the column is, whether he can conduct a search on that column, or whether the column is read-only. If the programmer created the database table, chances are that he already knows this information; if the table came second-hand, the programmer may not know such information. This is where the ColAttributes() method comes into play: $db->ColAttributes( $Attribute, [ @ColumnNames ]); The first parameter ($Attribute) is the attribute, a numeric value representing attributes of a column. Appendix B contains a list of valid constants that can be used. Additional parameters may be included-specifically, the names of columns you want to query. 
If no column names are specified, all column names will be queried. The output of the ColAttributes() method is a hash consisting of column names as keys and the attribute for the column as the key's value. The code in Example 7.32 will print out all the data types in a table called "Foo" from the DSN called "MyDSN". In line 14, the column's data types are retrieved with a call to the ColAttributes() method. The resulting hash is passed into a subroutine, DumpAttribs(), which prints out each column's data type. Example 7.32 Printing out a table's column data types 01. use Win32::ODBC; 02. 03. $DSN = "MyDSN"; 04. $Table = "Foo"; 05. $db = new ODBC($DSN) || die "Error connecting: " . ODBC::Error(); 06. 07. if( ! $db->Sql("SELECT * FROM $Table") ) 08. { 09. if( $db->FetchRow() ) 10. { 11. @Fields = $db->FieldNames(); 12. foreach $Field (@Fields) 13. { 14. %Attrib = $db->ColAttributes( $db->SQL_COLUMN_TYPE_NAME(), $Field); 15. DumpAttribs( %Attrib ); 16. } 17. } 18. else 19. { 20. print "Fetch error: " . $db->Error(); 21. } 22. } 23. else 24. { 25. print "SQL Error: " . $db->Error(); 26. } 27. 28. sub DumpAttribs 29. { 30. my( %Attributes ) = @_; 31. my( $ColumnName ); 32. foreach $ColumnName (sort (keys ( %Attributes ) ) ) 33. { 34. print "\t$ColumnName = $Attributes{$ColumnName}\n"; 35. } 36. } Most administrators will create and manage a Data Source Name (DSN) by using the nifty GUI interface such as the ODBC Administrator program or the Control Panel ODBC applet. Both of these (actually they are the same application) do a tremendous job at managing DSNs. Be aware, however, that at times you may need to programmatically manage DSNs. An administrator may want to write a Web-based CGI script enabling the management of DSNs, for example. Win32::ODBC uses the ConfigDSN() function to do just this: Win32::ODBC::ConfigDSN( $Action, $Driver, $Attribute1 [, $Attribute2, ...] ); The first parameter ($Action) is the action specifier. 
The value of this parameter will determine what action will be taken. The valid actions and their values are as follows:
ODBC_ADD_DSN (0x01). Adds a new DSN.
ODBC_MODIFY_DSN (0x02). Modifies an existing DSN.
ODBC_REMOVE_DSN (0x03). Removes an existing DSN.
ODBC_ADD_SYS_DSN (0x04). Adds a new System DSN.
ODBC_MODIFY_SYS_DSN (0x05). Modifies an existing System DSN.
ODBC_REMOVE_SYS_DSN (0x06). Removes an existing System DSN.
In some versions of Win32::ODBC, the system DSN constants are not exported, so their values can be used instead. The second parameter ($Driver) is the ODBC driver name which will be used. The driver must be one of the ODBC drivers that are installed on the computer. You can retrieve a list of available drivers by using either the DataSources() or the Drivers() function. The remaining parameters are the list of attributes. These may differ from one ODBC driver to the next, and it is up to the programmer to know which attributes must be used for a particular DSN. Each attribute is constructed in the following format: "AttributeName=AttributeValue" These examples were taken from a DSN using the Microsoft Access ODBC driver: "DSN=MyDSN" "UID=Cow" "PWD=Moo" "Description=My little bitty Data Source" The "DSN" attribute is one that all ODBC drivers share. This attribute must be in the list of attributes you provide; otherwise, ODBC will not know what to call your DSN or, in the case of modifying and removing, which DSN you alter. It is wise to always include the "DSN" attribute as the first attribute in your list. When you are adding or removing, you need only to specify the "DSN" attribute; others are not necessary. In the case of adding, any other attribute can be added later by modifying the DSN. When you are modifying, you must include the "DSN" attribute so that ODBC will know which DSN you are modifying. Any additional attributes can either be added to the DSN or replace any attributes that already exist with the same name.
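The "AttributeName=AttributeValue" list described above can also be generated from a hash. This is a sketch of my own (the helper is not part of Win32::ODBC); it flattens the attributes into the strings ConfigDSN() takes, with the "DSN" attribute first as advised above:

```perl
# Hypothetical helper: turn a hash of DSN attributes into the
# list of "AttributeName=AttributeValue" strings described in
# the text, with the "DSN" attribute always first.
sub attrib_list {
    my( $dsn, %attribs ) = @_;
    my @list = ( "DSN=$dsn" );
    foreach my $keyword ( sort( keys( %attribs ) ) ) {
        push( @list, "$keyword=$attribs{$keyword}" );
    }
    return @list;
}

my @Attribs = attrib_list( "MyDSN",
                           Description => "My little bitty Data Source",
                           UID         => "Cow",
                           PWD         => "Moo" );
# @Attribs could then be passed on, for example:
#   Win32::ODBC::ConfigDSN( ODBC_ADD_DSN, $Driver, @Attribs );
```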
Some ODBC drivers require you to specify additional attributes (in addition to the "DSN" attribute) when using ConfigDSN(). When adding a new DSN that uses the Microsoft Access driver, for example, you must include the following database qualifier attribute: "DBQ=C:\\SomeDir\\MyDatabase.mdb" In Example 7.33, the ConfigDSN() function is used three times. The first time (line 10), ConfigDSN() creates a new DSN. The second time (line 18) ConfigDSN() modifies the new DSN by adding the password (PWD) attribute and changing the user (UID) attribute. The third call to the ConfigDSN() function (line 22) removes the DSN that was just created. This code is obviously not very useful because it creates and then removes a DSN, but it shows how to use the ConfigDSN() function. Example 7.33 Adding and modifying a DSN 01. use Win32::ODBC; 02. 03. $DSN = "My DSN Name"; 04. $User = "administrator"; 05. $Password = "adminpassword"; 06. $Dir = "C:\\Database"; 07. $DBase = "mydata.mdb"; 08. $Driver = "Microsoft Access Driver (*.mdb)"; 09. 10. if( Win32::ODBC::ConfigDSN( ODBC_ADD_DSN, 11. $Driver, 12. "DSN=$DSN", 13. "Description=A Test DSN", 14. "DBQ=$Dir\\$DBase", 15. "DEFAULTDIR=$Dir", 16. "UID=" ) ) 17. { 18. Win32::ODBC::ConfigDSN( ODBC_MODIFY_DSN, 19. $Driver, "DSN=$DSN", 20. "UID=$User", 21. "PWD=$Password"); 22. Win32::ODBC::ConfigDSN( ODBC_REMOVE_DSN, 23. $Driver, 24. "DSN=$DSN" ); 25. } Line 8 assigns the $Driver variable with the name of the ODBC driver to be used. This value should come from a value obtained with a call to Win32::ODBC::Drivers(). The reason for this is that the value could change based on localization. A German version of ODBC, for example, would require line 8 to be: $Driver = "Microsoft Access Treiber (*.mdb)"; The value returned by the Drivers() function is obtained from the ODBC driver directly, so the value will be correct for the locale. The ConfigDSN() function returns a TRUE if it is successful; otherwise, it returns a FALSE.
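The Drivers() function just mentioned returns, for each driver, a string of semicolon-delimited keyword/value pairs (described further later in this chapter). As a sketch of my own (the sub name is not part of Win32::ODBC, and the sample string is made up for illustration), such a string can be split into a hash so that individual attributes become simple lookups:

```perl
# Hypothetical helper: split a semicolon-delimited attribute
# string such as "Attribute1=Value1;Attribute2=Value2" into a
# hash keyed on the attribute names.
sub parse_attribs {
    my( $string ) = @_;
    my( %attribs );
    foreach my $pair ( split( /;/, $string ) ) {
        # Limit the split to 2 fields so values containing '='
        # are preserved intact.
        my( $keyword, $value ) = split( /=/, $pair, 2 );
        $attribs{$keyword} = $value;
    }
    return %attribs;
}

my %Attribs = parse_attribs( "FileExtns=*.mdb;APILevel=1" );
# $Attribs{FileExtns} is now "*.mdb", so a check like the
# FileExtns test in Example 7.35 becomes a plain hash lookup.
```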
Tip If you do not know what attributes to use in a call to ConfigDSN(), you can always cheat! You just use the ODBC administrator program or the Control Panel's ODBC applet and create a temporary DSN. After you have completed this, you need to run the Registry Editor (regedit.exe or regedt32.exe). If you created a system DSN, open this key: HKEY_LOCAL_MACHINE\Software\ODBC\Your_DSN_Name If you created a user DSN, open this key: HKEY_CURRENT_USER\Software\ODBC\Your_DSN_Name The value names under these keys are the attributes that you specify in the ConfigDSN() function. After a DSN has been created, you may want to review how it is configured. This is done by using GetDSN(): Win32::ODBC::GetDSN( $DSN ); $db->GetDSN(); GetDSN() is implemented as both a function and a method. When called as a function, you must pass in a parameter that is the name of a DSN whose configuration will be retrieved. When used as a method, nothing is passed in to GetDSN(). The DSN that the ODBC connection object represents will be retrieved. A hash is returned consisting of keys that are the DSN's attribute keywords. Each key's associated value is the DSN's attribute value. These key/value pairs are the same as those used in the ConfigDSN() function. It is possible to retrieve a list of available DSNs by using the DataSources() function: Win32::ODBC::DataSources(); DataSources() returns a hash consisting of data source names as keys and ODBC drivers as values. The ODBC drivers represented in the hash's values are in a descriptive format that is used as the second parameter in a call to ConfigDSN(). Example 7.34 illustrates how to retrieve the list of available DSNs and how to use ConfigDSN() to remove these DSNs. Example 7.34 Removing all DSNs 01. use Win32::ODBC; 02. 03. if( %DSNList = Win32::ODBC::DataSources() ) 04. { 05. foreach $Name ( keys( %DSNList ) ) 06. { 07. print "$Name = '$DSNList{$Name}'\n"; 08. if( ! Win32::ODBC::ConfigDSN( ODBC_REMOVE_DSN, $DSNList{$Name}, "DSN=$Name" ) ) 09.
{ 10. # If we were unable to remove the 11. # DSN maybe it is a system DSN... 12. Win32::ODBC::ConfigDSN( ODBC_REMOVE_SYS_DSN, $DSNList{$Name}, "DSN=$Name" ); 13. } 14. } 15. } Notice how Example 7.34 uses the names and drivers that make up the hash returned by the DataSources() function. These values are used as the DSN name and driver in the calls to ConfigDSN(). Drivers() is yet another DSN-related function: Win32::ODBC::Drivers(); This function returns a hash consisting of available ODBC drivers and any attributes related to the driver. Note that these attributes are not necessarily the same as the ones you provide in ConfigDSN(). The returned hash consists of keys that represent the ODBC driver name (in descriptive format), and the key's associated value contains a list of ODBC driver attributes separated by semicolons (;) such as this: "Attribute1=Value1;Attribute2=Value2;..." These attributes are really not that useful for the common programmer, but may be of use if you are programming ODBC drivers or if you need to make sure that a particular driver is configured correctly. Note The attributes returned by the Drivers() function (which are the ODBC driver's configuration attributes) are not the same type of attributes used as ConfigDSN() attributes (which are an ODBC driver's DSN attributes). Jane's boss came running into her office and told her that the junior administrator did something really devastating to one of her Web servers. Somehow, he has managed to corrupt the D: drive, the one where all the Access database files were kept. Jane realized that she could not take the server down to reinstall a drive until the weekend. She also began kicking herself for not having already installed the RAID subsystem. To correct the problem, she had someone restore the database files from a tape backup on to the server's E: drive.
She figured that all she had to do was change the ODBC DSNs to point to their respective databases on the E: drive rather than the D: drive. No problem-until she realized that there were hundreds of DSNs! So, Jane sat down and wrote the Perl script in Example 7.35. Example 7.35 Changing the database paths for all MS Access DSNs 01. use Win32::ODBC; 02. $OldDrive = "d:"; 03. $NewDrive = "e:"; 04. # We are looking for Access databases 05. $Driver = GetDriver( ".mdb" ) || Error( "finding the ODBC driver" ); 06. 07. %DSNList = Win32::ODBC::DataSources() || Error( "retrieving list of DSNs" ); 08. foreach $DSN ( keys( %DSNList ) ) 09. { 10. next if( $DSNList{$DSN} ne $Driver ); 11. my( %Config ) = Win32::ODBC::GetDSN( $DSN ); 12. if( $Config{DBQ} =~ s/^$OldDrive/$NewDrive/i ) 13. { 14. if( ! Win32::ODBC::ConfigDSN( ODBC_MODIFY_DSN, 15. $Driver, 16. "DSN=$DSN", 17. "DBQ=$Config{DBQ}" ) ) 18. { 19. # If the previous attempt to modify the DSN 20. # failed then try again but using a system DSN. 21. Win32::ODBC::ConfigDSN( ODBC_MODIFY_SYS_DSN, 22. $Driver, 23. "DSN=$DSN", 24. "DBQ=$Config{DBQ}" ); 25. } 26. } 27. } 28. 29. sub Error 30. { 31. my( $Reason ) = @_; 32. die "Error $Reason: " . Win32::ODBC::Error() . "\n"; 33. } 34. 35. sub GetDriver 36. { 37. my( $Extension ) = @_; 38. my( %Sources, $Driver, $Description ); 39. $Extension =~ s/([\.\\\$])/\\$1/gs; 40. if( %Sources = Win32::ODBC::Drivers() ) 41. { 42. foreach $Driver ( keys( %Sources ) ) 43. { 44. if( $Sources{$Driver} =~ /FileExtns=[^;]*$Extension/i ) 45. { 46. $Description = $Driver; 47. last; 48. } 49. } 50. } 51. return $Description; 52. } The script in Example 7.35 first seeks the driver description for the ODBC driver that recognizes databases with an .mdb extension. This is performed by calling a subroutine GetDriver() on line 5.
The subroutine makes a call to Win32::ODBC::Drivers() to get a list of all installed drivers, and then tests them looking for one which has a keyword FileExtns that matches the specified file extension (lines 40-50). Notice that line 39 prepends any period, backslash, and dollar sign with an escaping backslash. This is so that when the $Extension variable is used in the regular expression (line 44), the characters are not interpreted. After the script has the driver description, it retrieves a list of available DSNs (line 7) and compares their drivers with the target one. If the drivers match, the DSN's configuration is obtained (line 11) and the database file is compared to see whether the database is on the old D: drive. If so, the drive is changed to E:. The DSN is then modified to use the new path by first modifying it as a user DSN (line 14); if that fails, it then tries it as a system DSN (line 21). By running this, Jane could quickly fix all the DSNs on her server in just a few minutes and with no errors. If she had manually altered the DSNs through a graphical ODBC administrator program, it would have taken much longer and would be prone to mistakes. Jane also added a task to her calendar reminding her to install the server's RAID subsystem. The ODBC API has a multitude of functions, many of which the Win32::ODBC extension exposes to a Perl script. A typical user will never use most of these, but for those who are migrating data or need to perform complex queries and cursor manipulation, among other tasks, these functions are required. This section discusses these functions and methods. It is possible, for instance, to retrieve an array of column names using the FieldNames() method, although this is not the most useful feature because it only reports column names and nothing else. The syntax for the FieldNames() method is as follows: @List = $db->FieldNames(); The returned array consists of the column names of the result set. 
There is no guarantee to the order in which the names are listed. Connections to data sources quite often have attributes that govern the nature of the connection. If a connection to a data source occurs over a network, for example, it may allow the network packet size to be changed. Likewise, logon timeout values, ODBC tracing, and transaction autocommit modes are considered to be connection attributes. These attributes can be modified and examined by two Win32::ODBC methods: GetConnectOption() and SetConnectOption(): $db->GetConnectOption( $Option ); $db->SetConnectOption( $Option, $Value ); For both methods, the first parameter ($Option) is the connection option as defined by the ODBC API. Appendix B contains a list of connect options. The second parameter in SetConnectOption() ($Value) indicates the value to set for the specified option. The return value for GetConnectOption() is the value for the specified option. Warning Be careful with this, because GetConnectOption() does not report any value to indicate an error-if the method fails, it will still return some value that may be invalid. The return value for SetConnectOption() is either TRUE if the option was successfully set or FALSE if the method failed to set the option. In Example 7.36, the ODBC tracing state is queried in line 4. (Tracing is where all ODBC API calls are copied into a text file so that you can later see what your ODBC driver was doing.) If ODBC tracing is already active, the current trace file is retrieved (in line 12) and is printed. Otherwise the trace file is set (line 6) and tracing is turned on (line 7). Example 7.36 Using the GetConnectOption() and SetConnectOption() methods 01. use Win32::ODBC; 02. $db = new Win32::ODBC( "MyDSN" ) || die "Error: " . Win32::ODBC::Error(); 03. $TraceFile = "C:\\TEMP\\TRACE.SQL"; 04. if($db->GetConnectOption( $db->SQL_OPT_TRACE() ) == $db->SQL_OPT_TRACE_OFF() ) 05. { 06. $db->SetConnectOption( $db->SQL_OPT_TRACEFILE(), $TraceFile ); 07.
$db->SetConnectOption( $db->SQL_OPT_TRACE(), $db->SQL_OPT_TRACE_ON() ); 08. print "ODBC tracing is now active.\n"; 09. } 10. else 11. { 12. $TraceFile = $db->GetConnectOption( $db->SQL_OPT_TRACEFILE() ); 13. print "Tracing is already active.\n"; 14. } 15. print "The ODBC tracefile is '$TraceFile'.\n"; 16. 17. ...continue with your code... 18. $db->Close(); Note The ODBC API specifies so many options that are quite useful (and in some cases necessary) for a user but are just far beyond the scope of this book. Appendix B lists most of these options and includes a brief description of them. Some of these descriptions are quite technical so that ODBC programmers can understand their impact. For those who need more information on the ODBC options and their values or for those who are just curious, it is highly recommended to consult a good book on the ODBC API. A couple of recommended books are Microsoft's "ODBC SDK and Programmer Reference" and Kyle Geiger's "Inside ODBC." Just as an ODBC connection has attributes that can be queried and modified, so can an ODBC statement. When a script creates a Win32::ODBC object, it has both connection and statement handles. This means that you can not only manage the connection attributes, but you can also manage statement attributes such as the cursor type, query timeout, and the maximum number of rows in a dataset from a query. These statement attributes are managed by the GetStmtOption() and SetStmtOption() methods: $db->GetStmtOption( $Option ); $db->SetStmtOption( $Option, $Value ); The first parameter is the statement option as defined by the ODBC API. Appendix B contains a list of these statement options. The second parameter in SetStmtOption() indicates the value to set for the specified option. The return value for GetStmtOption() is the value for the specified option. 
Warning: Be careful with this, because GetStmtOption() does not report any value to indicate an error; if the method fails, it will still return some value that may be invalid.

The return value for SetStmtOption() is either TRUE if the option was successfully set or FALSE if the method failed to set the option. In Example 7.37, the SQL_ROWSET_SIZE statement option is set to 100. This will retrieve a rowset of no more than 100 rows every time FetchRow() is called. Because the rowset size is greater than 1, the actual row processed after the FetchRow() will increase by 100. This is why we are using GetStmtOption() with the SQL_ROW_NUMBER option to determine the current row number.

Example 7.37 Using the GetStmtOption() and SetStmtOption() methods

01. use Win32::ODBC;
02. $db = new Win32::ODBC( "MyDSN" ) || die "Error: " . Win32::ODBC::Error();
03. if( ! $db->Sql( "SELECT * FROM Foo" ) )
04. {
05.     $db->SetStmtOption( $db->SQL_ROWSET_SIZE(), 100 );
06.     while( $db->FetchRow( 1, SQL_FETCH_NEXT ) )
07.     {
08.         my( %Data ) = $db->DataHash();
09.         ...process data...
10.         if( $Row = $db->GetStmtOption( $db->SQL_ROW_NUMBER() ) )
11.         {
12.             print "Processed row $Row.\n";
13.         }
14.         else
15.         {
16.             print "Unable to determine row number.\n";
17.         }
18.     }
19. }
20. $db->Close();

There is one additional method that will retrieve information pertaining to the ODBC driver: GetInfo(). The information retrieved by GetInfo() is read-only and cannot be set.

$db->GetInfo( $Option );

The only parameter ($Option) is a value that represents the particular information that is desired. Appendix B provides a list of values. Example 7.38 shows how GetInfo() can be used to determine whether the data source is read-only.

Example 7.38 Using the GetInfo() method

01. use Win32::ODBC;
02. $db = new Win32::ODBC( "MyDSN" ) || die "Error: " . Win32::ODBC::Error();
03. if( ! $db->Sql( "SELECT * FROM Foo" ) )
04. {
05.     $Result = $db->GetInfo( $db->SQL_DATA_SOURCE_READ_ONLY() );
06.     if( $Result =~ /y/i )
07.     {
08.
print "OOOPS! This data source is read only!\n";
09.         $db->Close();
10.         exit;
11.     }
12.     ...process data...
13. }
14. $db->Close();

Not all ODBC drivers support all ODBC functions. This can cause problems for script writers. To check whether a connection supports a particular ODBC function, you can use the GetFunctions() method:

%Functions = $db->GetFunctions( [$Function1[, $Function2, ...]] );

The optional parameters are constants that represent ODBC functions such as SQL_API_SQLTRANSACT (which represents the ODBC API function SQLTransact()). Any number of functions can be passed in as parameters. If no parameters are passed in, all functions are checked. A hash is returned consisting of keys that represent ODBC API functions; each key's value is either TRUE or FALSE. If parameters were passed into the method, the resulting hash consists of only the keys that represent the parameters passed in. In Example 7.39, the GetFunctions() method is used to learn whether the ODBC driver supports transaction handling by means of the ODBC API's SQLTransact() function. If it does, the Perl script can call $db->Transact().

Example 7.39 Using the GetFunctions() method

use Win32::ODBC;
$db = new Win32::ODBC( "MyDSN" ) || die "Error: " . Win32::ODBC::Error();
%Functions = $db->GetFunctions();
if( $Functions{$db->SQL_API_SQLTRANSACT()} )
{
    print "Hey, this ODBC driver supports the SQLTransact() function!\n";
}

When a query returns a result set, a buffer must be created for each column. Generally Win32::ODBC can determine the size of the buffer based on the column data type. Some data types, however, do not describe the size of their data (such as the memo data type found in MS Access). In these cases, Win32::ODBC allocates a buffer of a predetermined size. This size is the maximum size that a buffer can be.
This limit, however, can be both queried and changed with the GetMaxBufSize() and SetMaxBufSize() methods:

$db->GetMaxBufSize();
$db->SetMaxBufSize( $Size );

The SetMaxBufSize() method takes one parameter ($Size), which represents the size in bytes that the limit of a buffer can be. Both functions return the number of bytes that the current buffer size is limited to.

Because each database has its own way of handling data, it can become quite difficult to know how to manage a particular data type. One database may require all text literals to be enclosed by single quotation marks, whereas another database may require double quotation marks. Yet the MONEY data type may require a prefix of some character such as the dollar sign ($) but nothing to terminate the literal value. Because a script using ODBC must be able to interact with any kind of database, it is important to be able to query the database to learn this information. This is where GetTypeInfo() comes in:

$db->GetTypeInfo( $DataType );

The first, and only, parameter is a data type. This value can be any one of the following data types:

SQL_ALL_TYPES        SQL_TYPE_DATE
SQL_CHAR             SQL_TYPE_TIME
SQL_VARCHAR          SQL_TYPE_TIMESTAMP
SQL_LONGVARCHAR      SQL_INTERVAL_MONTH
SQL_DECIMAL          SQL_INTERVAL_YEAR
SQL_NUMERIC          SQL_INTERVAL_YEAR_TO_MONTH
SQL_SMALLINT         SQL_INTERVAL_DAY
SQL_INTEGER          SQL_INTERVAL_HOUR
SQL_REAL             SQL_INTERVAL_MINUTE
SQL_FLOAT            SQL_INTERVAL_SECOND
SQL_DOUBLE           SQL_INTERVAL_DAY_TO_HOUR
SQL_BIT              SQL_INTERVAL_DAY_TO_MINUTE
SQL_BIGINT           SQL_INTERVAL_DAY_TO_SECOND
SQL_BINARY           SQL_INTERVAL_HOUR_TO_MINUTE
SQL_VARBINARY        SQL_INTERVAL_HOUR_TO_SECOND
SQL_LONGVARBINARY    SQL_INTERVAL_MINUTE_TO_SECOND

If successful, the GetTypeInfo() method returns TRUE and a dataset that describes the data type passed is returned; otherwise, the method returns FALSE. Any resulting dataset will contain one row representing the specified data type. Use the FetchRow() and DataHash() methods to walk through the resulting dataset.
Table 7.4 describes the columns of the dataset. If SQL_ALL_TYPES is specified as the data type, the dataset will contain a row for every data type that the data source is aware of. Example 7.40 illustrates the use of the GetTypeInfo() method. Line 8 assigns the value SQL_ALL_TYPES to the variable $Type. This could be any valid data type constant from the preceding list. Notice the hack used to obtain that value: referring to it as a method on $db. This is necessary because the data type constants are not exported from the ODBC.PM file (unless you edit ODBC.PM and add the constants to the EXPORT list).

Table 7.4 Dataset returned by the GetTypeInfo() method

Column Name    Description

Example 7.40 Determining how an SQL literal is handled using GetTypeInfo()

01. use Win32::ODBC;
02. $DSN = "My DSN" unless $DSN = $ARGV[0];
03. if( ! ( $db = new Win32::ODBC( $DSN ) ) )
04. {
05.     print "Error: Could not connect to \"$DSN\".\n" . Win32::ODBC::Error();
06.     exit;
07. }
08. $Type = $db->SQL_ALL_TYPES();
09. if( $db->GetTypeInfo( $Type ) )
10. {
11.     my %Data;
12.     if( $db->FetchRow() )
13.     {
14.         my( %Data ) = $db->DataHash();
15.         print "$Data{TYPE_NAME} data is referred to as: " . "$Data{LITERAL_PREFIX}data$Data{LITERAL_SUFFIX}\n";
16.     }else{
17.         $Data{TYPE_NAME} = "---not supported---";
18.     }
19. }else{
20.     print "Can't retrieve type information: " . $db->Error() . "\n";
21. }

If supported by your ODBC driver, you can submit multiple queries in one call to Sql(). If the query is successful, you would fetch and process the data from the first SQL statement, then call the MoreResults() method, and repeat the process of fetching and processing the data:

$db->MoreResults();

The return value is either TRUE, indicating that another result set is pending, or FALSE, indicating that no more result sets are available. Example 7.41 demonstrates using MoreResults().

Example 7.41 Processing multiple result sets with MoreResults()

01. use Win32::ODBC;
02.
$db = new Win32::ODBC( "MyDSN" ) || die "Error: " . Win32::ODBC::Error();
03. $Query = "SELECT * FROM Foo SELECT * FROM Bar";
04. if( ! $db->Sql( $Query ) )
05. {
06.     do
07.     {
08.         while( $db->FetchRow() )
09.         {
10.             my( %Data ) = $db->DataHash();
11.             ...process data...
12.         }
13.     } while( $db->MoreResults );
14. }
15. $db->Close();

By default, most ODBC drivers are in autocommit mode. This means that when you perform an INSERT, UPDATE, DELETE, or some other query that modifies the database's data, the changes are made immediately; they are automatically committed. For some situations, however, this is unacceptable. Consider a CGI-based shopping cart script. Suppose this script will submit an order by adding information to a table in a database. Then it updates another table which describes the particular customer (the time he placed the order, what that order number was, and so forth). This is a case where autocommit mode can be a problem. Suppose that for some odd reason the second update fails. The script will tell the user that the order failed, but the first update has already been submitted, so the order is now queued to be processed. This situation can be avoided by setting autocommit mode to off. When autocommit is off, you can submit as many modifications to the database as you want without having the modifications actually committed until you explicitly tell it to do so with the Transact() method:

$db->Transact( $Type );

The only parameter ($Type) is the type of transaction. This can be either of the following values:

- SQL_COMMIT. All modifications are committed and saved to the database.
- SQL_ROLLBACK. All modifications are ignored and the database remains as it was before any modifications were made to it.

The Transact() method will return TRUE if the transaction was successful and FALSE if it failed.

Note: Not all ODBC drivers support the Transact() method. You can use the GetFunctions() method to check whether your driver does support it.
Before using the Transact() method, you need to make sure that the autocommit mode is turned off. Example 7.42 describes how to turn autocommit off as well as how to use Transact().

Example 7.42 Using the Transact() method

01. use Win32::ODBC;
02. $db = new Win32::ODBC( "MyDSN" ) || die "Error: " . Win32::ODBC::Error();
03.
04. $db->SetConnectOption( $db->SQL_AUTOCOMMIT, $db->SQL_AUTOCOMMIT_OFF );
05. $Query = "INSERT INTO Foo (Name, Age) VALUES ('Joe', 31 )";
06. if( ! $db->Sql( $Query ) )
07. {
08.     ...process something that sets $bSuccess...
09.
10.     if( $bSuccess == 1 )
11.     {
12.         $db->Transact( $db->SQL_COMMIT );
13.     }
14.     else
15.     {
16.         $db->Transact( $db->SQL_ROLLBACK );
17.     }
18. }
19. $db->Close();

Some SQL statements (UPDATE, INSERT, DELETE, and sometimes SELECT) return a value that represents the number of rows that were affected by the statement. Not all drivers support this. If it is supported, however, you can obtain this number with the RowCount() method:

$db->RowCount();

The return value is either the number of rows affected or -1 if the number of affected rows is not available. The -1 value can be a result of an ODBC driver that does not support this function (such as the Access driver).

So far, this chapter has discussed the simplest usage of the Win32::ODBC extension. This is how most scripts use it; however, Win32::ODBC provides access to most of the ODBC 2.0 API. This means that if you are familiar with ODBC you can control your interactions with databases using some pretty powerful features, including fetching rowsets, cursor control, and error processing.

Suppose that you submit a query to a SQL Server. This query will return 10,000 rows and you need to process each row. Every time you use the FetchRow() method, a command will be sent to the server requesting the next row.
Depending on the network traffic, this can be quite a slow process because your ODBC driver must make 10,000 network requests and the SQL Server must respond with data 10,000 times. Add on top of this all the data required to package this network data up (such as TCP/IP headers and such) and you end up with quite a bit of network traffic. To top it all off, each time you fetch the next row your script must wait until the request has made it back from the server. All this can cause inefficient and slow response times. If you could convince the ODBC driver to always collect 1,000 rows from the server each time a FetchRow() was called, the driver would only have to call to the server 10 times to request data. This is where the concept of a rowset comes in. When you fetch rows, the ODBC driver really fetches what is known as a rowset. A rowset is just a collection of rows. By default a rowset consists of one row. Typically this configuration suffices, but it can be changed. You can change the number of rows that make up a rowset by calling the SetConnectOption() method specifying the SQL_ROWSET_SIZE option. Once done, the advanced options of the FetchRow() method can be used to obtain a desired rowset. Then by just resetting the rowset size to 1, a regular call to FetchRow() will retrieve one row at a time. Refer to the next section, "Advanced Row Fetching," as well as Example 7.44 for more details. Although the FetchRow() method was described earlier, you need to revisit it now. A few parameters that can be passed into it need some explaining: ( $Result[, @RowResults] ) = $db->FetchRow( [$Row [, $Type]] ); If no parameters are passed in, the method will act as was described in the section titled "Processing the Results." The first parameter ($Row) is a numeric value that indicates a row number. The second value ($Type) indicates the mode in which the row number (the first parameter) will be used. The following list shows possible values for this parameter. 
If this parameter is not specified, the default value of SQL_FETCH_RELATIVE will be used.

- SQL_FETCH_NEXT. Fetch the next rowset (typically this is just one row) in the dataset. This will disregard the row value passed into FetchRow(). If this is the first FetchRow() to be called, this is the equivalent of SQL_FETCH_FIRST.
- SQL_FETCH_FIRST. Fetch the first rowset in the dataset.
- SQL_FETCH_LAST. Fetch the last rowset in the dataset.
- SQL_FETCH_PRIOR. Fetch the previous rowset in the dataset. If the cursor is positioned beyond the dataset's last row, this is equivalent to SQL_FETCH_LAST.
- SQL_FETCH_ABSOLUTE. Fetch the rowset that begins at the specified row number in the dataset. If the row number is 0, the cursor is positioned before the start of the result set (before physical row number 1).
- SQL_FETCH_RELATIVE. Fetch the rowset beginning with the specified row number in relation to the beginning of the current rowset. If row is 0, the current rowset will not change (instead, it will be updated).
- SQL_FETCH_BOOKMARK. This is not supported, but is included for completeness.

The FetchRow() method will move the rowset to the new position based on the row and mode parameters. There are two return values for the FetchRow() method. The first indicates whether the fetch was successful. If no optional parameters are passed in, it will return TRUE if successful and FALSE otherwise. If optional parameters are passed into the method, however, the first return value will be SQL_ROW_SUCCESS if successful; otherwise, it will be some other value. The second return value is only returned if optional parameters are passed into the method. This second return value is an array of values, consisting of a return value for each row fetched in the rowset. By default ODBC drivers use a value of 1 for the rowset size, so FetchRow() will only return one value (because it only fetches one row).
If you change the rowset to greater than one, however, your return value array will reflect the number of rows in your rowset. Each value in the returned array corresponds with one of the values in the following list:

- SQL_ROW_SUCCESS. The row was successfully fetched.
- SQL_ROW_UPDATED. The row has been updated since the last time it was retrieved from the data source.
- SQL_ROW_DELETED. The row has been deleted since the last time it was retrieved from the data source.
- SQL_ROW_ADDED. The row has been added since the last time it was retrieved from the data source.
- SQL_ROW_ERROR. The row was unable to be retrieved.

Note that it is possible to set your cursor to a type that will reflect alterations made to the data. Suppose, for example, that you are retrieving all the rows from a data source with "SELECT * FROM Foo". Your script then begins to retrieve rows one at a time. While your script is fetching row 100, another program has deleted row 195 from the database. When you get to row 195, your ODBC driver will try to fetch it even though it has been deleted. This is because when you execute your query the driver receives a list of row numbers which it will fetch. Because row number 195 was included in this list, the driver will try to fetch it when requested to, regardless of whether it has since been deleted. The driver will report the fact that it has been deleted by returning a value of SQL_ROW_DELETED in the return array. For the following example, assume that the rowset size was set to ten so that every time you perform a FetchRow(), ten rows are actually fetched. When you fetch the rowset starting at 190, FetchRow() will return a ten-element array that will consist of the values shown in Example 7.43.

Example 7.43 The returned row result array from FetchRow() indicating that the fifth row has been deleted

01. $RowResults[0] == SQL_ROW_SUCCESS;
02. $RowResults[1] == SQL_ROW_SUCCESS;
03. $RowResults[2] == SQL_ROW_SUCCESS;
04. $RowResults[3] == SQL_ROW_SUCCESS;
05.
$RowResults[4] == SQL_ROW_SUCCESS;
06. $RowResults[5] == SQL_ROW_DELETED;
07. $RowResults[6] == SQL_ROW_SUCCESS;
08. $RowResults[7] == SQL_ROW_SUCCESS;
09. $RowResults[8] == SQL_ROW_SUCCESS;
10. $RowResults[9] == SQL_ROW_SUCCESS;

Example 7.44 demonstrates how you can use the extended features of FetchRow() to jump ahead several rows in a result set.

Example 7.44 Advanced and simple example of FetchRow()

01. use Win32::ODBC;
02. $db = new Win32::ODBC( "MyDsn" ) || die "Error: " . Win32::ODBC::Error();
03.
04. # Prevent the cursor from closing automatically
05. $db->SetStmtCloseType( SQL_DONT_CLOSE );
06.
07. # Change our cursor type to static (assuming the driver supports it)
08. $db->SetStmtOption( $db->SQL_CURSOR_TYPE, $db->SQL_CURSOR_STATIC );
09.
10. if( ! $db->Sql( "SELECT * FROM Foo" ) )
11. {
12.     if( ( $db->FetchRow( 9000, SQL_FETCH_ABSOLUTE ) )[0] == $db->SQL_ROW_SUCCESS )
13.     {
14.         do
15.         {
17.             my( %Data ) = $db->DataHash();
18.             print "User: $Data{Name}\n";
19.         } while( $db->FetchRow() );
20.     }
21. }
22. $db->Close();

The operation in Example 7.44 can save quite a bit of both time and network bandwidth (if the ODBC driver is talking to a network database server) because the first FetchRow() (line 12) will position the cursor to point at row 9,000 right away. The alternative would be to walk through the database one row at a time until it got to the nine-thousandth row. At this point, the column's data will be retrieved and printed for the rest of the remaining rows by using the simple FetchRow() method.

Those familiar with SQL will be happy to know that cursors are supported. If you have no idea what a cursor is, you probably don't need to be concerned about them. Cursors are beyond the scope of this book, so very little time will be spent on this topic. Several ODBC functions pertain to their use, however, and these are described in this section. Basically, the cursor is an indicator that points to the current row in a given rowset.
This is just like a cursor in a DOS window; it shows you where your current position in the window is. The ODBC cursor just shows you where the current row is from which you will be retrieving data. Win32::ODBC, by default, resets the state of all cursors automatically for you whenever you use Sql() or any other method that returns rows of data (such as GetTypeInfo() and any of the cataloging functions). Whenever these methods are used, the current cursor is dropped; that is to say, it is destroyed and forgotten. This automatic dropping can be a problem when you need to keep a cursor open or if you have named a cursor and need to retain its name. This handling of the cursor can be overridden by changing the statement close type with the following method:

$db->SetStmtCloseType( $CloseType[, $Connection ] );

The first parameter ($CloseType) is one of the close types documented in the following list:

- SQL_CLOSE. The current statement will not be destroyed, only the cursor. For all practical purposes, this is the same as SQL_DROP.
- SQL_DROP. The current statement is destroyed as well as the cursor.
- SQL_DONT_CLOSE. This will prevent the cursor from being destroyed anytime new data is to be processed.
- SQL_UNBIND. All bound column buffers are unbound. This is of no use and is only included for completeness.
- SQL_RESET_PARAMS. All bound parameters are removed from their bindings. Because Win32::ODBC does not yet support parameter binding, this is of no use and is only included for completeness.

The optional second parameter ($Connection) is the connection number for an existing ODBC connection object; this is the object whose close type will be set. If this is empty, the current object is used. The SetStmtCloseType() method will return a text string indicating which close type is set on the object.
Yet another method will enable you to retrieve the current close type on a connection object:

$db->GetStmtCloseType( [$Connection] );

The optional first parameter ($Connection) is an ODBC connection object number that indicates which object will be queried. If nothing is passed in, the current object is assumed. Just like the SetStmtCloseType() method, GetStmtCloseType() will return a text string indicating which of the five values documented in the list for SetStmtCloseType() is the close type.

A connection's cursor can be dropped by force if you need to. This can be very handy if you have previously set its close type to SQL_DONT_CLOSE:

$db->DropCursor( [$CloseType] );

The first optional parameter ($CloseType) indicates how the cursor is to be dropped. Valid values are the same as documented for SetStmtCloseType(), with the exception of SQL_DONT_CLOSE; this value is not allowed. If no value is specified, SQL_DROP is assumed. Note that this will drop not only the cursor but also the current statement and any outstanding and pending result sets for the connection object.

All cursors that are created are given a name, either by the programmer or by the ODBC driver. This name can be useful in queries and other SQL statements that allow you to use the cursor name, such as "UPDATE table ... WHERE CURRENT OF cursorname". To retrieve the cursor name, you use the GetCursorName() method:

$db->GetCursorName();

The method will return a text string that is the name of the cursor. If no cursor is defined, the method returns undef. If you want, you can name the cursor yourself. This may be necessary for stored procedures that require particular cursor names to be set:

$db->SetCursorName( $Name );

The first and only parameter ($Name) is the name of the cursor. Each ODBC driver defines the maximum length of its cursor names, but the ODBC API recommends not exceeding 18 characters in length.
If the cursor's name is successfully set, the method returns TRUE; otherwise, it returns FALSE.

Note: A cursor's name will be lost when the cursor is reset or dropped. For this reason it is important that you set the statement close type to SQL_DONT_CLOSE before you set the cursor name. Otherwise, the moment the SQL query is generated the cursor will be destroyed and the name will be lost. Win32::ODBC closes cursors before executing SQL statements unless the close type is set to SQL_DONT_CLOSE. You can always force the cursor to be dropped with the DropCursor() method.

ODBC connection objects do not share anything with each other. For example, if you create two objects ($db1 and $db2), even from the same database, the two objects cannot communicate with each other. If $db1 has created a dataset with a named cursor, $db2 cannot access $db1's data. There is a way, however, for $db1 to talk with $db2: cloning. When you clone a Win32::ODBC object you are creating a duplicate object. This cloned object talks with the same data source and for all practical matters is the same as the original object. Because these objects are separate and discrete from each other, they can run separate queries and process results as if they were two totally separate connections. The wonderful nature of cloned objects is that they can talk with each other. If one object issues a query that produces a dataset, the other object can use that dataset for its own query. Cloned objects are created using the new command:

new Win32::ODBC( $Object );

If you pass in another Win32::ODBC connection object rather than a DSN name, the object will be cloned; that is, another object will be created which shares the same connection to the database. In technical terms, the objects will share the same environment and connection handles although their statement handles will be unique. This is used mostly in conjunction with cursor operations.
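As a sketch of how cloning and a named cursor might be combined in a positioned update (the DSN "MyDSN", the table Foo, and its Name/Age columns are all placeholders; positioned updates also require driver support, so treat this as illustrative only):

```perl
use Win32::ODBC;

# Original connection: keep the cursor (and its name) alive across statements.
my $db1 = new Win32::ODBC( "MyDSN" ) || die "Error: " . Win32::ODBC::Error();
$db1->SetStmtCloseType( SQL_DONT_CLOSE );
$db1->SetCursorName( "MyCursor" ) || die $db1->Error();

# Clone: shares environment and connection handles, but has its own statement handle.
my $db2 = new Win32::ODBC( $db1 ) || die "Error: " . Win32::ODBC::Error();

if( ! $db1->Sql( "SELECT Name, Age FROM Foo" ) )
{
    while( $db1->FetchRow() )
    {
        my %Data = $db1->DataHash();
        # The clone can refer to the original object's cursor by name.
        $db2->Sql( "UPDATE Foo SET Age = " . ( $Data{Age} + 1 ) .
                   " WHERE CURRENT OF MyCursor" );
    }
}
$db1->Close();
```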
An ODBC error is an error generated by either the ODBC Manager or an ODBC driver. This is a number that refers to an error that occurred internally to ODBC. In practice this error is really only useful if you have an ODBC API manual or information on error numbers for a particular ODBC driver. The SQL state, however, is a standardized string that describes a condition, and all ODBC drivers adhere to it. Win32::ODBC tracks ODBC errors in two ways. First, the module itself will always track the last ODBC error that occurred. Second, a particular ODBC connection object will track its own errors. You retrieve the error information by calling Error() either as an object method or as a module function:

Win32::ODBC::Error();
$db->Error();

If used as a module function (the first example), it will report the last error that the ODBC module generated, regardless of which connection was responsible for the error. If used as an object's method (the second example), however, the last error that the particular object generated will be reported. When retrieving error information with the Error() method (or function), the results could be in one of two formats, depending upon the context of the assignment: as an array in a list context or as a text string in a scalar context. If a call were made to Error() in an array context:

@Errors = $db->Error();

the returning array, @Errors, would consist of the following elements:

( $ErrorNumber, $TaggedText, $ConnectionNumber, $SQLState )

Table 7.5 lists a description for each element. If a call were made to Error() in a scalar context:

$Error = $db->Error();

the result would resemble the following string:

"[Error Number] [Connection Number] [Tagged Text]"

Table 7.5 lists descriptions for these elements.
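A brief sketch of the two calling contexts (the DSN and table names are placeholders; this is only runnable on a machine with a configured ODBC data source):

```perl
use Win32::ODBC;

my $db = new Win32::ODBC( "MyDSN" );
if( ! $db )
{
    # No object exists yet, so use the module function form.
    die "Connect failed: " . Win32::ODBC::Error() . "\n";
}

# Sql() returns an error number on failure, so a TRUE result means the query failed.
if( $db->Sql( "SELECT * FROM NoSuchTable" ) )
{
    # List context: the individual error fields.
    my( $ErrNum, $ErrText, $ConnNum, $SQLState ) = $db->Error();
    print "Error $ErrNum on connection $ConnNum: $ErrText ($SQLState)\n";

    # Scalar context: one preformatted error string.
    my $Error = $db->Error();
    print "$Error\n";
}
$db->Close();
```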
Table 7.5 Fields returned by the Error() function and method

Field    Description

The current SQL state can be retrieved by using the SQLState() method:

$db->SQLState();

The value returned constitutes the last SQL state of the particular ODBC connection object. Because the SQL state represents the state of the connection and is not driver specific (as the error number is), it is the same for all ODBC drivers. Any good book on the ODBC API will list all possible SQL state codes.

Interacting with databases is not a very difficult thing to do, but having to interact with several, potentially different, databases can cause nightmares because of a lack of conformity across them. The expectation of writing a Perl script to recognize all different types of databases is just too high to be realistic. The ODBC API is commonly found on Win32 machines, and it addresses this issue of database conformity. By using ODBC, a programmer can create huge scripts that will work with any database and can be easily transported from machine to machine.

The Win32::ODBC extension has provided thousands of CGI, administrative, and other types of scripts with access to ODBC. This extension provides a relatively thin layer of abstraction from the ODBC API that is ideal for anyone who is prototyping an application in Perl that will later be programmed in C or C++ against the ODBC API. Any coder who is familiar with the ODBC API will feel at home using this extension. If a programmer is not familiar with ODBC, the extension hides most of the tedious work required so that the coder can focus on manipulating data and not worry about how to retrieve it.

You can also access ODBC data sources in other ways, such as using the Perl DataBase Interface (DBI), which has a basic ODBC driver. Considering that the DBI ODBC extension exists on both Win32 and UNIX platforms, it is ideal for cross-platform scripts.
Additionally, the Win32::OLE extension (refer to Chapter 5) provides access to Microsoft's ActiveX Data Objects (ADO) and other COM-based ODBC implementations.
Groovy - Asserting that all JSON response elements have content

I have a test scenario with hundreds of response elements and I am trying to script an assertion to ensure that none of the elements returned as null. There must be an easy way to use a wildcard to search the entire response body for a null field, but I can't figure out how to do it. Can anyone suggest a better solution to ensure no nulls were returned?

Response snippet:

{
  "errors" : [ ],
  "transactionId" : "123456789",
  "dateTime" : "2017-12-08T22:07:43.099Z",
  "observed" : {
    "date" : "2017-12-08",
    "obs" : 39,
    "conditions" : {
      "text" : "Clear",
      "code" : 85
    }
  },
  etc.....

And the code I have so far:

import com.parasoft.api.*
import groovy.json.JsonSlurper
import java.util.*
import soaptest.api.*

void extractAndAssertResponse(Map input, ScriptingContext context) {
    def slurper = new JsonSlurper()
    String responseBody = input.get(SOAPUtil.XML_RESPONSE)
    Application.showMessage("RESPONSE_BODY: " + responseBody)
    def json = slurper.parseText(responseBody)
    assert json.transactionId != null
    assert json.dateTime != null
    assert json.observed.date != null
}

Hello Gambit,

If you don't expect "null" to be anywhere in your payload, it may be easier to use the "Search Tool" to throw an error if the particular string is found. For example:

Hi OmarR,

The search tool looks like it would work fine for the literal "null" field value, but how would I account for the scenario of a return of ""?

You could add "" as one of the search terms in the tool. For example:

The Json Assertor could also accomplish the same for specific elements in the payload:

Give these tools a try and let us know if you have any questions about them.

Hi Omar,

I've tested this solution and it works for my problem. Thanks!
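For completeness, the whole-payload check can also be done directly in the Groovy script by walking the parsed structure recursively and collecting the paths of any null or empty-string values. This is plain Groovy (it assumes `responseBody` is already populated as in the script above), not a Parasoft-specific API:

```groovy
// Recursively collect the paths of all null or empty-string leaf values.
def findEmpty(node, path = "", found = []) {
    if (node instanceof Map) {
        node.each { k, v -> findEmpty(v, "${path}.${k}", found) }
    } else if (node instanceof List) {
        node.eachWithIndex { v, i -> findEmpty(v, "${path}[${i}]", found) }
    } else if (node == null || node == "") {
        found << path
    }
    return found
}

def json = new groovy.json.JsonSlurper().parseText(responseBody)
def empties = findEmpty(json)
assert empties.isEmpty() : "Null/empty elements at: ${empties}"
```

Note that an empty array such as `"errors" : [ ]` passes this check; only scalar values that are null or `""` are flagged.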
https://forums.parasoft.com/discussion/comment/9062
I don't think this is PEAR related. I tried the newest PEAR packages on PHP 4.3.1 but it doesn't solve the problem; I even tried not using PEAR and calling the plain interbase functions instead. If I do SELECT COUNT(*) with the same WHERE expression it reports N rows, but if I try to fetch them, no content comes back. So it's an Interbase problem then.

Please try using this CVS snapshot:
For Windows:

Hi, I just downloaded php4-STABLE, but it doesn't even make it through configure. The errors are:

checking for InterBase support... yes
checking for isc_detach_database in -lgds... no
configure: error: libgds not found! Check config.log for more information.

Config log:

configure:37594: gcc -o conftest -g -O2 -L/opt/interbase/lib -lgds conftest.c -lgds -lresolv -lm -ldl -lnsl 1>&5
/opt/interbase/lib/libgds.so: undefined reference to `crypt'
collect2: ld returned 1 exit status
configure: failed program was:
#line 37583 "configure"
#include "confdefs.h"
/* Override any gcc2 internal prototype to avoid an error. */
/* We use char because int might match the return type of a gcc2
   builtin and then its argument prototype would still apply. */
char isc_detach_database();
int main() {
    isc_detach_database();
    return 0;
}

I've tried this with two stable versions of Firebird: FirebirdSS-1.0.0.796-0 and FirebirdSS-1.0.2.908-1. I can't switch to beta versions of Firebird. Thanks much, Jerry

These are two different problems; it's better to put this bug in the Compile Failure category (Interbase related ----> Compile Failure). The compile issue you've hit is real for Firebird: Firebird users can't pass the configure test, because the libgds test requires libcrypt too, otherwise it fails. Interbase users have no such problem. The question about ibase_fetch_row sounds strange... does it fail on some tables only?
I'm using 4.3.1 + Firebird (without PEAR) and I have no problem. However, if you can reproduce your error with ibase_fetch_row in a few lines of code, feel free to open a new bug. (See the linked report for an excellent example of how to report a bug.)

The problem began differently: sniper pointed me at the newest PHP version to try, and in this new version I got problems compiling the interbase extension. The funny thing is that I do have libcrypt on my system (and it's on my ld path). I posted my table structure in one of the previous messages; could you please try the table I posted? I've tried it with and without PEAR, and the results are the same: no data can be retrieved, but the row count of the same SELECT statement works fine. Thanks much in advance.

Maybe I didn't explain myself well...
1) The compile failure exists for ALL Firebird users; it's our test for libgds in ./configure that is not correct. If we don't fix it you won't be able to compile PHP with Firebird at all, which is why I turned your bug into a compile error. So right now let's solve the compile error, ok?
2) Open a NEW bug for ibase_fetch_row (and 100% without PEAR). I need to reproduce the error you're talking about for ibase_fetch_row, but from your information I can't. All the tests I've done work fine.

Daniela, the problem is in ext/interbase/config.m4, so please don't make this a general "compile failure".
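What configure is doing here is compiling a stub that calls isc_detach_database and linking it against -lgds; the link fails because libgds.so itself depends on libcrypt, which the generated link line omits. The essence of such a check, "does this shared library exist and export this symbol?", can be sketched with Python's ctypes. In this sketch, libm and cos stand in for libgds and isc_detach_database, since Firebird won't be installed on most systems:

```python
import ctypes
import ctypes.util

def has_symbol(libname, symbol):
    """Return True if the shared library is found and exports the symbol."""
    path = ctypes.util.find_library(libname)
    if path is None:
        return False
    try:
        lib = ctypes.CDLL(path)
        getattr(lib, symbol)  # raises AttributeError if the symbol is absent
        return True
    except (OSError, AttributeError):
        return False

# Stand-in for configure's "checking for isc_detach_database in -lgds":
assert has_symbol("m", "cos")
assert not has_symbol("m", "isc_detach_database")
```

Note that CDLL resolves transitive library dependencies automatically, which is exactly what the broken ./configure link line fails to do when it leaves out -lcrypt.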
https://bugs.php.net/bug.php?id=23099
This article has been excerpted from the book "Visual C# Programmer's Guide".

When you have to compile complex applications, you need to pass several other parameters to the compiler. Let's address each of the C# compiler options.

The @ Option

As shown in Listing 1, the @ option is used to specify a response file (denoted with the extension .rsp) that contains the compiler options to be used while compiling. The response file is a normal text file that can contain compiler switches and/or file names to be compiled. The pound symbol (#) is used to write comments. Writing options in a response file is similar to typing them at the command prompt. Specifying these options in a response file comes in handy when you want to perform complex compilations during application design. For example, if you have to specify many assemblies to be referenced in your application, a normal compile statement would span many lines! You can write a response file containing the references to all the assemblies and reuse this file every time you compile the code.

You can specify additional options along with the response file. The compiler processes these options sequentially as they are encountered. Therefore, command-line arguments can override previously listed options in response files. Conversely, options in a response file will override options listed previously on the command line or in other response files.

Listing 1: Sample Response File (winform.rsp)

#Sample file containing general compiler options for Windows forms
#Usage: csc @winform.rsp yourCode.cs
#Usage 2: csc @winform.rsp /out:MyApplication.exe yourCode.cs
/target:winexe
/r:System.dll;System.Windows.Forms.dll;System.Data.dll;System.Drawing.dll

The /? or /help Option

The /? or /help option displays on the console a list of all the compiler options and their usage.

The /addmodule Option

The /addmodule option prompts the compiler to import metadata from the specified modules into the assembly you are compiling.
You can specify two or more modules in a single statement by separating them with a comma or a semicolon, as shown:

csc /addmodule:module1.netmodule;module2.netmodule myAssembly.cs

The /baseaddress Option

The /baseaddress option lets you specify, in hexadecimal, decimal, or octal form, the preferred memory base address at which the .NET runtime should load your dynamic-link library (DLL).

csc /t:library /baseaddress:0x1111000 urClass.cs

Generally, the base address is determined by the .NET runtime. This option is ignored if your output file is not a DLL.

The /bugreport Option

As its name suggests, the /bugreport option is used to report source code bugs. It creates a text file containing the source code; the versions of the operating system, compiler, and .NET runtime; and the programmer's description of the problem and the expected result. The following example shows how to use the /bugreport option:

csc /bugreport:report.txt myCode.cs

The /checked Option

Sometimes the result of an integral operation falls outside the permissible range of the data type and causes a runtime exception. The /checked option eliminates the potentially drastic effects that can occur when a value falls outside the allowed range. For example, the range of the type Int16 extends to 32767. Generally, during runtime, if a variable of type Int16 is assigned a value greater than 32767 and that value is not within the scope of a checked statement, the variable's value silently wraps around and the program executes normally. If it's important to you that your variables always hold the right value, then compile with the option /checked+. This results in a runtime exception every time a variable exceeds its maximum value.

The /codepage Option

The /codepage option is useful when you have written source code in code pages in a format other than Unicode or UTF-8. Code pages are character sets, and they vary for different languages.
For example, the Hindi language has multibyte characters. The /codepage option takes the ID of the code page to be used to compile the source code.

The /debug Option

You can use the /debug option to produce additional debugging information for your application. This option accepts either + or - to indicate creation or omission of debugging information. To expand the functionality, you can specify /debug:full to create debugging information along with the capability of attaching a debugger. As an alternative, /debug:pdbonly does not allow you to debug the source code if you attach the debugger to a running program; the pdbonly option will only display assembly code from the running program.

The /define Option

When you use preprocessor directives in your code, you can define multiple symbols for your program via the /define option. Preprocessor directives are used to perform conditional compilation; that is, they mark sections of the code to be compiled as per the options defined at compilation time. For example, suppose you mark debugging counters and trace statements with the DEBUG preprocessor directive. Only if you define the DEBUG directive at compile time (as shown) will these statements get compiled.

csc /define:DEBUG;Test myCode.cs

The /doc Option

The /doc option allows you to produce an Extensible Markup Language (XML) file containing the documentation you have defined within your source code using the special documentation comments. This option does not work when you are using the /incremental+ option. An example of the syntax follows:

csc /doc:MyCode.xml myCode.cs

The /filealign Option

The /filealign option lets you specify the size of different sections within the compiled file. The sizes you can specify in bytes are 512, 1024, 2048, 4096, 8192, and 16384. If a section cannot fit within the given size, then two or more sections will be created in multiples of the size specified.
This option is useful, for example, if your code is to be used on small mobile devices.

The /fullpaths Option

Enabled by default, the /fullpaths option is used to show the full path to the source code file that is causing errors or warnings during compilation.

The /incremental Option

The /incremental option incrementally builds your applications and can be used along with the /debug+ option. The first time you use this option, an .incr file is created alongside the compiled assembly to contain all the information about the current build. The next time you compile your code, only those portions of the code that have changed are recompiled and the .incr file is updated. This option has significant impact only when you compile many small files.

The /lib Option

You use the /lib option to specify additional directories where the compiler can find the libraries to be referenced during compilation. If you store your DLL assemblies in various directories, use this option to supply the C# compiler with the paths to those directories, as in the following example:

csc /lib:c:\csharp\libraries /r:myLibrary.dll myCode.cs

The /linkresource Option

The /linkresource option provides a link to a resource file in the assembly you are compiling. It does not store the resource file within the assembly; it only provides a reference to the resource file. Therefore, you must distribute the resource file along with your application files. The following example shows how you might use this option:

csc /linkresource:myrs.resource myCode.cs

The /main Option

The /main option is useful only when you compile an executable (.exe) application and your source code has multiple classes that define the Main method. Because an application can have only one entry point, you use this option (as shown in the example) to specify to the compiler which class's Main method should be treated as the entry point into the application.
csc /main:myCode myCode.cs yourCode.cs

The /nologo Option

You use /nologo to suppress the Microsoft banner that usually appears every time you use the compiler.

The /nostdlib Option

If you have created your own System namespace implementation, you may use the /nostdlib option to prevent the compiler from loading the mscorlib.dll that holds the System namespace.

The /noconfig Option

The /noconfig option prevents the compiler from using the global and local response files defined within the csc.rsp file. If you do not need the default compiler options that are defined in csc.rsp, you can use the /noconfig compiler option.

The /nowarn Option

The /nowarn option forces the C# compiler to suppress the specific warnings you've specified. The C# reference documentation lists all the numbers associated with C# warnings. With /nowarn, you can specify the number of any warning you don't want displayed during compilation. The following example shows how to suppress the warning CS0029 - Cannot implicitly convert type 'type' to 'type':

csc /nowarn:29 myCode.cs

The /optimize Option

The /optimize option allows the compiler to optimize compiled code to make it execute faster and more efficiently. Use this option when compiling the release versions of your assemblies.

The /out Option

You use the /out option to specify the output file name of the file compiled by the C# compiler. As shown in the following example, you can also specify a path along with the file name after the out directive to specify where the compiled file will be stored:

csc /out:.\bin\website.dll /t:library myCode.cs

The /recurse Option

The /recurse option is used to compile all files bearing the specified file name within the specified directory and its child directories.
You can also use wildcards to specify the names of files to be compiled, as shown in this example:

csc /recurse:*.cs /out:myCode.dll /t:library

The /reference Option

The /reference option is used to reference the external assemblies you have used in your code. The compiler reads the public type information from the external assemblies and provides the necessary metadata within the assembly you are currently compiling. The following example shows the syntax used with /reference:

csc /reference:urCode.dll myCode.cs

The /resource Option

You can use the /resource option as shown to embed a resource file within the assembly you are compiling:

csc /resource:fileres.resource myCode.cs

Unlike the /linkresource option that merely links the resource file, /resource actually embeds the resource into the assembly.

The /target Option

You use the /target option to tell the compiler which of four kinds of output files you want to produce:

exe - a console application (.exe)
winexe - a Windows application (.exe)
library - a library DLL (.dll)
module - a .NET module (.netmodule)

The example specifies creation of a Windows application:

csc /target:winexe /out:Application.exe myFile.cs

The /unsafe Option

When you have included unsafe code (e.g., pointers) within your C# source code, the source code will not compile unless you use the /unsafe option.

The /utf8output Option

The /utf8output option is used for situations in which the output generated by the compiler does not render properly on certain international language packs. You may use this option to redirect the output to a separate file in UTF-8 format.

The /warn Option

With the /warn option, you can set the warning level displayed by the C# compiler. Values range from 0, which turns off the warnings, to 4, which reports all warnings and includes additional information.

The /warnaserror Option

The /warnaserror option reports all warnings as errors at compile time.
When you use this option, any warning issued by the compiler causes the code not to compile.

The /win32icon Option

You use the /win32icon option as shown to specify inclusion of an icon file when you compile a Windows executable file:

csc /win32icon:myApp.ico /target:winexe /out:Application.exe myCode.cs

This option gives the Windows application the desired look in Windows Explorer.

The /win32res Option

You use the /win32res option to include a Win32 resource file in your compiled code. The Win32 resource file can contain icons, cursors, and bitmaps. An example of the syntax for this option follows:

csc /win32res:oldres.res /target:winexe myCode.cs

To include .NET resource files rather than Win32 resource files, use the /resource option.

Conclusion

I hope this article has helped you understand the C# command-line compiler options in .NET. See my other articles on the website on .NET and C#.

©2016 C# Corner. All contents are copyright of their authors.
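Since the response-file format from the @ option section is just plain text, with options and file names line by line and # starting a comment, generating one programmatically is trivial. Here is a hypothetical Python helper, not from the book, that emits files shaped like the winform.rsp listing above:

```python
def write_response_file(path, target, references, sources=()):
    """Write a csc response (.rsp) file: '#' lines are comments,
    and each option or source file appears on its own line."""
    lines = ["# generated response file", f"/target:{target}"]
    if references:
        lines.append("/r:" + ";".join(references))
    lines.extend(sources)
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

write_response_file(
    "winform.rsp",
    "winexe",
    ["System.dll", "System.Windows.Forms.dll", "System.Drawing.dll"],
)
# invoke as: csc @winform.rsp yourCode.cs
```

This keeps long reference lists out of your build scripts; the command line stays a short `csc @file.rsp` no matter how many assemblies you reference.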
http://www.c-sharpcorner.com/uploadfile/prvn_131971/C-Sharp-command-line-compiler-options/
Relational Databases are killing content management
Why LAMP is wrong for content management

LAMP (Linux/Apache/MySQL/PHP, Perl, Python) is THE open-source software solution stack for programmers and administrators doing almost anything, specifically rapid application development on the web. There are numerous frameworks that pull all the pieces together in hopes that, by and large, anyone can build robust and usable web applications and websites. This includes content management. For the most part this works well, even through the hurdles of getting all of the specific components to work together nicely. Unfortunately the masses have taken "building anything" to mean that LAMP is the ONLY solution for building applications on the web. So the de facto standard has been to look at a problem and automatically assume LAMP as the underlying technology for the space. Blog? LAMP. Twitter-like site? LAMP. Website for my company? LAMP. Simple web page with a small contact page? LAMP. To be fair, this works well for many context spaces, especially because the LAMP system is highly modular: one can literally pull one piece of the overall solution set and replace it with something completely different. However, as any engineer and architect will tell you, when building a bridge it helps to know what is going to travel over it, under it, through it and, unfortunately, into it. If not, you end up with something like the Ironworkers Memorial Bridge, which collapsed while under construction on June 17, 1958, due to a miscalculation of the weight-bearing capacity of a temporary arm. This is generally what content management tends to resemble when it meets the architecture of LAMP. The software solution set is a complete failure for content management when applied as a de facto standard solution, primarily because of the M in LAMP. In this case I mean MySQL, but the problem in actuality is any RDBMS, or Relational Database Management System.
It's not the only problem, but it's one of the most critical components to getting the idea of content management right. Let's start by taking apart the LAMP acronym.

Linux: As far as the operating system goes, Linux is a tried and true system which has been under development for nearly two decades. It has consistently pushed the envelope in utilizing the underlying hardware to provide a robust and capable operating system that can scale from the server to the desktop.

Apache: As far as web serving goes, Apache is again tried and true and has been under development for nearly as long, with many lessons learned. It is the de facto standard server for serving static and dynamic content, with an extensible and modular system which makes it capable of supporting many different applications and setups.

MySQL: The relational database management system that had the honor, for many, of being the first RDBMS they used, and the cut-throat, bare-bones data solution. In early MySQL releases there was no notion of ACID (Atomicity, Consistency, Isolation, Durability) in the project. This overhead was seen as a problem that could be solved higher in the application space, but for the most part everyone using MySQL during that time had no need for such compliance. This made MySQL extremely fast and, of course, with the exception of Postgres (which did concentrate on these things), it was open source and freely available. Things have changed much since those days: MySQL has generally become ACID aware and is, surprisingly, owned by Oracle.

Perl/PHP/Python: For the most part Perl as a language was dominant in the 90's. Since then it has fallen behind in the web application space. There are numerous reasons for this, but one of the major reasons is that many new programmers have written Perl code that is difficult to read and maintain.
The language wasn't originally intended for large object-oriented projects (Ruby, which has syntax largely like Perl, was written primarily with object orientation at its core; the Ruby language has a vibrant community and a popular web app framework called Ruby on Rails, or ROR), and maintenance has generally become a nightmare. This isn't a black mark against the language beyond it allowing poor practices that have seemingly become embedded in the programmer. As those new and junior programmers become more literate their Perl code improves, but those old projects and web applications they've written or helped write don't fare as well. There tends to be a group-think backlash against the language because of that: "It was written in Perl? Ugh... nightmare". PHP tends to have the exact same problem of a low cost of entry, except the community around PHP is wholly vibrant and releases often. PHP as a language has had many of the same problems as Perl in regards to building a large software project, most of which have been remedied with time: namespace support, halfway usable object orientation support, etc. It still lacks many commercial features that you can't readily use without purchasing them or their frameworks. Python, on the other hand, has a much steeper learning curve than the previous two. However, it's still a very easy language to learn and it enforces many good practices out of the box that the other two languages don't: proper formatting, a useful object-oriented system, the idea of namespaces, unit tests and code structure. These are all important concepts in architecting software (building your bridge) and it's a viable language. Ruby isn't a P but needs mention, as it's also a dominant language in use. It's highly object oriented and, as stated above, has syntax largely like Perl.
It's essentially referred to by some as "Perl done right", and its primary author has stated that "I wanted a scripting language that was more powerful than Perl, and more object-oriented than Python. That's why I decided to design my own language".

So what is the problem with relational database systems and content management?

Relational database systems are from an era where object-oriented programming, by and large, didn't exist. The concept of an object quite frankly was foreign. Most programming had been functional and procedural; no one had any idea how useful the object would become. The general crowd-think was more concerned with "records"; records had worked well since the 1940s and many companies had spent large sums of money setting up internal systems around them: a nice line printer spilling data onto dead trees with a list of users, phone numbers, etc. Pulling out all of this information was easy. However, as time passed and with the advent of the internet, the concept of object orientation became more important. Java came to dominance because of this. We needed to do more than just relate information: we needed to update it in real time, expose it to other in-house systems and to our partner systems. Records in a flat table were simply not enough; we wanted a representation of a user and all of the attributes important to us, updated dynamically as they were changed by our staff, customers or both. Computer languages started to reflect this fact. Most languages started receiving object-orientation methodologies: C got its OO superset in C++, and then there was Objective-C, which got its clothing from Smalltalk, and obviously Java, which sparked a new commercial trend and is probably the most dominant object-oriented language in use today.

Square vs Circle, Cube vs Cylinder, Oil vs Water. Unfortunately, as the way we wrote programs changed, the way we stored the data our programs created or needed did not.
Programmers began writing Object Relational Mappers (ORMs) to map the objects they created in their programs to the way they stored their data. So one would design a user object with a name, address and phone number in a program, then have to create a table for this in the relational database, and then have to map between the two. Obviously, for time-sensitive applications the overhead of conversion became an issue. More importantly, a whole system to manage the consistency between the two became an issue: if one got out of sync with the other it would cause no small amount of trouble for critical applications.

Enter object-oriented databases, or OODBs. A programmer could create the object in his program and store it directly into an object database. Early versions were considered slow and inefficient; however, OODBs tended to hold more data than relational database systems and were generally faster, as there is less to look up and no overhead or extra system to manage between a relational model and an object model. Being more secure as well, it seemed like an easy win; however, there was no uptake. Most commercial organizations were, and still are, used to the idea of a "record", and Oracle as a company is simply a good salesman: they came up with Oracle Object Relational support, or, in simpler terms, a superset of SQL to make Oracle's relational database behave more object-like. Also, the sheer force of SQL and the relational database ecosystem made it hard to see through the clouds. In fact, most people didn't and don't even bother looking. There was no real advantageous reason to go with an object-oriented system if you had an object relational mapper. Summarily, no one took up OODBs except for organizations in the know, primarily scientific and engineering houses who had large amounts of data they needed to warehouse and work on. Everyone else stuck with RDBMSs, until it started hurting them.
They would eventually retool by finding and consulting with Oracle or one of the commercial object-oriented database providers. It's a testament to Oracle's success that they are the ONLY database game in town. Really, what other commercial database company can you refer to off the top of your head? I'll wait... Right. So that leads to the present day, where there is more talk of NoSQL databases (object databases, graph databases, high-performance key/value stores, etc.: things like Hadoop, CouchDB, MongoDB, Redis, Neo4j, AllegroGraph), but not much has changed in the last two decades. This time around things seem much different, and the database playing field is bound to go through a transformation with web semantics and HTML5 database standards. We can only wait and see. In the interim, the previous decades were simply unfortunate for database stores and, summarily, for the content management space, as it is highly object oriented. Your customers want to manage content: videos, users, large lists, blogs, news, images, etc. All of these are objects that need to be stored somewhere, and for the most part that is occurring in a relational database. Which means one has to overcome the problems above, and 9 times out of 10 it requires a lot of time and engineering that simply isn't done properly. Hence, choosing the de facto standard in LAMP, the bridge eventually collapses. A collapse may be downtime, loss of records, constant maintenance, or security threats, all of which can be lessened by building the correct bridge for the problem space. In content management that is an object database, or some combination of NoSQL/relational and object data depending on the application.

How do I change the M in LAMP to an O for object database or something similar? Well, if you plan on managing content there is ZODB, the Zope Object Database for Python, which is part of the overall package for the Plone content management system. There are also DyBase, db4o, Twig, etc.
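To make the ORM-versus-object-database contrast concrete: with an object store you persist the object itself, with no table schema and no mapping layer in between. A toy illustration in Python, using the standard library's shelve module as a stand-in for a real OODB such as ZODB (shelve just pickles objects under string keys, so this is a sketch of the idea, not production storage):

```python
import os
import shelve
import tempfile

class User:
    """A plain object: no CREATE TABLE, no column mapping, no ORM."""
    def __init__(self, name, address, phone):
        self.name = name
        self.address = address
        self.phone = phone

store = os.path.join(tempfile.mkdtemp(), "users")

with shelve.open(store) as db:  # persist the object directly
    db["user:1"] = User("Ada Lovelace", "1 Main St", "555-0100")

with shelve.open(store) as db:  # reopen: the object comes back intact
    ada = db["user:1"]

assert ada.name == "Ada Lovelace"
```

The point is the absence of any translation step: the shape of the object in the program and the shape of the stored data are one and the same, which is precisely the consistency problem ORMs exist to paper over.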
Past that, your options are currently limited without an object relational mapper, but it pays to understand the problem space so you can architect your content management solution appropriately, even should you need to keep your data in an RDBMS. Whether it be for something as simple as management of blog data or a large list of data, it pays to know that you have a bridge that can withstand all of the net's elements. Hopefully, next time you are talking with your consultant, client, design company or web team and you hear LAMP, you have a better idea of what it entails and how to apply your business needs and processes using LAMP if you need to. LAMP isn't always the answer, and it certainly isn't always the answer for content management solutions.

Syndicated 2010-08-30 19:47:00 from Christopher Warner » Advogato
http://www.advogato.org/person/zanee/diary/209.html
NAME
     vm_map_submap -- create a subordinate map

SYNOPSIS
     #include <sys/param.h>
     #include <vm/vm.h>
     #include <vm/vm_map.h>

     int
     vm_map_submap(vm_map_t map, vm_offset_t start, vm_offset_t end,
         vm_map_t sub_map);

DESCRIPTION
     The vm_map_submap() function marks the range bounded by start and end
     within the map map as being handled by a subordinate map sub_map. It is
     generally called by the kernel memory allocator.

IMPLEMENTATION NOTES
     This function is for internal use only. Both maps must exist. The range
     must have been created with vm_map_find(9) previously. No other
     operations may have been performed on this range before calling this
     function. Only the vm_fault() operation may be performed within this
     range after calling this function. To remove a submapping, one must
     first remove the range from the parent map, and then destroy the
     sub_map. This procedure is not recommended.

RETURN VALUES
     The vm_map_submap() function returns KERN_SUCCESS if successful.
     Otherwise, it returns KERN_INVALID_ARGUMENT if the caller requested
     copy-on-write flags, or if the range specified for the sub-map was out
     of range for the parent map, or if a NULL backing object was specified.

SEE ALSO
     vm_map(9), vm_map_find(9)

AUTHORS
     This manual page was written by Bruce M Simpson <bms@spc.org>.
http://manpages.ubuntu.com/manpages/precise/man9/vm_map_submap.9freebsd.html
Implementing React Native Responsive Design Part 2: Adapting

Building phone apps used to be simpler. There were a small number of phone sizes and an even smaller number of screen sizes to support. Your app always took up the whole screen, so developers tended to target particular screen sizes and exact pixel dimensions. As mobile developers we have been going through the same mental shift that web developers went through almost a decade ago. Apps need to be able to adapt to differing screen sizes and still look really good. For a long time, React Native could give you screen size but not the size of your app window. This allowed for some amount of responsiveness, but as the number of devices grew and multitasking options became available, apps really needed to know how big their window was. React Native came to the rescue with Dimensions.get('window') and the useWindowDimensions hook, which let the app know its exact shape and size. This enables us as developers to create layouts that take into consideration all the various ways our app could be displayed, and sets the stage for supporting an even wider array of device types.

Flexbox is your friend

Even though we can now know the exact size and shape of our app window, it is still more flexible to use Flexbox or percentages to lay out your app. The documentation shows how the various sizing options work. Say you have a header and a main content area. You can easily divide your screen into sections with the following code.

<View style={styles.container}>
  <View style={styles.header}>
    ...
  </View>
  <View style={styles.content}>
    ...
  </View>
</View>

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#fff',
    alignItems: 'stretch',
    justifyContent: 'space-between',
  },
  header: {
    flex: 0.2,
  },
  content: {
    flex: 0.8,
  },
});

With flexbox you can use any set of numbers that express the ratio of space given to each component.
Here I have used decimals that add up to 1, because that makes sense to me conceptually, but they could as easily have been written as 1 and 4, or 20 and 80. The React Native flex property is much like CSS's flex-basis, except that you can't use a string like '80%' as you might in CSS. Flexbox properties also let you do really convenient things like change the rendering direction and even rotate your content. Taking advantage of these properties would let you do something like turn a bottom tab bar on a phone into a left or right side nav bar for bigger window sizes.

Unfortunately flexbox doesn't work for text size. Finding a way to adaptively set font sizes is necessary to keep your content from looking sparse on large devices and crowded on smaller ones. You can use the PixelRatio and/or screen size data from useWindowDimensions to adaptively set your font sizes. The previous installment of this series talks a bit about this, and we can refine that approach by adding more context, which we will do later in this post.

Leveraging the useWindowDimensions hook

React Native provides the useWindowDimensions hook, which is the preferred way to get window size information. It replaces the Dimensions.addEventListener API and is a great way to be notified if your user rotates their device or opens a split screen view. Since it is a hook, it returns new values anytime the user does something to alter your app's display environment. The easiest way I've found to leverage useWindowDimensions is to add it at the top level of your app. Adding the results to your app state (Redux, MobX, Context or whatever) makes it easy for the components that need to care to find the sizing info they need. You can use this information to adjust your layout based on window size or orientation. You can make your adaptive code as simple or complex as your app needs and budget dictate.
import { StyleSheet, Text, View, useWindowDimensions } from 'react-native'

export default function App() {
  const {
    width,     // window width
    height,    // window height
    scale,     // same as PixelRatio.get()
    fontScale, // same as PixelRatio.getFontScale()
  } = useWindowDimensions()
  // stash these wherever you keep app state.
}

Using the onLayout event

While it is much more convenient to express your app components declaratively via styles, there are times when you need to know the exact dimensions of your component. For those occasions, the onLayout event is the tool to reach for. The onLayout event fires, and your callback runs, every time a render updates the placement or size of a component. You can use this information to calculate styles to pass to children of this parent component.

import { useState } from 'react'

const [width, setWidth] = useState(0)
const [height, setHeight] = useState(0)

const handleLayout = event => {
  const { nativeEvent: { layout: { width, height } } } = event
  setWidth(width)
  setHeight(height)
}

const style = width === 0 ? {} : { width: width - 20, height: height - 10 }

<View onLayout={handleLayout} style={style}>
  ...
</View>

Something like CSS Media Queries

You can build style objects for each screen size range. Then it's just a matter of picking the correct style object based on screen size. If you want to keep your code DRY, you can take advantage of the style array option and build a style object that is the default, and only put your overrides in the screen specific style objects. A very simple implementation could look something like this.
import { StyleSheet, Text, View, useWindowDimensions } from 'react-native';

const styles = StyleSheet.create({
  container: {
    flex: 1,
  },
  header: {
    flex: 1,
  },
  content: {
    flex: 4,
  },
});

const tablet = StyleSheet.create({
  header: {
    flex: 0.5,
  },
});

export default function App() {
  const window = useWindowDimensions();
  const headerContainerStyle = [styles.header, (window.width > 900) && tablet.header]
  return (
    <View style={styles.container}>
      <View style={headerContainerStyle}>
        <Text style={headerStyle}>Header - Not Scaled</Text>
      </View>
      <View style={styles.content}>
        ...
      </View>
    </View>
  )
}

This kind of solution would allow your app to respond to either orientation or device height and width parameters. You would want to build a utility to capture the criteria that matter to your app. Here's a sample module to do just that.

import { Dimensions } from 'react-native'

export default class Responsive {
  breakpoints = []
  match = null
  styles = []
  base = null

  constructor(breakpoints, styles, base) {
    this.breakpoints = breakpoints
    this.styles = styles
    this.base = base
  }

  comparator(el) {
    return this.match <= el
  }

  gather(label) {
    const styling = []
    if (this.base[label]) {
      styling.push(this.base[label])
    }
    let i = this.breakpoints.findIndex(this.comparator, this)
    if (i === -1) i = this.styles.length - 1
    const picked = this.styles[i]
    if (picked && picked[label]) {
      styling.push(picked[label])
    }
    return styling
  }

  stylePicker(label) {
    this.match = Dimensions.get('window').width
    return this.gather(label)
  }
}

// example usage:
const base = StyleSheet.create({ container: { flex: 1 }, ... })
const small = StyleSheet.create({ ... })
const medium = StyleSheet.create({ ... })
const large = StyleSheet.create({ ... })

const responsive = new Responsive([500, 800, 1024], [small, medium, large], base)

<View style={responsive.stylePicker('container')}>

Using screen widths as style breakpoints is the typical approach to responsive styling on the web, but you aren't limited to that.
This class could be extended to factor in orientation or any other criteria that make sense for your app. Here's an extension that lets you set styles based on orientation.

export default class ResponsiveOrientation extends Responsive {
  comparator(el) {
    return this.match === el
  }

  getOrientation(window) {
    return window.width < window.height ? 'portrait' : 'landscape';
  }

  stylePicker(label) {
    this.match = this.getOrientation(Dimensions.get('window'))
    return this.gather(label)
  }
}

// example usage:
const base = StyleSheet.create({ container: { flex: 1 }, ... })
const portrait = StyleSheet.create({ ... })
const landscape = StyleSheet.create({ ... })

const responsive = new ResponsiveOrientation(['portrait', 'landscape'], [portrait, landscape], base)

<View style={responsive.stylePicker('container')}>

Conditional Rendering

One other common practice for responsive layouts is to only include certain components for certain screen sizes. There are a couple ways to handle this. The first and most common is to conditionally include a component in the render tree. For example, if the header should not show when the app window is large and landscape, you can do something like the following:

export default function App() {
  const width = useWindowDimensions().width;
  return (
    <View style={responsive.stylePicker('container')}>
      {width < 1000 ? <Header /> : null}
      ...
    </View>
  );
}

React Native also supports the display: 'none' property, so you can simply hide the component using styles.

const baseStyles = StyleSheet.create({
  name: {
    color: 'black',
  },
  caret: {
  },
})

const stylesSM = StyleSheet.create({
  name: {
    display: 'none',
  },
  caret: {
    display: 'none',
  },
});

const stylesMD = StyleSheet.create({})
const stylesLG = StyleSheet.create({})

<Text style={responsive.stylePicker('name')}>Kilroy was here</Text>

We can put this all together to make a much more responsive React Native app. Please check out the Snack with all the code for this post.
You can pop out the preview and play with the window size to see how the layout adapts. To see it on device, you will have to download the Snack and view it in whatever devices and emulators you have handy. To whet your appetite, here are a couple of images showing the various ways the layout adapts.

[Screenshots: the layout in portrait and in landscape]

Links

Before embarking on this journey, I wandered around the internet a while looking to see how others have approached this. I didn't find anything that did everything I wanted, but I did find a bunch of different approaches that might be useful.

- How to Build Responsive React-Native Apps?
- How to create responsive layouts in React Native
- How To Make Your React Native Apps Responsive
- My React Native Stack After 1 Year
- Handling responsive layouts in React Native apps

While I was researching this topic I also ran across a couple of library options that have some really nice features related to this topic. Check out react-native-responsive-ui and Restyle (by Shopify) as more nicely packaged alternatives to some of the concepts presented here.
Part I True/False (40 points, 2 points each)

1. If you have created an exception class, you can define other exception classes extending the definition of the exception class you created.
2. Using the mechanism of inheritance, every public member of the class Object can be overridden and/or invoked by every object of any class type.
3. If a class is declared final, then no other class can be derived from this class.
4. The class Throwable contains constructor(s).
5. You can instantiate an object of a subclass of an abstract class, but only if the subclass gives the definitions of all the abstract methods of the superclass.
6. Every program with a try block must end with a finally block.
7. If there is a finally block after the last catch block, the finally block always executes.
8. Making a reference to an object that has not yet been instantiated would throw an exception from the NullPointerException class.
9. If you have created an exception class, you can define other exception classes extending the definition of the exception class you created.
10. The layout manager FlowLayout places components in the container from left to right until no more items can be placed.
11. The class Container is the superclass of all the classes designed to provide a GUI.
12. JTextAreas can be used to display multiple lines of text.
13. The StringIndexOutOfBoundsException could be thrown by the method parseInt of the class Integer.
14. The class RuntimeException is part of the package java.io.
15. If an exception occurs in a try block and that exception is caught by a catch block, then the remaining catch blocks associated with that try block are ignored.
16. The order in which catch blocks are placed in a program has no impact on which catch block is executed.
17. The methods getMessage and printStackTrace are private methods of the class Throwable.
18. If a formal parameter is a variable of a primitive data type, then after copying the value of the actual parameter, there is no connection between the formal parameter and the actual parameter.
19. If a member of a class is a method, it can (directly) access any member of the class.
20. You must always use the reserved word super to use a method from the superclass in the subclass.

Part II Multiple Choice (30 points, 2 points each)

1. An abstract class can contain ____.
   a. only abstract methods
   b. only non-abstract methods
   c. abstract and non-abstract methods
   d. nothing

2. An abstract method ____.
   a. is any method in the abstract class
   b. cannot be inherited
   c. has no body
   d. is found in a subclass and overrides methods in a superclass using the reserved word abstract

3. If a class implements an interface, it must ____.
   a. provide definitions for each of the methods of the interface
   b. override all constants from the interface
   c. rename all the methods in the interface
   d. override all variables from the interface

4. Which of the following is the default layout manager for a Java application?
   a. null
   b. GridLayout
   c. FlowLayout
   d. BorderLayout

5. Java will automatically define a constructor
   a. if the programmer does not define a default constructor for a class
   b. if the programmer does not define any constructors for a class
   c. if the program refers to a constructor with no parameters
   d. for every class defined in a program

6. How many finally blocks can there be in a try/catch structure?
   a. There must be 1.
   b. There can be 1 following each catch block.
   c. There can be 0 or 1 following the last catch block.
   d. There is no limit to the number of finally blocks following the last catch block.

7. What happens in a method if there is an exception thrown in a try block but there is no catch block, only a finally block, following the try block?
   a. The program ignores the exception.
   b. The program will not compile without a complete try/catch structure.
   c. The program terminates immediately.
   d. The program throws an exception and proceeds to execute the finally block.

8. An exception that can be analyzed by the compiler is a(n) ____.
   a. unchecked exception
   b. checked exception
   c. compiler exception
   d. execution exception

Questions 9 - 12 are based on the following code:

import java.util.*;

public class ExceptionExample1 {
    static Scanner console = new Scanner(System.in);

    public static void main(String[] args) {
        int dividend, divisor, quotient;

        try {
            System.out.print("Enter dividend: ");
            dividend = console.nextInt();
            System.out.println();
            System.out.print("Enter divisor: ");
            divisor = console.nextInt();
            System.out.println();
            quotient = dividend / divisor;
            System.out.println("quotient = " + quotient);
        }
        catch (ArithmeticException aeRef) {
            System.out.println("Exception" + aeRef.toString());
        }
        catch (InputMismatchException imeRef) {
            System.out.println("Exception " + imeRef.toString());
        }
        catch (IOException ioeRef) {
            System.out.println("Exception " + ioeRef.toString());
        }
    }
}

9. Which of the following will cause the first exception to occur in the code above?
   a. if the divisor is zero
   b. if the dividend is zero
   c. if the quotient is zero
   d. This code will not compile so an exception cannot be triggered.

10. Which of the following inputs would be caught by the second catch block in the program above?
   a. 0
   b. 10
   c. h3
   d. -1

11. Which method throws the second exception in the code above?
   a. nextInt
   b. toString
   c. println
   d. nextLine

12. Which of the following methods prints a list of the methods that were called before the exception was thrown?
   a. getMessage()
   b. printCalledMethods()
   c. printStackTrace()
   d. traceMethodStack()

13. How would the three classes, Undergraduate, Graduate, and Student interrelate if we are determined to use inheritance in the class definitions?
   a. Student would be a subclass of Undergraduate, which would be a subclass of Graduate.
   b. Graduate would be a subclass of Undergraduate, which would be a subclass of Student.
   c. Graduate would be a subclass of Student, and Undergraduate would be a subclass of Graduate.
   d. Undergraduate and Graduate would both be subclasses of Student.

14. Which of the following types of methods cannot be declared as abstract?
   a. Private methods
   b. Static methods
   c. a and b
   d. neither a nor b

15. What happens when an expression uses == to compare two string variables?
   a. The value of the expression will be true if both strings have the same characters in them.
   b. The value of the expression will be true if both string variables refer to the same object.
   c. The expression will not be correct syntax and the program will not compile.
   d. A run-time error will occur.
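For anyone reviewing the last question, here is a small runnable illustration (not part of the original exam) of reference equality versus value equality for strings:

```java
public class StringCompare {
    public static void main(String[] args) {
        String a = "hello";
        String b = "hello";              // both literals refer to the same interned object
        String c = new String("hello");  // a distinct object with the same characters

        System.out.println(a == b);      // true  - same object
        System.out.println(a == c);      // false - different objects
        System.out.println(a.equals(c)); // true  - same characters
    }
}
```

The == operator compares references, while equals compares the characters the strings contain.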
Over the last six weeks, DEV has focused on improving its search engine optimization and conforming the site to best practices. As the team member running point on this project, I've immersed myself in all things SEO. This article is an attempt to distill and un-silo my knowledge—we will briefly discuss JSON-LD versus traditional implementations that leverage meta tags and then explore how DEV has migrated to using JSON-LD for our site.

JSON- What?

JSON-LD stands for JavaScript Object Notation for Linked Data. JSON-LD makes it simple to structure the data on a site for web crawlers by disambiguating elements, thus making the webpage using it more indexable, in turn bolstering the site's SEO. The JSON-LD for a page is generally found within a <script> tag in a page's <head> tag, though finding the data within a <body> tag is not uncommon. Placing structured data within the <head> tag is considered best practice, though, since crawlers generally begin searching for metadata in <head> tags.

What makes JSON-LD so usable is its syntax, which makes it easily readable by humans and machines. The linked data format consists of key-value pairs containing Schema.org vocabulary, a shared vocabulary for structuring data, so machines can quickly interpret the content of the page. Leveraging the Schema.org vocabulary makes it possible to describe a litany of item types and item properties, with varying detail—types can inherit from parent properties and other types.

There are a few basic attributes that make up the general JSON-LD structure, aside from the essential <script> tags: @context, @type, and the attribute-value pairs for the given object. All of the essential elements that make up a basic JSON-LD structure, aside from the <script> tags, can be found wrapped in double quotation marks ("") and ending with a comma. The <script> tags containing the structured data will always specify its type as JSON-LD: <script type="application/ld+json">.
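Putting those attributes together, a minimal sketch of a complete JSON-LD block for a blog post might look like the following. It is illustrative only: the headline, author name, and date are placeholder values, while BlogPosting, headline, author, and datePublished are standard Schema.org types and properties.

```
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "An Example Article",
  "author": {
    "@type": "Person",
    "name": "A. N. Author"
  },
  "datePublished": "2019-01-01"
}
</script>
```

Every block follows this same shape: a context, a type, and then whatever properties that type supports.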
Similar to the <script> tags, the @context attribute should always specify Schema.org: "@context": "Schema.org",. Unlike the other essential attributes that make up the data structure, @type and the structure's attribute-value pairs change depending on the item's type and properties. For our visual learners, I have included an example below of what the structured data for a DEV Organization looks like:

<script type="application/ld+json">
{
  "@context": "",
  "@type": "Organization",
  "mainEntityOfPage": {
    "@type": "WebPage",
    "@id": URL.organization(@organization)
  },
  "url": URL.organization(@organization),
  "image": ProfileImage.new(@organization).get(width: 320),
  "name": @organization.name,
  "description": @organization.summary.presence || "404 bio not found"
}
</script>

Looking at the above structure, you'll see the necessary attributes that make up the skeleton of a JSON-LD structure: "<script type="application/ld+json">", "@context", "@type". There are some additional attribute-value pairs pointing to necessary information that we at DEV have aimed to disambiguate for search crawlers, like Google's, as well.

DEV- How?

Now that we've covered the basics of JSON-LD and the vocabulary that it uses, it's time to talk briefly about how DEV uses JSON-LD to structure its data and boost its SEO. Prior to switching to JSON-LD in the last six weeks, the DEV codebase previously relied on <meta> tags in addition to specifying itemTypes and itemProps. While this approach works, it wasn't working as well as JSON-LD could, and we weren't feeling satisfied with its results. With our SEO plateauing, it was time to make our data more structured. The solution? Move away from using <meta> tags, itemTypes, and itemProps and migrate towards structuring our data using JSON-LD.

The switch to JSON-LD was nice because it is well documented, there are many examples to draw inspiration from, and there are useful testing tools, like Google's own Google Structured Data Testing Tool.
The same is true for Schema.org—the vocabulary's documentation is easy to parse, and item type and property examples are plentiful. This being said, implementing JSON-LD was a learning experience for the entire team (as is any new coding endeavor!). Through our joint effort, we were able to structure the data for many of our major pages and our most important data in only a couple of weeks. Currently, the data for Article show pages, User profile pages, Organization profile pages, and Video show pages are structured using JSON-LD and the Schema.org vocabulary. This snippet shows what the structured data for a User's profile page looks like.

Note: For brevity, I've included most, but not all, of the code that makes up the structured data within the Stories::Controller.

In the Stories::Controller:

def set_user_json_ld
  @user_json_ld = {
    "@context": "",
    "@type": "Person",
    "mainEntityOfPage": {
      "@type": "WebPage",
      "@id": URL.user(@user)
    },
    "url": URL.user(@user),
    "sameAs": [],
    "image": ProfileImage.new(@user).get(width: 320),
    "name": @user.name,
    "email": "",
    "jobTitle": "",
    "description": @user.summary.presence || "404 bio not found",
    "disambiguatingDescription": [],
    "worksFor": [
      { "@type": "Organization" }
    ],
    "alumniOf": ""
  }
end

And in the users/show template:

<% unless internal_navigation? || user_signed_in? %>
  <script type="application/ld+json">
    <%= @user_json_ld.to_json.html_safe %>
  </script>
<% end %>

I should also note that in order to make our structured data reusable and to keep our views clean, we opted to extract all of the logic for our structured data out into the set_user_json_ld method within the Stories::Controller—we found that this implementation works best for us.

Since switching to JSON-LD, DEV has seen a dramatic increase in its SEO. Through disambiguating elements and organization, we were able to boost our SEO by making it easier for Google and its crawlers to navigate our site and its pertinent information.
How do you plan on implementing JSON-LD to improve your SEO?

Discussion

- I had no idea about JSON-LD but I plan to use it now! My SEO knowledge seems so out of date. TY for the informative post.
- Oh my goodness this is going to be easy to integrate into my new dev environment.
- You are most welcome, Steve! Here's a great resource containing a ton of JSON-LD examples that you might find useful: jsonld.com/ Best of luck in integrating it! 🎉
- Oh that is a good site! I worked JSON-LD into my site in about 1 hour. It is glorious.
- Nice overview. I've recently added JSON-LD too, but it's at the bottom of my code and still seems to show up just fine on Google. Is there any documentation indicating that it really should be in the head preferably?
- Oh actually we just put it in the body as well.... Instructions say head. As mentioned in the post
Changes
=======

Except where noted, all changes made by Daisuke Maki

0.3105 2014-04-29T22:31:01Z
    * Fix tests for timestamps displaying [+-]0000 [] (eserte)

0.3104 2014-03-07T21:24:58Z
    * Remove stray Makefile

0.3103 2014-02-22T08:31:41Z
    * Iterate over hash using keys() instead of each() (reported by Dave O'Brien)
    * Use Minilla for development

0.3102 - 14 Sep 2011
    * Allow uppercase letters as first character of namespace prefix [] (arc)

0.3101 - 06 Jul 2011
    * Fix perl 5.14 qw() deprecation warnings (theory)
    * Fix some edge case errors caused by isa / UNIVERSAL::isa
    * Fix silly error detecting RSS 1.0

0.3100 - 22 Jun 2010
    * Bunch of changes by David Wheeler, granting a minor version++ :)
      Note that changes about stringifying child elements may affect some
      users. Drop me a line if you have a problem.
    - Add support for xml:base in RSS 2.0. (theory)
    - Parse and include items in RSS 2.0 feeds without a title or description. (theory)
    - The encoding() method now returns the encoding of a parsed feed. (theory)
    - Parser recognizes elements that should not have children and stringifies
      the children of such elements when it finds children. (theory)

0.3005 - 3 Jun 2010
    - rt #58067 Document create_libxml() (theory)
    - rt #58068 Add support for RSS 0.92 (theory)

0.3004 - 20 Jan 2009
    - rt #42536. Some files were removed from the distro for the time being.
      (I don't have the time to re-create these for now -- patch submissions
      are most welcome)

0.3003 - 26 Nov 2008
    - Try not to die if we encountered a broken <image> tag in RSS 2.0
    - We won't test RSS 0.9x. This is due to the nature of libxml wanting to
      validate the DTD, when the namespaces have been changed (see) We could
      use a hackish fix, but we won't, as it may change the behavior for this
      widely used format (which is a surprisingly large number).
    - So instead, we'll just silence the tests, to stop odd failures from occurring
    - For practical purposes, we use the old namespace unless asked otherwise
      via the XML_RSS_LIBXML_USE_NEW_RSS09 environment variable

0.3002 - 08 Oct 2007
    - Apply fix from AAR (rt #29683) to make things work with XML::LibXML >= 1.64

0.3001 - 09 May 2007
    - Fix Makefile.PL dependency
    - Remove stray debug output

0.30 - 08 May 2007
    - Move to Module::Install
    - Tweak tests.

0.30_02 - 23 Mar 2007
    - Make things more compatible with t/items-are-0.t

0.30_01 - 14 Mar 2007
    - BEWARE! MAJOR CHANGE IN CODE!
    - Compatibility with XML::RSS-1.29_02's test suite.
    - Completely redo the internal structure in a saner manner.

0.23 - 05 Jul 2006
    - Apply multiple enclosure patch from SERGEYCHE (rt #20285)
      This allows you to *generate* RSS with multiple enclosures

0.22 - 28 Jun 2006
    - Remove stray files and debug statements (rt #19939)

0.21 - 31 May 2006
    - Repository blunder messes up the distro. fixed.
      Reported by Tatsuhiko Miyagawa

0.20 - 14 May 2006
    - Set $rss->{version} for compatibility.
    - As a result, we no longer set or depend on $rss->{_internal}{version}.
      If you saw it and used it, then stop doing that ;)

0.19 - 17 Apr 2006
    - Fixed bug where $rss->channel('title') and such would give you the
      non-UTF8 representation when another encoding is specified in the RSS
      document (reported by Tatsuhiko Miyagawa)

0.18 - 06 Mar 2006
    - Fixed bug where extra modules were not included in the output string.
      (reported by Tatsuhiko Miyagawa)

0.17 - 05 Mar 2006
    - s/getValue/getData/g (reported by Tatsuhiko Miyagawa)
    - Add caveat: namespaced attributes aren't parsed correctly

0.16 - 28 Feb 2006
    - Fix namespace support for RSS 2.0. Reported by various people.

0.15 - 06 Jan 2006
    - Fix cpan #16748, and now we can parse RSS 0.91. Patch provided by aar@cpan.org
    - Add tests for 0.91

0.14 - 20 Nov 2005
    - Bah, stupid POD mistakes.
      No code change

0.13 - 18 Nov 2005
    - XML::RSS::LibXML wasn't conforming to the XML::RSS interface on
      channel(), image() and textinput() methods. Reported by Taro Minowa.
    - Make POD tests run only on disttest

0.12 - 09 Nov 2005
    - Ugh, need to use Test::Pod::Coverage more carefully.
      Reported by various people.

0.11 - 19 Oct 2005
    - Most files were mysteriously not included in the previous release.

0.10 - 18 Oct 2005
    - Mainly a kwalitee improvement release. Added bunch of POD and tests
    - Fix: Allow XML::RSS::LibXML constructor to accept encoding. Currently
      this just controls the output encoding, not the internal representation

0.09 - 17 Aug 2005
    - Various fixes to make $rss->parse($rss->as_string) work. However, it
      turns out that since XML::RSS doesn't parse <-> generate RSS in a way
      that allows 100% of the cases to work, I've decided to stop it at a
      "good enough" state.
    - taxo: parses correctly
    - use exists() to prevent autovivification

0.08 - 17 Aug 2005 ("Insanity" Release)
    - In a fit of insanity, I've implemented RSS generation code. Currently
      RSS version 0.9, 1.0, and 2.0 are supported.
    - You can now parse RSS, serialize it via as_string(), parse it again,
      and get (almost) the same structure back.
    - Separated out RSS parsing/generation code from main module. These
      modules are loaded as necessary, or by demand.
    - Updated benchmark.

0.07 - 15 Aug 2005
    - Document MagicElements in the main docs.
    - Create XPathContext at parse time, and call registerNs() only then.
    - Update benchmark for fairness.
    - Changed code to use eh, cleaner Test::More code
    - Removed unused code

0.06 - 10 Aug 2005 ("Magic Is In The Air" Release)
    - Introduce MagicElement.pm. This allows us to parse RSS elements that
      have attributes without sacrificing the interface (hopefully).
      Inspired by patch from Taro Minowa.

0.05 - 04 Jul 2005 ("I'm so dumb" Release)
    - Make $item->{$namespace_uri}->{$tag} work. Patch by Naoya Ito.
    - Add corresponding tests. Patch by Naoya Ito.
0.04 - 21 Jun 2005
    - No code change.
    - Clarify compatibility issues.
    - Fix typos

0.03 - 21 Jun 2005
    - channel() fix by Naoya Ito (compatibility with XML::RSS)

0.02 - 21 Jun 2005
    - Doc tweaks.
    - This be 0.02. Remember to read the backward incompatible changes below.

0.01_01 - 20 Jun 2005
    - Typo in Build.PL/Makefile.PL (Tatsuhiko Miyagawa)

    **** Backwards Incompatible Change ****
    - Make namespace handling the same as XML::RSS - e.g., <content:encoded>
      is now parsed as $item->{content}->{encoded}.
      (thanks to Tatsuhiko Miyagawa for suggestions)
    - Remove add_parse_context(), as it is no longer necessary.
    - Parsing is now done only on nodes that are immediately under <channel>
      and <item>. This is not correct spec-wise, but it does the job for most
      of the RSS out there.

0.01 - 14 Jun 2005
    - Seems like some people just think about the same thing. Tatsuhiko
      Miyagawa caught me doing some of the same thing he was doing in an
      unreleased module, so merged some features from his :)
    - Added add_module() (Tatsuhiko Miyagawa)
    - Added as_string() (Tatsuhiko Miyagawa)
    - Added add_parse_context().
    - Added fields to be parsed by default.
    - Changed internal representation a bit.

0.01_02 - 14 June 2005
    - Doc screw up

0.01_01 - 14 June 2005
    - Initial CPAN release
Description

If I have a standard gradle project setup (attached):

build.gradle
src/
    main/
        java/
            test/
                AbstractBase.java
                SamePackageJavaBase.java
        groovy/
            test/
                GroovyBase.groovy
                GroovyBaseWithSuper.groovy

I have a single abstract base class:

AbstractBase.java

package test;

public abstract class AbstractBase {
    int base;

    public AbstractBase(int base) {
        this.base = base;
    }

    public abstract int mult(int n);
}

And a class GroovyBase that extends this class:

GroovyBase.groovy

package test

public class GroovyBase extends AbstractBase {
    public GroovyBase(int n) {
        super(n)
    }

    public int mult(int n) {
        n * base
    }

    static main(args) {
        println new GroovyBase(10).mult(3)
    }
}

When running this (using the Gradle script in the attachment), I get the exception:

Exception in thread "main" groovy.lang.MissingPropertyException: No such property: base for class: test.GroovyBase
Possible solutions: class

Changing the line n * base to n * super.base, or making the base field public in the AbstractBase class, makes it work. It's as if the classes are considered to be in different packages for some things, but not for others. Thinking about it, I'm not sure if this is a Gradle bug or a Groovy cross compiler one.

To run the tests, unpack the attachment, and run:

# Test the above failing example
gradle -Pmain=test.GroovyBase

# Test the addition of super.base
gradle -Pmain=test.GroovyBaseWithSuper

# Test the Java extension of the Abstract class
gradle -Pmain=test.SamePackageJavaBase

You can change the groovy version from 1.8.6 by passing (for example) -Pgroovy=2.0.0-beta-2
NestJS is a JavaScript framework that can be used to build scalable and dynamic server side applications very quickly and easily. NestJS is built with TypeScript and supports TypeScript out of the box. It feels very much like using Angular, but on the backend, since the project was heavily influenced by Angular. NestJS enforces a certain application structure, and this is one of the benefits of using it. NestJS combines the best of OOP and functional programming, and it also exploits the benefits of using TypeScript. It is built upon popular libraries that we already use to build NodeJS server side applications, like Express and cors. NestJS is a high level abstraction built on these simple libraries; much thought has been put into the framework's development, and some of the obvious benefits that come from using it include:

- Reduction of unnecessary code
- Fast development time
- Ease with testing apps

No matter how cool we say JavaScript really is, there are some pitfalls that come with using JavaScript, especially for server side apps. There is often the problem of file and module structure, lack of types, too much duplicated code, and difficulty testing your app. All of these are problems familiar to some developers, and the goal of using NestJS is to provide an elegant solution to them.

NestJS was built to give projects a certain level of structure. Junior developers often struggle with choosing the right project structure and with handling application dependencies and other third party plugins. NestJS is the right tool for the junior developer, or anyone who has trouble adopting a particular application structure. It is also a good solution to the aforementioned problems, and it makes it incredibly easy for us to test our applications.
Installation

To install NestJS you have to ensure that you have NodeJS installed on your PC; then you can run:

npm i -g @nestjs/cli

This installs the very capable NestJS CLI, which comes baked in with commands that allow us to spin up new projects, plus lots of other utility features we will need when building applications with NestJS. We can scaffold a new NestJS project by running:

nest new project-name

This will scaffold a project for us with some basic code; you can now proceed to open up the project in your favorite text editor. The two commands we just ran are:

npm i -g @nestjs/cli
nest new project_name

Alternatively, you can clone the starter template with git:

git clone project_name;

Navigate into the newly created folder, and install the dependencies.

cd project_name;
npm install;

Folder Structure

A NestJS project will often possess the following folder structure, depending on which version you are using; as of the time of this writing, NestJS is at version 9.0.5. We will only concern ourselves with the src folder; that's the only folder we will be working with most of the time, and it's where our application source code is stored.

src/
 |------ app.controller.ts
 |------ app.service.ts
 |------ app.module.ts
 |------ main.ts

main.ts

This file contains the necessary code for bootstrapping and starting our application. It imports NestFactory and the main module for our application, creates a server app for us, and listens on a specified port for incoming requests.
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000, () => console.log(`App started on PORT 3000`));
}
bootstrap();

Calling the create() method of NestFactory builds an HTTP application for us from the application's main module. The app returned by this method satisfies the NestExpressApplication interface, and this interface exposes some useful methods for us. We start the HTTP server by calling app.listen(), as we would with an Express app. If it isn't apparent yet, you can now see the benefit of working with NestJS: enabling CORS on our application is as simple as calling app.enableCors() on it, whereas ordinarily it would require us to first install the cors module and then use it as middleware. To create our HTTP app/server we need to pass our application's main module as an argument to the create() method of NestFactory, so let's look at app.module.ts below.

app.module.ts

A module in NestJS is simply a data structure for managing our application's dependencies. Modules are used by Nest to organize the application structure into scopes. Controllers and Providers are scoped by the module they are declared in. Modules and their classes (Controllers and Providers) form a graph that determines how Nest performs dependency resolution. For a class to serve as a NestJS module, it should be decorated with the @Module() decorator. Let's examine the contents of app.module.ts.
import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';

@Module({
  imports: [],
  controllers: [AppController],
  providers: [AppService]
})
export class AppModule {}

The imports array is responsible for the other modules that this module depends on. As our application grows we will have other modules, because NestJS suggests that each feature in an application should have its own module rather than polluting the global module namespace, and the imports array handles those other modules. The controllers array registers the controllers that we create in our application, while the providers array registers the services that we create. The module class also scopes the controllers and providers, making them available only in the module they are registered with. We are going to have a brief overview of controllers and services.

app.service.ts

A service in NestJS is similar in concept to a service in Angular: a service is just a class that encapsulates helper methods in our application. We define functions that help get certain things done; in this example we define only one method on app.service.ts, which returns 'Hello World!'. Another name for a service is a provider. Let's inspect our app.service.ts file.

import { Injectable } from '@nestjs/common';

@Injectable()
export class AppService {
  getHello(): string {
    return 'Hello World!';
  }
}

For a class to serve as a provider, it should be decorated with @Injectable(), which is exported by @nestjs/common; we can then proceed to declaring the methods on the class that we will use in our code.

app.controller.ts

A controller is just another fancy name for a route handler. A controller will define a method that is usually attached to a route, and whenever there is a request to the server that matches a particular route, the controller will call the function that is attached to that route.
import { Controller, Get } from '@nestjs/common';
import { AppService } from './app.service';

@Controller()
export class AppController {
  constructor(private readonly appService: AppService) {}

  @Get()
  getHello(): string {
    return this.appService.getHello();
  }
}

We already established that NestJS makes use of dependency injection; in the above controller we inject the AppService provider so we can use the methods declared on it. For a class to serve as a controller, it should be decorated with the @Controller() decorator as demonstrated above. The decorator can accept a string as an argument, and that string will serve as a base route for the controller. HTTP verbs can be applied as decorators to the functions that will process an incoming request. In the above example, a @Get() decorator, which matches the HTTP verb GET, is applied to the getHello function; thus, whenever there is a GET request to the server, the getHello function is called as the handler for that route.

The decorators that serve as HTTP verbs also accept a string as an argument, which serves as a secondary path to match after the one defined in @Controller(). In the above example, the base route is / because no argument is passed into the @Controller() decorator, and the route for the @Get() handler is also / because no argument is passed to the @Get() decorator attached to the getHello function. That is why a request made to the server will return 'Hello World!'.

Starting the app

To start the server, we simply run:

npm start

The NestJS CLI, which you have access to if you installed it with npm i -g @nestjs/cli, will bootstrap and start the application for us. To start the server in development mode, which enables hot reload, we can run:

npm run start:dev

and any changes we make while the server is running locally will take effect.
In future articles in this series we will take our time to explore NestJS providers, controllers, modules, and lots of other features of NestJS. That is it for today; I hope you enjoyed this and found it useful. Please leave a comment down below with your thoughts on or experience with NestJS, and feel free to add anything you feel I left out of this introduction.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/kalashin1/getting-started-with-nestjs-1p1d
Friday Fun - Late Edition : Tiny Neopixels

I got some tiny neopixels... they must be tested! Recently I purchased some really small neopixels from AliExpress, as I just had to have some really small neopixels, but are they any good?

YOU made this post!

Physical Characteristics

The entire string of pixels measures approx 6 metres in length, and each pixel is spaced 12 cm apart. There are 50 pixels in the string. I chose the crystal-clear wire option, but there are others, including black. Yes, I have a slightly crappy USB microscope so I can do pics like this. And yes, that is Blu Tack holding the wires in place. Each pixel is housed in a clear resin / hot-glue case which provides some mechanical strain relief for the wires that go into and out of each pixel.

Blinkies! pic.twitter.com/rvo9Q4LJuQ — biglesp (@biglesp) January 16, 2020

Connections

The wire is thin and fragile; there, I said it. The wire is encased in a foil / fabric cover which is prone to friction, and over time it will degrade. So these neopixels are not for use in any kinetic / moving / wearable projects, as the friction will cause the pixels to short. Best case, they will simply break; worst case, they will break your microcontroller. To connect the pixels to a board, use the female connector. Looking at the connector face on, with the tab / clip on the top, the pin out is.

Teeny tiny neopixels have arrived from China. I am really impressed with them and expect to buy more for projects! pic.twitter.com/vu5y5svW5X — biglesp (@biglesp) January 16, 2020

Testing the pixels

To test the pixels I chose the humble Arduino, specifically the 4duino pro, which is my go-to Arduino board. Using Adafruit's code, I remixed a quick test script to change each pixel to red, green, and blue, one after another. The power and GND connections were provided by the Arduino, which is not best practice but OK for a quick test. The data pin used was pin 6.
If you do not have the Adafruit NeoPixel library installed, go to Sketch >> Include Library >> Manage Libraries (or press CTRL + SHIFT + I) and search for Adafruit NeoPixel, which at the time of writing is at version 1.1.8. Here is my code.

Arduino Code

#include <Adafruit_NeoPixel.h>
#ifdef __AVR__
#include <avr/power.h>
#endif

#define PIN 6

Adafruit_NeoPixel strip = Adafruit_NeoPixel(60, PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  strip.begin();
  strip.setBrightness(50);
  strip.show(); // Initialize all pixels to 'off'
}

void loop() {
  // Some example procedures showing how to display to the pixels:
  colorWipe(strip.Color(255, 0, 0), 50); // Red
  delay(2);
  colorWipe(strip.Color(0, 255, 0), 50); // Green
  delay(2);
  colorWipe(strip.Color(0, 0, 255), 50); // Blue
  delay(2);
}

// Fill the dots one after the other with a color
void colorWipe(uint32_t c, uint8_t wait) {
  for(uint16_t i=0; i<strip.numPixels(); i++) {
    strip.setPixelColor(i, c);
    strip.show();
    delay(wait);
  }
}

I flashed the code onto the Arduino and it ran, but there were a few issues, and chief among them was glitching!

A glitch?

Yeah, every so often the colours will flash, either the whole string or just a section, and this can happen when they are at rest or in motion.

Does it work with {insert board here}

I did another test with a micro:bit and, yeah, it worked. I used BIT:170, a micro:bit-to-breadboard breakout board from 4tronix, to make the connections. And yes, 50 pixels is way too much to be connected for a long time on a micro:bit, so don't do it! Here is the code that I used; note that you will need to install the Adafruit NeoPixel extension to make this work.

Conclusions

They are indeed tiny pixels, but with this comes fragility. For static displays they will do a good job, but for wearable projects I would avoid them, as the shorting / glitching issue will cause many problems.
https://bigl.es/friday-fun-tiny-pixels/
Here is Part 1 of How to build an Image classifier Robot using Raspberry Pi, with Deep Learning.

Image classifier Robot Part 2: Python code

Assuming you use the Pi command line; nano is my favourite editor.

$ sudo nano classify.py

Now we will import the necessary packages. The Keras library has pre-trained models available through its applications module. The 'subprocess' library is used for running command-line programs from a Python file; we use it to run pico2wave and omxplayer for the robot's speech.

# import the necessary packages
from keras.applications import ResNet50
import tensorflow as tf
'''
#Uncomment in case you would like to try another pretrained model
#from keras.applications import InceptionV3
#from keras.applications import Xception # TensorFlow ONLY
#from keras.applications import VGG16
#from keras.applications import VGG19
#from keras.applications import inception_resnet_v2
'''
from keras.applications import imagenet_utils
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
import numpy as np
import h5py
import subprocess

Time Decorator

It's easy to check the run time of your code by using a time decorator. Then just put @fn_timer before any function you would like to time.

#Declare Time Decorator
import time
from functools import wraps

def fn_timer(function):
    @wraps(function)
    def function_timer(*args, **kwargs):
        t0 = time.time()
        result = function(*args, **kwargs)
        t1 = time.time()
        print ("Total time running : %s seconds" % (str(t1-t0)))
        return result
    return function_timer

Create model

We start by defining our model, with its weights set to 'imagenet'.

model = ResNet50(weights="imagenet")

Here we declare our classify() function. ResNet50 takes an image input shape of 224 x 224 pixels; this may vary if you use different models, but mostly they are 224x224 or 299x299. Here is the trick: you should declare the model outside the classify() function, because if you put it in the function, it will be re-created inside the loop.
Every time you call the function, you'll have to wait for the model (or graph, in TensorFlow) to be created, and that may cost a minute on a Raspberry Pi. So we just create the model once, when this file is imported into our main file; it will be loaded into memory and good to go. You can try it yourself and see how much slower it would be if you put the model inside the classify() function.

@fn_timer
def classify():
    inputShape = (224, 224)
    preprocess = imagenet_utils.preprocess_input
    image_path = '/dev/shm/mjpeg/cam.jpg'
    image = load_img(image_path, target_size=inputShape)
    image = img_to_array(image)
    image = np.expand_dims(image, axis=0)
    image = preprocess(image)
    .
    .

The image path for the RPi Cam Web Interface is '/dev/shm/mjpeg/cam.jpg', but you can change it to any path, or run classify() on any static picture file. Then we predict the image class by calling model.predict and decoding the result.

    .
    .
    preds = model.predict(image)
    P = imagenet_utils.decode_predictions(preds)
    .
    .

Then we loop over the predictions and display the rank-5 predictions + probabilities in our terminal.

    .
    .
    for (i, (imagenetID, label, prob)) in enumerate(P[0]):
        output = ("{}. {}: {:.2f}%".format(i + 1, label, prob * 100))
        print(output)
    .
    .

Now our classify() function is nearly finished. You can test your code by changing image_path to point to any image file downloaded from the internet. To download an image file, just Google for it and use wget http://(file URL location)

Adding Speech

Now we will bring a voice to our Image classifier Robot. I got the idea for the phrase "I'm thinking..." from Lukas's blog. We will use our thinking.wav file from the previous part. Feel free to change the words to anything you want. We call it using the 'subprocess' library; just put this above the prediction line.

    .
    .
    # classify the image
    print("I'm thinking...")
    subprocess.call(['omxplayer','think.wav'])
    preds = model.predict(image)
    P = imagenet_utils.decode_predictions(preds)
    .
    .
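As a quick sanity check of the rank-5 printing loop shown earlier, here is a pure-Python sketch that formats hypothetical decoded predictions without loading Keras (the ImageNet IDs, labels, and probabilities below are made up for illustration):

```python
# hypothetical decoded predictions: (imagenetID, label, probability)
P = [[("n02123045", "tabby", 0.42),
      ("n02123159", "tiger_cat", 0.31),
      ("n02124075", "Egyptian_cat", 0.12)]]

lines = []
for (i, (imagenetID, label, prob)) in enumerate(P[0]):
    # same format string as in classify(): rank, label, percentage
    output = "{}. {}: {:.2f}%".format(i + 1, label, prob * 100)
    lines.append(output)
    print(output)
```

Running this prints "1. tabby: 42.00%" and so on, which is exactly the shape of output you should see in the terminal when the real model runs.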
We also want the robot to speak which class it thinks the image most likely is, so we take the winner from the object 'P', convert it to a temporary wav file, and let it speak.

    .
    .
    winner = P[0][0][1]
    speak = ("I think I see {}".format(winner))
    subprocess.call(['pico2wave','-w','/tmp/win.wav',speak])
    subprocess.call(['omxplayer','/tmp/win.wav'])
    subprocess.call(['rm','/tmp/win.wav'])

OK, that's very cool! Our robot can see and classify images now.

Bring motors to life

Now it's time to get our Image classifier Robot working as a car, i.e. moving. We'll use the 'RPi.GPIO' library. This package provides a class to control the GPIO pins on a Raspberry Pi; basically, it tells the Pi to send current to any GPIO channel we want to drive. In this case, that's a motor and a sonar. We'll keep the sonar for the next post. First, we create a new file, 'motor.py', then import the necessary packages.

import RPi.GPIO as GPIO
from time import sleep
import sys
import signal
import random
import tkinter as tk

Tkinter is a library we use to run an event loop in our code. We use it here to stand by and receive commands from keypresses telling the robot which direction to move.

To use RPi.GPIO, we start by setting the mode of GPIO. I like to use BCM, which uses the specific GPIO numbers, but you can use the board pin numbers by setting GPIO.BOARD instead. Then we set up the GPIO/pin number as an output and set PWM on it. PWM stands for Pulse Width Modulation. You can set the frequency and duty cycle using GPIO.PWM; RasPi.tv has a very good post explaining it. The code will look like this:

GPIO.setmode(GPIO.BCM)                  # GPIO numbering
GPIO.setup(Your_GPIO_number, GPIO.OUT)  # set pin as output
p = GPIO.PWM(Your_GPIO_number, 100)     # set Frequency to 100
p.start(100)                            # start at duty cycle 100
sleep(1)                                # let it run for 1 second
p.stop()
GPIO.cleanup()                          # Free the GPIO pin after use

The logic for how the motor is controlled differs depending on the motor control board; you can look in its manual or on the manufacturer's website.
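To make the frequency/duty-cycle relationship concrete, here is a small pure-Python helper (this is my own illustration, not part of RPi.GPIO) that converts a PWM frequency and duty cycle into the high and low time per cycle:

```python
def pwm_times(freq_hz, duty_pct):
    """Return (high_ms, low_ms) for one PWM cycle."""
    period_ms = 1000.0 / freq_hz          # one full cycle in milliseconds
    high_ms = period_ms * duty_pct / 100.0  # time the pin is high
    return high_ms, period_ms - high_ms

# 100 Hz at 100% duty: the pin stays high for the whole 10 ms cycle,
# which is why p.start(100) above drives the motor at full power
print(pwm_times(100, 100))  # (10.0, 0.0)
# 100 Hz at 50% duty: 5 ms high, 5 ms low, roughly half power
print(pwm_times(100, 50))   # (5.0, 5.0)
```

Lowering the duty cycle passed to p.start() (or p.ChangeDutyCycle()) shrinks the high portion of each cycle and slows the motor down.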
Then we will write a function to perform the moving-direction task. This is an example for my car, which uses Cytron's motor HAT. You can look at an example of my code on GitHub.

def forward(tf):
    print ("Forward")
    GPIO.output(12, 100)
    GPIO.output(13, 100)
    p1.start(100)
    p2.start(100)
    sleep(tf)
    GPIO.output(12, GPIO.LOW)

Sentdex has made a wonderful step-by-step video on YouTube that is easy to follow.

Keyboard input to command the direction

Our Image classifier Robot needs a controller now; we start by declaring a keyboard-input function. I also import our classify() function, so we can tell the robot when to do the image classification task.

import classify as cs
.
.
def key_input(event):
    key_press = event.keysym.lower() # convert to lower-case letters
    sleep_time = 0.20
    print(key_press)
    if key_press == 'w':
        forward(sleep_time)
    elif key_press == 's':
        backward(sleep_time)
    elif key_press == 'a':
        left(sleep_time)
    elif key_press == 'd':
        right(sleep_time)
    elif key_press == 'q': # Just stop the car
        stop()
    elif key_press == 'z': # Stop the car and EXIT the program
        stop()
        GPIO.cleanup()
        sys.exit(0)
    elif key_press == 'space': # Doing image classification
        print('Analyze!')
        cs.classify()
    else:
        print('Wrong key press')

Lastly, we use Tkinter to run the program loop and connect key_input.

command = tk.Tk()
command.bind_all('<Key>', key_input)
command.mainloop()

Now you can test and play with your Image Classifier Robot. The next post is the last one (optional); we'll attach the sonar system!
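P.S. One small aside on key_input: the if/elif chain can also be written as a dispatch table, which keeps all the key bindings in one place. Here is a GPIO-free sketch of that idea (the handlers below just record calls instead of driving motors, so it can run anywhere):

```python
log = []

# stand-ins for the real motor functions
def forward(t):  log.append(("forward", t))
def backward(t): log.append(("backward", t))
def stop():      log.append(("stop", None))

# one place to see every key binding
BINDINGS = {
    "w": lambda: forward(0.20),
    "s": lambda: backward(0.20),
    "q": lambda: stop(),
}

def key_input(key_press):
    action = BINDINGS.get(key_press.lower())
    if action:
        action()
    else:
        print("Wrong key press")

for key in ["w", "s", "q", "x"]:  # 'x' is unbound, so it is rejected
    key_input(key)
print(log)
```

In the real robot you would keep event.keysym.lower() as the lookup key and put the actual motor functions in the table.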
https://www.royyak.com/build-image-classifier-robot-using-rasppi-deep-learning-part-2/
WSE 3.0 and Visual Studio 2008?

- Thursday, November 29, 2007 7:38 PM Emmanuel Huna

So I downloaded and installed Visual Studio 2008 RTM. I was able to upgrade all of my projects without problems, except for a client app that uses WSE 3.0. It refuses to build under Visual Studio 2008; there were errors related to the web reference objects. I updated the web references from VS 2008, and now the WSE 3.0 objects are gone.

I googled and searched the MSDN forums, but the only thread I found was this one: where someone asks about WSE 3.0 support under VS 2008. Unfortunately, there's no response.

What should developers that are using WSE 3.0 do if they upgrade to Visual Studio 2008?

1) Is there a way to have support for the WSE 3.0 configuration tools within the VS 2008 IDE? (right-click on the project and choose WSE 3.0 settings)
2) If not, is there any way I could re-read the WSDL and create my WSE objects through a command-line prompt once my project was upgraded to Visual Studio 2008? Can someone point me to documentation and examples?

It cracks me up that Microsoft is committed to backwards compatibility for MFC, but WSE never seems to work with the latest version of Visual Studio. I had the same problem when I upgraded to Visual Studio 2005 while using WSE 2.0; I had to wait and upgrade to WSE 3.0 to get the tools in the IDE.

Answers

- Tuesday, December 04, 2007 6:35 PM Sidd Shenoy - MSFT Moderator

Hi Emmanuel,

WSE 3.0 is not supported in VS 2008. This would require a Service Pack to be released for WSE, and currently the WSE team has no future Service Packs planned. The question then stands: what do developers do? I'll answer your questions above, and hopefully that might help a bit:

1. Currently no, there is no supported way of doing this. VS 2008 just released, and WSE 3.0 released a while back. Is there anything in particular that you use in the configuration tool?
Everything that is done through the configuration tool can be done manually by updating your web.config file.

2. Re-reading your WSDL used to work with "Add Web Reference" in VS 2005. However, under the covers, what is really happening is that wsdl.exe is being run against the service's WSDL, and the base class is changed from SoapHttpClientProtocol to WebServicesClientProtocol. So that's all you have to do:

a) Run wsdl.exe against the service's WSDL file
b) Change the base class that your proxy inherits from, from SoapHttpClientProtocol to WebServicesClientProtocol

WSE 3.0 does not have any future releases planned, and people should really be thinking about upgrading their stack to WCF. Migration from WSE to WCF isn't all that difficult, and there is literature to help with that. You can find out more about WCF at. If there are any specific questions regarding migration that you want to ask, please let me know.

Thanks,
Sidd [MSFT]

All Replies
In my case: in the new VS 2008 project, refresh and include all files in MYCLIENT > Web References > MYSERVICE 7) If you get an error on the default URL coming from “My Settings”, update it. a. In my case: Me.Url = My.Settings.MYCLIENT_MYSERVICE_MYASMX Hope this helps someone else, but if anyone from Microsoft reads this my question still stands: can we get WSE 3.0 support in VS 2008? - While this solution is definietly workable it is far from ideal. I will be putting a watch on this thread to see if anyone from Microsoft replies to you. Thank you for posting your workaround I will be utilizing it. - Tuesday, December 04, 2007 7:29 AM Andreas HammarCouldn't agree more, I guess that they want us to be using WCF - but you can't do all transitions at the same time! Our workaround is doing a postbuild-replace of the class/constructor in the proxy project (we have a separate project just for the proxy): With regex, in reference.cs: "class.*Service .*\{" to class ServiceWse : Microsoft.Web.Services3.WebServicesClientProtocol { and "public Service\(.*\{" to public ServiceWse() { This will yield no non-wse proxy, just a wse one, but we don't need the non-wse. Looking forward to integrated tools! - That is somewhat disheartening. I have tried to adopt WCF right off the bat on a new project, but after spending hours and hours knocking my head against the wall, I have decided to go back to the simple and easy to set up ASP.NET web services. It's nice that there is an easy conversion route from ASP.NET web services to WCF services, except that no matter what conversion path you decide on, if you include credentials in the requests, you need to include certificates in the workflow to secure the sensitive data... and if you use certificates, then you are doomed to fail. I have searched high and low through hundreds of blog postsings and forums, and it appears that either Microsoft doesn't want you to use certificates, or wants you to become brain dead before you use them. 
It seems all but impossible to create a test certificate that works for development. Since this was not feasible in a reasonable amount of time (I could have lived with weeks, but not even that was enough), I must try another path because the WCF path was going nowhere very fast. If I read your post correctly, it seems that one can still use WSE with VS 2008, as long as it is all done through the command line. Is this correct? If it isn't even usable that way then I'm going to have to decide on another technology stack because ASP.NET / WCF just doesn't seem very usable at the moment. Unfortunately I am forced to use vs 2005 for web part projects on SharePoint. I like the one click to deploy web part. Hopefully MS will give us some way in the near future. - Thursday, January 24, 2008 10:54 PM Bidware.com I figured out exactly how to use WSE 3.0 with VS2008. You must use the old .cs or .vb files with your proxy or web reference files (They are in the same directory as your .wsdl files.) So, once VS 2008 finishes converting your VS 2005 to VS 2008, it will overwrite your existing reference.cs file (in my case) with the wrong base class (SoapHttpClientProtocol instead of WebServicesClientProtocol.) So, you can either manually modify your reference.cs files to use WebServicesClientProtocol, or simply copy your old .cs files from your working VS2005 project directory. It should compile and run after that. Contact me at chris@bidware.com should you have any questions. - I've gotten it to work with VS2008 alone too. You must use the GUI tool that is installed with WSE 3.0 to configure the app.config or web.config of the project (you can find a good amount of documentation on that on the net). 
Then you must go into a service reference's auto generated code (remember to make a old web reference, not a new service reference), and change the main web service client class so that it derives from The Protocol class in the Microsoft.Web.Services3 namespace (I forget the exact name). Once you do this, you will get access to all of the inherited properties, and it works fine. - Thursday, January 24, 2008 11:27 PM Bidware.com Actually, you never need to use any GUI at all. It's just a matter of using the old references to WSE. It's that simple. - Sorry. I forgot to mention that my post was from the stand point of a person who isn't using VS 2005 at all (as in not even upgrading the original references). - Tuesday, February 12, 2008 12:25 PM AlienationZombie Thanx for the suggestion! It works great! It saved me the time of trying to figure this out. - Monday, February 25, 2008 2:08 PM Silvero van Henningen Thnx, Finally my client-app (VS2008) works with MTOM (WSE 3.0). I only had to replace (in Reference.cs) public partial class wsMTOM : System.Web.Services.Protocols.SoapHttpClientProtocol in publicpartial class wsMTOM : Microsoft.Web.Services3.WebServicesClientProtocol There is a very simple solution if you can change the Target Framework of your application to 3.0 or 3.5: You simply need to delete the "Web Reference" from your application and then re-add the web service as a "Service Reference". This should automatically configure your client to use the WSE settings. The only other change you will need to make to your code is the proxy reference which will need to instantiate as <webservice name>Client instead of <webservice name> If you must remain at Framework 2.0, then the above solutions must be considered. - Dear Microsoft, I have been contracted to develop a small web application that interfaces with a Java-based web service written by IBM. Of course I am using ASP.NET. The client would prefer .NET 3.0. 
My problem is that the web service I am consuming uses DIME attachments. Therefore I must use WSE2 which is not supported with VS2008. I understand DIME is outdated, obsolete, whatever. But why not keep support for old time's sake? Now I must revert to VS2005 for development. Very disappointing. -Redluv - Thursday, August 21, 2008 4:51 PM John SaundersMVP, ModeratorDid you consider using WCF? John Saunders | Use File->New Project to create Web Service Projects - We are currently running both version WSE 3.0 and WCF with VS 2005 due to diverse clients needs. Not all are ready with WCF. We are upgrading to VS 2008 and have the same problem. Is there way to customize that WSE Proxy creation ? Or any other ideas other than replacing the inherit class manually. - Can you please tell me how you are doing this as post-build action ? - Friday, October 03, 2008 8:21 PM Jason Young _iMeta_Check out for a solution to using WSE 3 in VS 2008 - Proposed As Answer byJason Young _iMeta_ Friday, October 03, 2008 8:22 PM - - One solution that doesn't seem to be mentioned so far is that if you have a separate project for your proxy classes (which we happen to do at my company) and you have both VS2005 and 2008 installed, then you can simply open this project in 2005 and update the web reference in there - no need to have separate projects, as VS2005 and 2008 can share project files. The project can then be built in 2008 and everything should work as before.Hope this helps someone.
http://social.msdn.microsoft.com/forums/en-US/asmxandxml/thread/84299cff-5af6-4cef-8b6e-a8251df3d496
On 10/16/05, Josiah Carlson <jcarlson at uci.edu> wrote:
>
> Calvin Spealman <ironfroggy at gmail.com> wrote:
> >
> > On 10/14/05, Josiah Carlson <jcarlson at uci.edu> wrote:
> > >
> > > Calvin Spealman <ironfroggy at gmail.com> wrote:
> > > >
> > > > .
> > >
> > > -1000 If you want a namespace, create one and pass it around. If the
> > > writer of a function or method wanted you monkeying around with a
> > > namespace, they would have given you one to work with.
> >
> > If they want you monkeying around with their namespace or not, you can
> > do so with various tricks introspecting the frame stack and other
> > internals. I was merely suggesting this as something more
> > standardized, perhaps across the various Python implementations. It
> > would also provide a single point of restriction when you want to
> > disable such things.
>
> What I'm saying is that whether or not you can modify the contents of
> stack frames via tricks, you shouldn't. Why? Because as I said, if the
> writer wanted you to be hacking around with a namespace, they should
> have passed you a shared namespace.
>
> From what I understand, there are very few (good) reasons why a user
> should muck with stack frames, among them because it is quite convenient
> to write custom traceback printers (like web CGI, etc.), and if one is
> tricky, limit the callers of a function/method to those "allowable".
> There may be other good reasons, but until you offer a use-case that is
> compelling for reasons why it should be easier to access and/or modify
> the contents of stack frames, I'm going to remain at -1000.

I think I was wording this badly. I meant to suggest this as a way to
define nested functions (or classes?) and probably access names from
various levels of scope. In this way, a nested function would be able to
say "bind the name 'a' in the namespace in which I am defined to this
object", thus offering a more fine-grained approach than the current
global keyword.
I know there has been talk of this issue before, but I don't know if it works with or against anything said for this previously.
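For illustration, the kind of fine-grained rebinding described above is close to what Python 3 later standardized as the `nonlocal` statement (PEP 3104), which lets a nested function rebind a name in the enclosing function's namespace:

```python
def make_counter():
    count = 0
    def increment():
        # rebind 'count' in the namespace in which 'increment' is defined,
        # rather than creating a new local name ('global' cannot do this,
        # since 'count' lives in the enclosing function, not at module level)
        nonlocal count
        count += 1
        return count
    return increment

counter = make_counter()
print(counter(), counter(), counter())  # 1 2 3
```

Without the `nonlocal` line, the `count += 1` would raise UnboundLocalError, because the assignment would make `count` a new local name inside increment().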
https://mail.python.org/pipermail/python-dev/2005-October/057361.html
Need help optimizing FIND_IN_SET() table structure question SQL errors help with left join query measuring query times Poorly performing update queries Multiple COUNTs Against A Single Table Need help in qriting mysql query printing table from mysql to lpd on linux box UPDATE record & upload image phpbb with mysql Mysql Query help Quick Question : Checking against multiple columns SQL archive news statement Nested query? select * from flat files.... mysql_connect is failing How bad is it for a script that uses lots of queries? Listing results by first creating category header, then displaying subcat results. num_rows issue How many queries script used? Help in refining the sql query multiple SUM queries - nested? Linking HTML to a mysql data source how to use a dictionary table twice as a ref for 2 diff columns in a join? innoDb -v- MyISAM question query from 3 table in one (3 in 1) ? PHP Upload MySQL server has gone away sum() in SQL automatic updates help please, returing data from two tables question about querying in two tables. Syntax Error???? MySQL Quit with PHP 5 Sql problem PHP MySQL Statistics Algorithm Error trying to grab a news post from phpBB MySQL server has gone away ERROR 1364 (HY000): Field 'DateIn' doesn't have a default value WAMP server settings changed Quick Question, how to make ON DUPLICATE KEY ---DO Nothing---? Calling a variable from a different php file Alter Table! execute an SQL doc sustaining tables relationships INSERT not working... SELECT WHERE any field is blank? i... uh... deleted the root user... Get Next and Previous Records Whats wrong with this query? InnoDB vs MYISAM for mysqli functions need help in update query Database Design Question Displaying picture from a mysql table how to create relationship between tables whwn unvoidable redundancy is there? query vs logic speed datestamp on records? Confused about tinyint,medint,bigint... 
query problems Group By and Order by Help displaying result by querying first 3 letters Inner Join help MYSQL Sum() Join Group By Changing data in a table Relationships Populate DB from a PHP script Ordering with GROUP BY Optimizing MySQL for a new server setup Can anyone tell me what I have done wrong in the short MySQL command? Creating a simple form that displays field output Can't get SQL right import an excel sheet in mysql Field consolidation [ was: Row Consolidation] help with complex query Several Queires on A Single Page Return a count of number of rows before desired row. Tricky Join instert multiple records MYSQL Sorting My query is throwing an error I am trying to enter the program mysql but I keep getting the error MySQL query I'm new to MySql......how do I? How to find duplicates in two columns and count for each row found table structure question Powered by vBulletin® Version 4.2.2 Copyright © 2014 vBulletin Solutions, Inc. All rights reserved.
http://www.codingforums.com/sitemap/f-7-p-23.html
CC-MAIN-2014-15
refinedweb
1,499
59.33
Closed Bug 515885 Opened 13 years ago Closed 13 years ago

"Assertion failure: !scope->owned(), at ../jsobj.cpp"

Categories (Core :: JavaScript Engine, defect, P2) Tracking () People (Reporter: gkw, Assigned: jorendorff) References Details (Keywords: assertion, regression, testcase, Whiteboard: fixed-in-tracemonkey) Attachments (1 file)

for (a in (function () { return (x for (x in [function(){}])) })()) ++b

asserts js debug shell without -j on TM branch at Assertion failure: !scope->owned(), at ../jsobj.cpp:2615

Flags: blocking1.9.2?

autoBisect shows this is probably related to bug 511728: The first bad revision is:

changeset: 32189:297db27579ca
user: Jason Orendorff
date: Wed Sep 09 15:53:37 2009 -0500
summary: Bug 511728 - Misc. cleanup from bug 503080. r=igor.

Since this is only on trunk and doesn't affect 1.9.2, this will not block 1.9.2.

Flags: blocking1.9.2? → blocking1.9.2-

(In reply to comment #2)
> Since this is only on trunk and doesn't affect 1.9.2, this will not block
> 1.9.2.

Weird - the fingered bug landed on 1.9.2, so is this really still not blocking?

that was true when Damon wrote it. why did you nominate it for 1.9.2 when you found it?

Flags: blocking1.9.2- → blocking1.9.2+

(In reply to comment #4)
> that was true when Damon wrote it. why did you nominate it for 1.9.2 when you
> found it?

I made a mistake. :( Apologies.

(In reply to comment #5)
> I made a mistake. :( Apologies.

no worries, just making sure.

Priority: -- → P2

jason gets a blocker again.

Assignee: general → jorendorff

The top of ComprehensionTail (way outside the context shown in the patch) calls PushLexicalScope regardless of which kind of comprehension we're parsing.

Attachment #408020 - Flags: review?(brendan)

Comment on attachment 408020 [details] [diff] [review] v1

Sorry, my bad from the upvar2 patch (right? hard to chase down the offending rev but I think I got it). /be

Attachment #408020 - Flags: review?(brendan) → review+

Flags: in-testsuite+
Whiteboard: fixed-in-tracemonkey
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED

These bugs landed after b4 was cut. Moving flag out.

A type of test for this bug has already been landed because it is already marked in-testsuite+ -> VERIFIED.

Status: RESOLVED → VERIFIED
https://bugzilla.mozilla.org/show_bug.cgi?id=515885
CC-MAIN-2022-27
refinedweb
402
68.97
Date: September 1998. $Id: RDB-RDF.html,v 1.25 2009/08/27 21:38:09 timbl Exp $ Status: . Editing status: Comments please.

A parenthetical discussion to the Web Architecture at 50,000 feet and the Semantic Web roadmap. Up to Design Issues

There are many other data models which RDF's Directed Labelled Graph (DLG) model compares closely with, and maps onto. See a summary in . One is the Relational Database (RDB) model: an opening vocabulary for doing operations with a small number of tables, some of which may have a large number of elements.

A fundamental aspect of a database table is that often the data in a table can be definitive. Neither RDF nor RDB models have simple ways of expressing this. For example, not only does a row in a table indicate that there is a red car whose Massachusetts plate is "123XYZ", but the table may also carry the unwritten semantics that if any car has a Massachusetts plate then it must be in the table. (If any RDF node has a "Massachusetts plate number" property then that node is a member of the table.) The scope of the uniqueness of a value is in fact a very interesting property.

The original RDB model defined by E.F. Codd included datatyping with inheritance, which he had intended would be implemented in the RDB products to a greater extent than it has been. For example, typically the house number of a person's home address may be typed as an integer, and their shoe size may also be typed as an integer. One can as a result join two tables through those fields, or list people whose shoe size equals their house number. Practical RDB systems leave it to the application builder to only make operations which make sense. Once a database is exported onto the Web, it becomes possible to do all kinds of strange combinations, so stronger typing becomes very useful: it becomes a set of inference rules.

In a pure RDB model, every table has a primary key: a column whose value can be used to uniquely identify every row.
Some products do not enforce this, leading to an ambiguity in the significance of duplicate rows. A curious feature is that the primary key can be changed without changing the identity of a row. (A person can change their name, for example.) SQL allows tables to be set up so that such changes can cascade through the local system to preserve referential integrity. This clearly won't work on the Web. One solution is to use a row ID -- which many systems do in fact use, although SQL doesn't expose it in a standard way. Another is for the application to constrain the primary key not to change. Another is to put up with links breaking.

RDB systems have datatypes at the atomic (unstructured) level, as RDF and XML will/do. Combination rules tend in RDBs to be loosely enforced, in that a query can join tables by any columns which match by datatype -- without any check on the semantics. You could for example create a list of houses that have the same number of rooms as an employee's shoe size, for every employee, even though the sense of that would be questionable.

The new SQL99 standard is going to include new object-oriented features, such as inherited typing and structured contents of cells - arrays and structs. This extends the RDB model with things from the OO world. I don't deal with that here, in that the RDF model works as a lowest common denominator, able to express either and both.

A difference between XML/RDF schemas (and SGML) on the one hand and database schemas on the other is the expectation that there will be a relatively small number of XML/RDF schemas. Many web sites will export documents whose structure is defined by the same schema, and this is in fact what provides the interoperability. A database schema is, as far as I know, created independently for each database. Even if a million companies clone the same form of employee database, there will be a million schemas, one for each database.
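That datatype-only join is easy to demonstrate with a live engine. The sketch below uses SQLite and hypothetical tables and data (invented for this example, not taken from the text); it joins employees to houses purely because shoe_size and rooms are both integers, exactly the kind of semantically questionable combination described above:

```python
import sqlite3

# Two semantically unrelated tables that happen to share an integer column.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (name TEXT, shoe_size INTEGER);
    CREATE TABLE house (address TEXT, rooms INTEGER);
    INSERT INTO employee VALUES ('Joe', 9), ('Ann', 7);
    INSERT INTO house VALUES ('1 Elm St', 9), ('2 Oak St', 5);
""")

# Nothing stops a join on shoe_size = rooms: the columns match by
# datatype even though the comparison makes no semantic sense.
rows = conn.execute("""
    SELECT employee.name, house.address
    FROM employee JOIN house ON employee.shoe_size = house.rooms
""").fetchall()
print(rows)  # [('Joe', '1 Elm St')]
```

Stronger typing of the kind the text proposes would flag this join as meaningless instead of silently accepting it.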
It may be that RDF will fill a simple role in simply expressing the equivalence of the terms in each database schema.

In order to be able to access a table, and make extra statements about it which will enable its use in more and more ways, the essential objects of the table must be exported as first class objects on the Web. When mapping any system onto the Web, the mapping into URI space is critical. Here we are doing this common operation generically for all relational databases. It would obviously be useful for this to be done in a consistent way between multiple vendors - an area for possible standardization. Here is a random example I may have gotten wrong, based on what I understand of the naming within databases. The database itself is defined within a schema which is listed in a catalog. 2002 version, see real code implemented by Dan Connolly:

@@@ How to use typing to indicate that the URI in the table is a (relative?) URI to another object, not a string?

@@@ This works fine when implemented live on a database. However, it is a little tricky to emulate in a typical file-based web server because of the use of "personnel" in this case both as directory and as

One of the things which makes life easier is to make the mapping so that the relative URI syntax can be used to advantage. For example, here, everything within the database (the scope of an SQL statement) can be written as a short URI. There is a question as to how much of the SQL query syntax should be turned into identifier. For example, is a query on a primary key really an identifier? Is the extraction of a single cell really an identifier? It would be useful to be able to treat them as such. However, it would be wiser to use the "?" convention to indicate a generalized SQL idempotent query. (A URL should of course never be used to refer to the results of a table-changing operation such as UPDATE or DELETE.
In this case, if HTTP were used, an SQL query should IMHO be POSTed to the database URI. Of course, you can use your favorite networked database access protocol.)

In the above, the column name of the table could be referred to using the table as a namespace, a row for example being

<foo xmlns:t="">
  <t:email>joe@example.com</t:email>
  <t:age>45</t:age>
</foo>

and one row of the result of joining this table (of people) and another table (about people) by their primary keys would use namespaces from both tables:

<foo xmlns:t="" xmlns:u="">
  <t:email>joe@example.com</t:email>
  <t:age>45</t:age>
  <u:music>blues</u:music>
</foo>

This has been elaborated with the help of an RDB tutorial and discussion from Andrew Eisenberg/Sybase.

See also: Why RDF is more than XML

Up to Design Issues; back to Architecture from 50,000ft

timbl
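To make the row-to-Web mapping concrete, here is a small illustrative sketch that flattens one relational row into RDF-style subject/predicate/object triples. The URI scheme, function, and names are invented for this example (not taken from the note or from any standard):

```python
# Map one relational row to RDF-style triples. The URI layout
# (database/table/primary-key paths) is an illustrative assumption.
def row_to_triples(db_uri, table, primary_key, row):
    subject = f"{db_uri}/{table}/{row[primary_key]}"
    return [
        (subject, f"{db_uri}/{table}#{column}", value)
        for column, value in row.items()
        if column != primary_key
    ]

triples = row_to_triples(
    "http://example.org/personnel", "people", "email",
    {"email": "joe@example.com", "age": 45},
)
print(triples)
# [('http://example.org/personnel/people/joe@example.com',
#   'http://example.org/personnel/people#age', 45)]
```

The primary key becomes part of the subject URI, which is exactly why a primary key that changes over time (the renamed person above) would break links on the Web.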
http://www.w3.org/DesignIssues/RDB-RDF
CC-MAIN-2017-09
refinedweb
1,204
60.75
Our Java integration tests, like all integration tests I've ever run into, look simple enough when looking at the code of the test; however, if you want to understand how the simple test exercises several collaborations you'll need to dig deep into the bowels of the integration test helpers. Integration test helpers are usually a very fragile combination of classes that stub behavior, classes with preloaded fake datasets, and classes that provide event triggering hooks. In my experience you also generally need the High-Level Test Whisperer on hand to answer any questions, and you generally don't want to make any changes without asking for the HLTW's assistance. Today's experience went exactly as I expect all of my experiences with the functional tests to go: I added a bit of behavior, and tested everything that I thought was worth testing. With all the new tests passing, I ran the entire suite - 85 errors. The nice thing about errors in functional tests: they generally cause all of the functional tests to break in an area of code that you know you weren't working in. I wasn't that surprised that so many tests had broken, I was adding a bit of code into a core area of our application. However, I wasn't really interested in testing my new behavior via the functional tests, so I quickly looked for the easiest thing I could do to make the functional tests unaware of my change. The solution was simple enough, conceptually, the new code only ran if specific values were set within the atom in my new namespace; therefore, all I needed to do was clear that atom before the functional tests were executed. Calling Clojure from Java is easy, so I set out to grab the atom from the Clojure and swap! + dissoc the values I cared about. Getting the atom in Java was simple enough, and clojure.lang.Atom has a swap() method, but it takes an IFn. 
I spent a few minutes looking at passing in an IFn that dissoc'd correctly for me; however, when nothing was painfully obvious I took a step back and considered an easier solution: eval. As I previously mentioned, this is not only test code, but it's test code that I already expect to take a bit longer to run**. Given that context, eval seemed like an easy choice for solving my issue. The following code isn't pretty, but it got done exactly what I was looking for.

RT.var("clojure.core", "eval").invoke(
    RT.var("clojure.core", "read-string").invoke("(swap! my-ns/my-atom dissoc :a-key)"));

I wasn't done yet, as I still needed the HLTW's help on getting everything to play nice together; however, that little Clojure snippet got me 90% of what I needed to get my behavior out of the way of the existing functional tests. I wouldn't recommend doing something like that in prod, but for what I needed it worked perfectly.

** let's not get carried away, the entire suite still runs in 20 secs. That's not crazy fast, but I'm satisfied for now.
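For readers who don't speak Clojure, the same move (handing a string of source text to eval in order to reset shared test state) can be sketched in Python. The names here are hypothetical, and this is only an analogy to the swap!/dissoc call above, not the author's actual code:

```python
# Shared mutable state, standing in for the Clojure atom.
my_state = {"a_key": 1, "other": 2}

# Build the mutation as source text and evaluate it, much as the Java
# snippet hands "(swap! my-ns/my-atom dissoc :a-key)" to Clojure's eval.
snippet = "my_state.pop('a_key', None)"
eval(snippet)

print(my_state)  # {'other': 2}
```

As in the article, this trades compile-time checking for convenience, which is tolerable in test setup code and inadvisable in production.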
https://dzone.com/articles/eval-clojure-snippet-java
CC-MAIN-2016-30
refinedweb
544
55.68
safecache is a thread-safe and mutation-safe LRU cache for Python

Project description

A thread-safe and mutation-safe LRU cache for Python.

Features

- All cached entries are mutation-safe.
- All cached entries are thread-safe.
- Customizable cache-miss behavior.
- Zero third-party dependencies.

Usage

safecache preserves functools.lru_cache's API, so no extra learning is needed. To migrate from lru_cache to safecache, you can simply rename the decorators!

Before:

from functools import lru_cache

@lru_cache(maxsize=32, typed=True)
def function(*a, **kw):
    ...

After:

from safecache import safecache

@safecache(maxsize=32, typed=True)
def function(*a, **kw):
    ...

And now you get all the benefits of lru_cache with the enhancements of safecache. For more advanced usage of safecache, please refer to the documentation.

License

safecache is under the MIT License.
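The mutation-safety claim is easiest to appreciate next to the stdlib behavior it guards against: functools.lru_cache hands every caller the very same cached object, so an in-place mutation by one caller corrupts what every later caller sees. This sketch demonstrates that baseline hazard (it uses only the stdlib, not safecache itself):

```python
from functools import lru_cache

@lru_cache(maxsize=32)
def fetch_items():
    return ["a", "b"]

first = fetch_items()
first.append("oops")  # caller mutates the returned list in place

# The next call returns the very same (now corrupted) list object,
# because lru_cache caches the object itself, not a copy of it.
print(fetch_items())           # ['a', 'b', 'oops']
print(fetch_items() is first)  # True
```

A mutation-safe cache avoids this by isolating cached entries from caller mutations, so repeated calls keep returning the original value.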
https://pypi.org/project/safecache/
CC-MAIN-2020-05
refinedweb
157
53.68
Re: Questions for Desginers\Archiects - From: "NH" <NH@xxxxxxxxxxxxxxxxxxxxxxxxx> - Date: Thu, 9 Feb 2006 09:25:30 -0800

Thanks for the great insight Kevin, much appreciated. Yes, I have jumped into specifics too quickly on one previous project, and now I find myself thinking "if only I designed it in this way". While the project was a very good success, certain parts of the system were designed without thinking about the bigger picture and possible issues about expanding the system, etc. I've learned from that. I am familiar with the design you mention; I suppose I would just love to see a small application that embodies best practice. I have looked at the ASP.Net Time Tracker Starter Kit - it's probably not a good sign that I found it complicated. I am very strong on database stuff, so I tend to design with the DB as the main thing; now I realise I need to get out of that mindset and start putting the business logic where it should be. Thanks for the links... N

"Kevin Spencer" wrote:

Hi NH,

I read as much as I can about designing systems (and I can't really find any decent books); I don't have anyone to discuss the best approach or what would be best practice.

Your attitude is commendable. More time spent up front in learning how to properly design applications, and more time spent in the design process, ultimately mean much time saved over the long haul, both in current development and in future maintenance and upgrade. Here's an excellent (and free) resource for all types of subject matter in this domain:

I notice that you seem to be thinking a bit too specifically when it comes to the design of your current project, and about design in general. For example, take the following question: How can I handle this business logic in asp.net, create some classes and use datasets so the application itself handles the logic? And then use SP's for purely retrieving data?
My personal approach is to not think too specifically about a problem, but to think about the principles involved. There are many good approaches to design and architecture, and quite a few patterns out there, but the good ones are all based upon the same principles, and grow out of them. In the above question, you ask about a specific technology (ASP.Net), and 2 specific tools (DataSets and Stored Procedures). This is a fairly common mistake, and I've been guilty of it myself, which is why I'm sharing what I've learned with you. By being so specific, you limit yourself to thinking only about a particular environment (such as ASP.Net) and the characteristics of the specific tools. You ask whether Stored Procedures should be used for only a specific purpose. A better way to state the question might be: How should business logic be handled in an application, and what sort of business classes might help to connect the data in the database with the user interface in such a way as to minimize the amount of change needed when a change is necessary in either the interface or the data? To what extent should any database member modify data, handle data validation and protect the integrity of the data in the database?

Now, that's just an example, but notice that I avoid specifics. By doing so, I force myself to think about the whys and wherefores of the parts of the application, and free myself to think of more alternative solutions. ASP.Net and, for example, Windows Forms have quite a lot in common, and quite a lot of principle is shared by both. One may apply the same principles to both environments.

Every application centers around data of some sort. In every application, data is stored somewhere, whether externally or internally. Every application manipulates that data and/or changes it. Every application has an interface, whether it is a user interface or a programming interface.
Every application has business logic which enforces a set of rules concerning what should be done with the data, and how it should be done. So, in effect, every application has data, process, and interface. Every application has a set of requirements, a set of needs that it is designed to fulfill. Beyond that, things become increasingly specific. But I find it useful to start at the most common level and work my way up (down?) to the specifics.

Now, for maintenance and extensibility, some separation is in order. By loosely coupling these components, we eliminate dependencies, making it easier to isolate and change any individual part without having to rebuild the whole app at one time. The wheels of a vehicle are not welded to the axle; they are bolted. One can swap out a wheel without having to swap out the axle. By limiting wheels to a certain range of sizes, we gain the ability to use the same wheel on multiple vehicles. So, we generally start by separating out the 3 basic components of an app - data, process, and interface - and then begin to subdivide the components of each one until we have a nice, modular, extensible architecture.

Once you have isolated each component, you can begin to think about the requirements, and what areas of responsibility each component needs to fulfill. At that point, you begin to get a little more specific. Remember that the requirements you are given may be more specific than is useful as well. Try to think of the requirements in a more abstract way. For example, you mentioned that your next project is a "forecasting system." You may have been given a set of requirements such as "We need to be able to gather weather information from the National Weather Service for the greater metropolitan area, and provide forecasts on our web site." But you need to break that requirement down. It may (and probably will) change at some point in the future.
You might rephrase the requirement as "We need to be able to gather weather-related information from the best weather sources available, and provide that information in a variety of user and programming interfaces, for a variety of purposes." Now, when you design the solution, it will be extensible, and if you design your components well enough, isolating functionality into various categories that translate into namespaces, classes, etc., you will be able to extend, re-use, and maintain a variety of apps with different combinations of those components.

If you can train yourself to think in this sort of way, and practice it, you will become a more powerful programmer over time. It's not just a matter of reading and studying, although those are indispensable. It is also a matter of practice. Best of luck to you!

-- HTH, Kevin Spencer Microsoft MVP .Net Developer We got a sick zebra a hat, you ultimate tuna.

"NH" <NH@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote in message news:172E5541-F219-4273-8530-E625DE8A1408@xxxxxxxxxxxxxxxx

Thanks for the replies. I think one of the reasons I have all these questions is that I am building these systems on my own; I'm a one-man team, so even though I read as much as I can about designing systems (and I can't really find any decent books) I don't have anyone to discuss the best approach or what would be best practice. On the DAL I do have classes with static members such as DAL.GetUsers(); I just call these things from the button click handlers. I am beginning to realise Stored Procedures are limited; I have found it hard to maintain many SP's when business logic changes, and it's difficult to pass result sets between SP's, etc. How can I handle this business logic in asp.net, create some classes and use datasets so the application itself handles the logic? And then use SP's for purely retrieving data?
I am just about to begin another application project (a forecasting system) and I again will be the sole developer, database developer, report builder, analyst, etc. So I am considering a new approach to building, and I don't mind if it takes me twice as long to build; I would like to focus on best practice.

"Kevin Spencer" wrote:

Hi NH,

Good questions!

I create systems for a medium-sized business (200+ users) using SQL Server, ASP.Net 1.1 and 2.0. Generally I am very good at database design and development so I feel comfortable putting all of my Business Logic into stored procedures. Is this wrong?

It depends on what you mean by "Business Logic." Business logic is different from process that fetches data. It is process that enforces the business rules, and manipulates the data in an application. This includes logic that enforces how data interacts with other data, what it is used to accomplish, and so on. Defined as such, no, it should not be in Stored Procedures. A database is a storage for data. Business rules may change. One should not have to change the data layer to change the business rules of an application.

Now although I have a number of classes to handle data access I don't use objects to handle things in my code, e.g. I don't use a Customer object to create a new instance of a customer or anything like that; my code just responds to clicks on buttons etc. and executes SP's in response. So I do feel my data access layer is well abstracted away. But is this good practice?

This depends upon the requirements of the application. If, for example, your application is simply used as an interface to a database, no, it is not necessary to create a Customer class to work with Customer data. This is because the Customer data is treated as data, not as a Customer. SQL Server Query Analyzer is an example of such an application. It works with the database, providing a user-friendly interface for working with the data and the database itself.
Classes should reflect what they represent, in the context of the requirements of the application. This is a principle of abstraction, which makes classes easier to work with because they are intuitive to the developer. The classes themselves, their structure, provide clues to the developer about what they represent, and how they should behave and interact with one another.

Should I be using OO techniques in my app? Is that the recommended way?

But of course! ASP.Net, and the .Net platform, are fully object-oriented. There are good reasons for this. OOP was developed to make the complexity of programming simpler. Understand it, and use it correctly, and it will save you beaucoups time and trouble, and that means mo' money!

-- HTH, Kevin Spencer Microsoft MVP .Net Developer We got a sick zebra a hat, you ultimate tuna.
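The separation Kevin argues for can be sketched in a few lines. This toy example uses hypothetical names, Python rather than ASP.Net, and a dict standing in for the database; the point is only that the data layer moves rows while the business rule lives elsewhere, so a rule change never touches storage code:

```python
# Data layer: only moves rows in and out; knows nothing about rules.
class CustomerData:
    def __init__(self):
        self._rows = {1: {"name": "Ann", "orders": 12}}

    def get(self, customer_id):
        # Return a copy so callers cannot corrupt the stored row.
        return dict(self._rows[customer_id])

# Business layer: the rule lives here and can change independently
# of how (or where) the customer rows are stored.
class DiscountPolicy:
    def discount_for(self, customer):
        return 0.10 if customer["orders"] >= 10 else 0.0

data = CustomerData()
policy = DiscountPolicy()
customer = data.get(1)
print(policy.discount_for(customer))  # 0.1
```

Swapping the dict for a real database, or the ten-order rule for a new policy, each means editing exactly one of the two classes.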
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.aspnet/2006-02/msg01708.html
crawl-002
refinedweb
1,897
63.19
Wednesday, December 28, 2016¶ I continued to work for #1285, but did not yet commit. Details later. About packages¶ As I work through the tutorials, I like to see with my own eyes the module and class/attribute that I am inheriting from calling or assigning. However, I went to the right place, but the files seem to be missing. For example, in Django tutorial Part 1:from django.conf.urls import include urls So I go to Github django code and I find: What I find is that urls is a directory with only files : __init__.py, i18n.pyand static.py. There is no urls.py file which might have url() or include() methods. Same with models.Models. I see some code line from django.db import modelsand on django Github site I navigate until django/django/db/models. I see that modelsis a directory, not a file with a class Model. So, what am I missing here? What you are missing is related to packages. Admittedly the corresponding doc section is not very clear about your problem. I make a summary: - A package is a module which can contain submodules - A package consists of a directory containing a file __init__.pyand optinally other .py files (which are then submodules of that package). - Everything defined in the __init__.pyfile is considered part of the package. This last point is important in your case. When using from package import item, the item can be either a submodule (or subpackage) of the package, or some other name defined in the package. Rule of thumb : when you want to see the code which defines a.b.c, then it can be either in a file a/b/c.py or in a file a/b/c/__init__.py.
http://luc.lino-framework.org/blog/2016/1228.html
Developers who rely on NPM, the JavaScript package registry created by the Node.js ecosystem, experienced a shock earlier this week when a small package removed from NPM unexpectedly caused many others to stop working. The whole episode underscored the fact that interdependencies between NPM modules remain an unsolved problem — and that legal pressure on software developers can have repercussions far beyond the obvious.

How the chain broke

Developer Azer Koçulu, with dozens of modules registered in his name on NPM, stated he had been advised to remove his module named "kik" after receiving a warning letter from a lawyer at the company that makes the Kik mobile messenger product. In disgust at the way the owners of NPM appeared to be on Kik's side, and no longer wanting to share his work there, Koçulu removed — "unpublished" — all of his modules from NPM. "[I] apologize … if your stuff just got broken due to this," he wrote. Koçulu suggested that those who relied on dependencies with one of his modules point instead to a version now hosted on GitHub. Unfortunately, many people weren't able to take that advice immediately. One of the missing modules, left-pad, with a mere 17 lines of code, was required by a slew of major JavaScript projects, such as Babel. With left-pad missing, those projects no longer installed from NPM. The left-pad module on NPM was eventually "un-unpublished" and assigned to a new owner (developer Cameron Westlake). Dependent projects once again became installable. But the damage had been done, and for many NPM users the episode served as a reminder that NPM has fragilities that need addressing.

The damage(s) done

Two big issues have reared their heads in the wake of these events. First, copyright and trademark challenges in the software world can do immediate and widespread damage. Few provisions exist for dealing with a package that suddenly goes missing from a public software repository.
It's typically left to whoever installs the software to deal with extraordinary circumstances — e.g., when a repository is taken offline by a spurious DMCA request. This leads directly into the second issue: package handling on NPM is fraught with many long-standing limitations. Developer Resi Respati noted several limitations in his analysis of the left-pad case, a chief one being the way the NPM namespace is global — all packages share the same namespace and are registered on a first-come, first-served basis. (GitHub, by contrast, employs a username/project namespacing system.) Unpublishing a package in NPM frees up its name for someone else to use, meaning there's no guard against another package of the same name being sneaked in that does something untoward. A discussion is currently underway to add signing and certification to Node.js package handling, but it has yet to produce a working solution.

Picking up the pieces

At least one project exists as an alternate way to perform package management for Node. The ied project proposes several changes intended to solve some of the issues described above. Packages are identified by their SHA-1 checksums, and not merely by a package name, which guarantees that packages are unique and can't be confused with (or arbitrarily substituted for) each other. Semantic versioning is also supported, so that a specific version of a package can be fetched. Unfortunately, it isn't likely these improvements will find their way to a larger audience — not so long as most Node.js and JavaScript developers continue to depend on NPM as their default package manager.

The design of the early Internet assumed that trust exists between all parties, an assumption that was fine for a closed-ended, academic environment. But as the Internet went public, that assumption has turned into a time bomb, as criminal attackers learned to leverage obsolete protocols or exploit limitations in existing ones.
In the same way, many of the unquestioned assumptions about how NPM works — and, more generally, how public software repositories work — may have their biggest tests ahead.
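The content-addressing idea behind ied can be illustrated in a few lines. This is a sketch, not ied's actual code; hashlib stands in for its package-fetching machinery:

```python
import hashlib

def package_id(tarball_bytes):
    # Identity is the SHA-1 of the package contents, not its registry name,
    # so a name freed up by "unpublishing" cannot be silently repointed at
    # different code without the identifier changing too.
    return hashlib.sha1(tarball_bytes).hexdigest()

original = package_id(b"module.exports = leftPad;")
impostor = package_id(b"module.exports = stealTokens;")

print(original == impostor)                                  # False
print(original == package_id(b"module.exports = leftPad;"))  # True
```

Under a name-based scheme both packages could answer to "left-pad"; under content addressing they can never be confused.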
http://www.shellsec.com/news/5289.html
Overview

line_profiler and kernprof

Contents

Installation

Source releases and any binaries can be downloaded from the PyPI link. The current release of the kernprof.py script may be downloaded separately here:

To check out the development sources, you can use Mercurial:

$ hg clone

You may also download source tarballs of any snapshot from that URL. Source releases will require a C compiler in order to build line_profiler. In addition, Mercurial checkouts will also require Cython >= 0.10. Source releases on PyPI should contain the pregenerated C sources, so Cython should not be required in that case. kernprof.py is a single-file pure Python script and does not require a compiler. If you wish to use it to run cProfile and not line-by-line profiling, you may copy it to a directory on your PATH manually and avoid trying to build any C extensions.

In order to build and install line_profiler, you simply use the standard build and install steps used by most Python packages:

$ python setup.py install

line_profiler

The current profiling tools supported in Python 2.5 and later only time function calls. This is a good first step for locating hotspots in one's program and is frequently all one needs to do to optimize the program. However, sometimes the cause of the hotspot is actually a single line in the function, and that line may not be obvious from just reading the source code. These cases are particularly frequent in scientific computing. Functions tend to be larger (sometimes because of legitimate algorithmic complexity, sometimes because the programmer is still trying to write FORTRAN code), and a single statement without function calls can trigger lots of computation when using libraries like numpy. cProfile only times explicit function calls, not special methods called because of syntax.
Consequently, a relatively slow numpy operation on large arrays like this,

a[large_index_array] = some_other_large_array

is a hotspot that never gets broken out by cProfile because there is no explicit function call in that statement. LineProfiler can be given functions to profile, and it will time the execution of each individual line inside those functions. In a typical workflow, one only cares about line timings of a few functions because wading through the results of timing every single line of code would be overwhelming. However, LineProfiler does need to be explicitly told what functions to profile. The easiest way to get started is to use the kernprof.py script. If you use "kernprof.py [-l/--line-by-line] script_to_profile.py", an instance of LineProfiler will be created and inserted into the __builtins__ namespace with the name "profile". It has been written to be used as a decorator, so in your script, you can decorate any function you want to profile with @profile:

@profile
def slow_function(a, b, c):
    ...

The default behavior of kernprof is to put the results into a binary file script_to_profile.py.lprof. You can tell kernprof to immediately view the formatted results at the terminal with the [-v/--view] option.
Otherwise, you can view the results later like so:

$ python -m line_profiler script_to_profile.py.lprof

For example, here are the results of profiling a single function from a decorated version of the pystone.py benchmark (the first two lines are output from pystone.py, not kernprof):

Pystone(1.1) time for 50000 passes = 2.48
This machine benchmarks at 20161.3 pystones/second
Wrote profile results to pystone.py.lprof
Timer unit: 1e-06 s

File: pystone.py
Function: Proc2 at line 149
Total time: 0.606656 s

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
   149                                           @profile
   150                                           def Proc2(IntParIO):
   151     50000        82003      1.6     13.5      IntLoc = IntParIO + 10
   152     50000        63162      1.3     10.4      while 1:
   153     50000        69065      1.4     11.4          if Char1Glob == 'A':
   154     50000        66354      1.3     10.9              IntLoc = IntLoc - 1
   155     50000        67263      1.3     11.1              IntParIO = IntLoc - IntGlob
   156     50000        65494      1.3     10.8              EnumLoc = Ident1
   157     50000        68001      1.4     11.2          if EnumLoc == Ident1:
   158     50000        63739      1.3     10.5              break
   159     50000        61575      1.2     10.1      return IntParIO

The source code of the function is printed with the timing information for each line. There are six columns of information.

- Line #: The line number in the file.
- Hits: The number of times that line was executed.
- Time: The total amount of time spent executing the line in the timer's units. In the header information before the tables, you will see a line "Timer unit:" giving the conversion factor to seconds. It may be different on different systems.
- Per Hit: The average amount of time spent executing the line once in the timer's units.
- % Time: The percentage of time spent on that line relative to the total amount of recorded time spent in the function.
- Line Contents: The actual source code. Note that this is always read from disk when the formatted results are viewed, not when the code was executed.
If you have edited the file in the meantime, the lines will not match up, and the formatter may not even be able to locate the function for display.

If you are using IPython, there is an implementation of an %lprun magic command which will let you specify functions to profile and a statement to execute. It will also add its LineProfiler instance into the __builtins__, but typically, you would not use it like that. For IPython versions before 0.11, you can register the magic in your IPython configuration with:

import line_profiler
ip.expose_magic('lprun', line_profiler.magic_lprun)

For IPython 0.11+, you can install it by editing the IPython configuration file ~/.ipython/profile_default/ipython_config.py to add the 'line_profiler' item to the extensions list:

c.TerminalIPythonApp.extensions = [
    'line_profiler',
]

To get usage help for %lprun, use the standard IPython help mechanism:

In [1]: %lprun?

These two methods are expected to be the most frequent user-level ways of using LineProfiler and will usually be the easiest. However, if you are building other tools with LineProfiler, you will need to use the API. There are two ways to inform LineProfiler of functions to profile: you can pass them as arguments to the constructor or use the add_function(f) method after instantiation.

profile = LineProfiler(f, g)
profile.add_function(h)

LineProfiler has the same run(), runctx(), and runcall() methods as cProfile.Profile, as well as enable() and disable(). It should be noted, though, that enable() and disable() are not entirely safe when nested. Nesting is common when using LineProfiler as a decorator. In order to support nesting, use enable_by_count() and disable_by_count(). These functions will increment and decrement a counter and only actually enable or disable the profiler when the count transitions from or to 0. After profiling, the dump_stats(filename) method will pickle the results out to the given file. print_stats([stream]) will print the formatted results to sys.stdout or whatever stream you specify.
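The counter-based scheme behind enable_by_count() and disable_by_count() can be sketched in a few lines. This is a hypothetical stand-in, not the real LineProfiler:

```python
class CountedProfiler:
    # Only the 0 -> 1 and 1 -> 0 count transitions actually toggle the
    # profiler, which is what makes nested use (e.g. via decorators) safe.
    def __init__(self):
        self.count = 0
        self.enabled = False

    def enable_by_count(self):
        if self.count == 0:
            self.enabled = True
        self.count += 1

    def disable_by_count(self):
        if self.count > 0:
            self.count -= 1
            if self.count == 0:
                self.enabled = False

p = CountedProfiler()
p.enable_by_count()   # outer decorated call
p.enable_by_count()   # nested decorated call
p.disable_by_count()  # inner call returns; still profiling
print(p.enabled)      # True
p.disable_by_count()  # outer call returns
print(p.enabled)      # False
```

With plain enable()/disable(), the inner call's disable() would have switched profiling off while the outer call was still running.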
get_stats() will return a LineStats object, which just holds two attributes: a dictionary containing the results and the timer unit.

kernprof

kernprof also works with cProfile, its third-party incarnation lsprof, or the pure-Python profile module, depending on what is available. It has a few main features:

- Encapsulation of profiling concerns. You do not have to modify your script in order to initiate profiling and save the results. Unless you want to use the advanced __builtins__ features, of course.
- Robust script execution. Many scripts require things like __name__, __file__, and sys.path to be set relative to it. A naive approach at encapsulation would just use execfile(), but many scripts which rely on that information will fail. kernprof will set those variables correctly before executing the script.
- Easy executable location. If you are profiling an application installed on your PATH, you can just give the name of the executable. If kernprof does not find the given script in the current directory, it will search your PATH for it.
- Inserting the profiler into __builtins__. Sometimes, you just want to profile a small part of your code. With the [-b/--builtin] argument, the Profiler will be instantiated and inserted into your __builtins__ with the name "profile". Like LineProfiler, it may be used as a decorator, or enabled/disabled with enable_by_count() and disable_by_count(), or even as a context manager with the "with profile:" statement in Python 2.5 and 2.6.
- Pre-profiling setup. With the [-s/--setup] option, you can provide a script which will be executed without profiling before executing the main script. This is typically useful for cases where imports of large libraries like wxPython or VTK are interfering with your results. If you can modify your source code, the __builtins__ approach may be easier.

The results of profiling script_to_profile.py will be written to script_to_profile.py.prof by default.
It will be a typical marshalled file that can be read with pstats.Stats(). Such files may be interactively viewed with the command:

$ python -m pstats script_to_profile.py.prof

They may also be viewed with graphical tools like kcachegrind (through the converter program pyprof2calltree) or RunSnakeRun.

Frequently Asked Questions

Why the name "kernprof"? I didn't manage to come up with a meaningful name, so I named it after myself.

Why not use hotshot instead of line_profiler? hotshot can do line-by-line timings, too. However, it is deprecated and may disappear from the standard library. Also, it can take a long time to process the results, while I want quick turnaround in my workflows. hotshot pays this processing time in order to make itself minimally intrusive to the code it is profiling. Code that does network operations, for example, may even go down different code paths if profiling slows down execution too much. For my use cases, and I think those of many other people, line-by-line profiling is not affected much by this concern.

Why not allow using hotshot from kernprof.py? I don't use hotshot, myself. I will accept contributions in this vein, though.

The line-by-line timings don't add up when one profiled function calls another. What's up with that? Let's say you have function F() calling function G(), and you are using LineProfiler on both. The total time reported for G() is less than the time reported on the line in F() that calls G(). The reason is that I'm being reasonably clever (and possibly too clever) in recording the times. Basically, I try to prevent recording the time spent inside LineProfiler doing all of the bookkeeping for each line. Each time Python's tracing facility issues a line event (which happens just before a line actually gets executed), LineProfiler will find two timestamps, one at the beginning before it does anything (t_begin) and one as close to the end as possible (t_end).
Almost all of the overhead of LineProfiler's data structures happens in between these two times. When a line event comes in, LineProfiler finds the function it belongs to. If it's the first line in the function, we record the line number and t_end associated with the function. The next time we see a line event belonging to that function, we take t_begin of the new event and subtract the old t_end from it to find the amount of time spent in the old line. Then we record the new t_end as the active line for this function. This way, we are removing most of LineProfiler's overhead from the results. Well almost. When one profiled function F calls another profiled function G, the line in F that calls G basically records the total time spent executing the line, which includes the time spent inside the profiler while inside G. The first time this question was asked, the questioner had the G() function call as part of a larger expression, and he wanted to try to estimate how much time was being spent in the function as opposed to the rest of the expression. My response was that, even if I could remove the effect, it might still be misleading. G() might be called elsewhere, not just from the relevant line in F(). The workaround would be to modify the code to split it up into two lines, one which just assigns the result of G() to a temporary variable and the other with the rest of the expression. I am open to suggestions on how to make this more robust. Or simple admonitions against trying to be clever. Why do my list comprehensions have so many hits when I use the LineProfiler? LineProfiler records the line with the list comprehension once for each iteration of the list comprehension. Why is kernprof distributed with line_profiler? It works with just cProfile, right? Partly because kernprof.py is essential to using line_profiler effectively, but mostly because I'm lazy and don't want to maintain the overhead of two projects for modules as small as these. 
However, kernprof.py is a standalone, pure Python script that can be used to do function profiling with just the Python standard library. You may grab it and install it by itself without line_profiler.

Do I need a C compiler to build line_profiler? kernprof.py? You do need a C compiler for line_profiler. kernprof.py is a pure Python script and can be installed separately, though.

Do I need Cython to build line_profiler? You should not have to if you are building from a released source tarball. It should contain the generated C sources already. If you are running into problems, that may be a bug; let me know. If you are building from a Mercurial checkout or snapshot, you will need Cython to generate the C sources. You will probably need version 0.10 or higher. There is a bug in some earlier versions in how it handles NULL PyObject* pointers.

What version of Python do I need? Both line_profiler and kernprof have been tested with Python 2.4-2.7. It might work with Python 2.3, but does not currently work with Python 3.x.

I get negative line timings! What's going on? There was a bug in 1.0b1 on Windows that resulted in this. It should be fixed in 1.0b2. If you are still seeing negative numbers, please let me know.

To Do

cProfile uses a neat "rotating trees" data structure to minimize the overhead of looking up and recording entries. LineProfiler uses Python dictionaries and extension objects, thanks to Cython. This mostly started out as a prototype that I wanted to play with as quickly as possible, so I passed on stealing the rotating trees for now. As usual, I got it working, and it seems to have acceptable performance, so I am much less motivated to use a different strategy now. Maybe later. Contributions accepted!

Bugs and Such

If you find a bug, or a missing feature you really want added, please post to the enthought-dev mailing list or email the author at <robert.kern@enthought.com>.

Changes

1.0b3

- ENH: Profile generators.
- BUG: Update for compatibility with newer versions of Cython. Thanks to Ondrej Certik for spotting the bug. - BUG: Update IPython compatibility for 0.11+. Thanks to Yaroslav Halchenko and others for providing the updated imports.
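As an aside on the mechanism described in the FAQ above: the per-line bookkeeping hangs off Python's tracing facility. A toy version that counts line hits (instead of timing them) via sys.settrace; the function names here are my own, not line_profiler's:

```python
import sys

hits = {}

def tracer(frame, event, arg):
    # A "line" event fires just before each line executes; this is the
    # same hook on which LineProfiler takes its t_begin/t_end timestamps.
    if event == "line":
        hits[frame.f_lineno] = hits.get(frame.f_lineno, 0) + 1
    return tracer

def target():
    x = 0
    for i in range(3):
        x += i
    return x

sys.settrace(tracer)
result = target()
sys.settrace(None)

print(result)            # 3
print(len(hits) >= 3)    # True: several distinct lines produced events
```

The loop body line shows up once per iteration, which is also why list comprehensions accumulate one hit per element in LineProfiler's output.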
https://bitbucket.org/mcfletch/line_profiler
Pattern Searching | Set 6 (Efficient Construction of Finite Automata)

In the previous post, we discussed the Finite Automata-based pattern searching algorithm. The FA (Finite Automata) construction method discussed in the previous post takes O((m^3)*NO_OF_CHARS) time, but the FA can be constructed in O(m*NO_OF_CHARS) time. In this post, we will discuss the O(m*NO_OF_CHARS) algorithm for FA construction. The idea is similar to the lps (longest prefix suffix) array construction discussed in the KMP algorithm: we use previously filled rows to fill a new row.

The above diagrams represent graphical and tabular representations of the pattern ACACAGA.

Algorithm:

1) Fill the first row. All entries in the first row are always 0 except the entry for the pat[0] character. For the pat[0] character, we always need to go to state 1.
2) Initialize lps as 0. lps for the first index is always 0.
3) Do the following for rows at index i = 1 to M (M is the length of the pattern):
   a) Copy the entries from the row at index equal to lps.
   b) Update the entry for the pat[i] character to i+1.
   c) Update lps as lps = TF[lps][pat[i]], where TF is the 2D array which is being constructed.

Following is the implementation of the above algorithm.
Implementation

#include <stdio.h>
#include <string.h>
#define NO_OF_CHARS 256

/* This function builds the TF table which represents
   Finite Automata for a given pattern */
void computeTransFun(char* pat, int M, int TF[][NO_OF_CHARS])
{
    int i, lps = 0, x;

    // Fill entries in first row
    for (x = 0; x < NO_OF_CHARS; x++)
        TF[0][x] = 0;
    TF[0][pat[0]] = 1;

    // Fill entries in other rows
    for (i = 1; i <= M; i++) {
        // Copy values from row at index lps
        for (x = 0; x < NO_OF_CHARS; x++)
            TF[i][x] = TF[lps][x];

        // Update the entry corresponding to this character
        TF[i][pat[i]] = i + 1;

        // Update lps for next row to be filled
        if (i < M)
            lps = TF[lps][pat[i]];
    }
}

/* Prints all occurrences of pat in txt */
void search(char* pat, char* txt)
{
    int M = strlen(pat);
    int N = strlen(txt);
    int TF[M + 1][NO_OF_CHARS];

    computeTransFun(pat, M, TF);

    // process text over FA.
    int i, j = 0;
    for (i = 0; i < N; i++) {
        j = TF[j][txt[i]];
        if (j == M) {
            printf("\n pattern found at index %d", i - M + 1);
        }
    }
}

/* Driver program to test above function */
int main()
{
    char* txt = "GEEKS FOR GEEKS";
    char* pat = "GEEKS";
    search(pat, txt);
    getchar();
    return 0;
}

Output:

pattern found at index 0
pattern found at index 10

Time Complexity for FA construction is O(M*NO_OF_CHARS). The search code is the same as in the previous post, and its time complexity is O(n).

Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
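For comparison, here is a direct Python port of the same construction (a sketch of my own; the guard `i < m` replaces C's harmless write into the terminating '\0' slot):

```python
NO_OF_CHARS = 256

def compute_trans_fun(pat):
    # Builds the TF table row by row, copying from the row at index lps
    # exactly as the C version above does.
    m = len(pat)
    tf = [[0] * NO_OF_CHARS for _ in range(m + 1)]
    tf[0][ord(pat[0])] = 1          # first row: only pat[0] leaves state 0
    lps = 0
    for i in range(1, m + 1):
        tf[i] = tf[lps][:]          # copy values from row at index lps
        if i < m:
            tf[i][ord(pat[i])] = i + 1   # matching character advances the state
            lps = tf[lps][ord(pat[i])]   # update lps for the next row
    return tf

def search(pat, txt):
    # Runs the text through the automaton, reporting start indices of matches.
    tf = compute_trans_fun(pat)
    m, j, found = len(pat), 0, []
    for i, ch in enumerate(txt):
        j = tf[j][ord(ch)]
        if j == m:
            found.append(i - m + 1)
    return found

print(search("GEEKS", "GEEKS FOR GEEKS"))  # [0, 10]
```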
https://www.geeksforgeeks.org/pattern-searching-set-5-efficient-constructtion-of-finite-automata/?ref=rp
a unique name for that translation unit and that is it; they do not change the linkage of the symbol at all. Linkage is not changed on those because the second phase of two-phase name lookup ignores functions with internal linkage. Also, entities with internal linkage cannot be used as template arguments. So for now, instead of using anonymous namespaces, use static if you do not want a symbol to be exported.

First and foremost: it is fine to delete a null pointer. So constructs that check for null before deleting are simply redundant.

In pointer context, the integer constant zero means "null pointer" - irrespective of the actual binary representation of a null pointer. This means that the choice between 0, 0L and NULL is a question of personal style and getting used to something rather than a technical one - as far as the code in KDE's SVN goes, you will see 0 used more commonly than NULL.

You will encounter four major styles of marking class member variables in KDE, besides unmarked members. Unmarked members are more common in the case of classes that use d-pointers. As it often happens, there is not ...

If you need some constant data of simple data types in several places, you do good by defining it once at a central place, to avoid a mistype in one of the instances. If the data changes, there is also only one place you need to edit. Even if there is only one instance, you do well by defining it elsewhere, to avoid so-called "magic numbers" in the code which are unexplained (cmp. 42). Usually this is done at the top of a file to avoid searching for it.

Define the constant data using the language constructs of C++, not preprocessor instructions like you may be used to from plain C. This way the compiler can help you find mistakes by doing type checking.

// Correct!
static const int AnswerToAllQuestions = 42;

// Wrong!
#define AnswerToAllQuestions 42

If defining a constant array, do not use a pointer as the data type.
Instead, use the data type and append the array symbol with undefined length, [], behind the name. Otherwise you define a variable pointing to some const data. That variable could mistakenly be assigned a new pointer, without the compiler complaining about it. And accessing the array would involve one indirection, because first the value of the variable needs to be read.

// Correct!
static const char SomeString[] = "Example";

// Wrong!
static const char* SomeString = "Example";

// Wrong!
#define SomeString "Example"

You will reduce compile times by forward declaring classes when possible instead of including their respective headers. For example:

#include <QWidget>     // slow
#include <QStringList> // slow
#include <QString>     // slow

class SomeInterface
{
public:
    virtual void widgetAction( QWidget *widget ) =0;
    virtual void stringAction( const QString& str ) =0;
    virtual void stringListAction( const QStringList& strList ) =0;
};

The above should instead be written with forward declarations:

class QWidget;
class QString;
class QStringList;

class SomeInterface
{
public:
    virtual void widgetAction( QWidget *widget ) =0;
    virtual void stringAction( const QString& str ) =0;
    virtual void stringListAction( const QStringList& strList ) =0;
};

(QList is an example of such a container.) When using a const_iterator, also watch out that you are really calling the const version of begin() and end(). Unless your container is actually const itself, this probably will not be the case, possibly causing an unnecessary detach of your container. So basically, whenever you use a const_iterator, initialize it using constBegin()/constEnd() instead, to be on the safe side.

Cache the return of the end() (or constEnd()) method call before doing iteration over large containers. For example:

QList<SomeClass> container;
// code which inserts a large number of elements to the container

QList<SomeClass>::ConstIterator end = container.constEnd();
QList<SomeClass>::ConstIterator itr = container.constBegin();
for ( ; itr != end; ++itr ) {
    // use *itr (or itr.value()) here
}

This avoids the unnecessary creation of the temporary end() (or constEnd()) return object on each loop iteration, largely speeding it up.
When using iterators, always use the pre-increment and pre-decrement operators (i.e., ++itr) unless you have a specific reason not to. The use of post-increment and post-decrement operators (i.e., itr++) causes the creation of a temporary object.

When you want to erase some elements from the list, you might use code similar to this:

QMap<int, Job *>::iterator it = m_activeTimers.begin();
QMap<int, Job *>::iterator itEnd = m_activeTimers.end();
for( ; it != itEnd ; ++it) {
    if(it.value() == job) {
        //A timer for this job has been found. Let's stop it.
        killTimer(it.key());
        m_activeTimers.erase(it);
    }
}

This code will potentially crash because it becomes a dangling iterator after the call to erase(). You have to rewrite the code this way:

QMap<int, Job *>::iterator it = m_activeTimers.begin();
while (it != m_activeTimers.end()) {
    if(it.value() == job) {
        //A timer for this job has been found. Let's stop it.
        killTimer(it.key());
        it = m_activeTimers.erase(it);
    } else {
        ++it;
    }
}

This problem is also discussed in the Qt documentation for QMap::iterator, but it applies to all Qt iterators.

A very "popular" programming mistake is to do a new without a delete, like in this program:

mem_gourmet.cpp

class t
{
public:
    t() {}
};

void pollute()
{
    t* polluter = new t();
}

int main()
{
    while (true) pollute();
}

You see, pollute() instantiates a new object polluter of the class t. Then the variable polluter is lost because it is local, but the content (the object) stays on the heap. I could use this program to render my computer unusable within 10 seconds. To solve this, there are the following approaches:

- Allocate the object on the stack: t* polluter = new t(); would become t polluter;
- Free the memory explicitly: delete polluter;
- Use a smart pointer: std::auto_ptr<t> polluter( new t() );

A tool to detect memory leaks like this is Valgrind.

You can only dynamic_cast to type T from type T2 provided that: ... For instance, we've seen some hard-to-track problems in non-KDE C++ code we're linking with (I think NMM) because of that.
It happened that: some classes in the NMM library did not have well-anchored vtables, so dynamic_casting failed inside the Phonon NMM plugin for objects created in NMM's own plugins.

In this section we will go over some common problems related to the design of Qt/KDE applications. Although the design of modern C++ applications can be very complex, one recurring problem, which is generally easy to fix, is not using the technique of delayed initialization.

First, let us go over some of our most common pet-peeves which affect data structures very commonly seen in Qt/KDE applications. Non-POD ("plain old data") ... If you are reading in a file, it is faster to convert it from the local encoding to Unicode (QString) in one go, rather than line by line. This can be combined with using a timer to read in the blocks in the background, or with creating a local event loop. While one can also use qApp->processEvents(), it is discouraged as it easily leads to subtle yet often fatal problems.

KProcess emits the signals readyReadStandard{Output|Error} as data comes in. A common mistake is reading all available data in the connected slot and converting it to QString right away: the data comes in arbitrarily segmented chunks, so multi-byte characters might be cut into pieces and thus invalidated. Several approaches to this problem exist.

While QString is the tool of choice for many string handling situations, there is one where it is particularly inefficient. If you are pushing about and working on data in QByteArrays, take care not to pass it through methods which take QString parameters and then make QByteArrays from them again. For example:

QByteArray myData;
QString myNewData = mangleData( myData );

QString mangleData( const QString& data )
{
    QByteArray str = data.toLatin1();
    // mangle
    return QString(str);
}

The expensive thing happening here is the conversion to QString, which does a conversion to Unicode internally.
This is unnecessary because the first thing the method does is convert it back using toLatin1(). So if you are sure that the Unicode conversion is not needed, try to avoid inadvertently using QString along the way. The above example should instead be written as:

QByteArray myData;
QByteArray myNewData = mangleData( myData );

QByteArray mangleData( const QByteArray& data )
{
    QByteArray str = data;
    // mangle
    return str;
}
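The same round trip shows up in any language with separate byte and text types. A Python illustration of the pattern (my own analogy, not KDE code), where decoding to str and re-encoding plays the role of the QString detour:

```python
data = b"Example"

def mangle_via_text(raw: bytes) -> bytes:
    # Wasteful: decode bytes to text, work on the text, encode back.
    s = raw.decode("latin-1")
    return s.upper().encode("latin-1")

def mangle_bytes(raw: bytes) -> bytes:
    # Stays in the byte domain the whole time; no conversions at all.
    return raw.upper()

print(mangle_bytes(data))     # b'EXAMPLE'
print(mangle_via_text(data))  # b'EXAMPLE', same result with two wasted conversions
```

In both languages the result is identical; the only difference is the pair of pointless encoding conversions in the middle.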
https://techbase.kde.org/index.php?title=Development/Tutorials/Common_Programming_Mistakes&diff=72925&oldid=40648
Please be aware, I'm new to Python: I'm trying to create a defined function that can convert a list into a string, and allows me to put a separator in. The separator has to be ', '. My current thought process is to add each item from a list to an empty string variable, and then I'm trying to make use of the range function to add a separator in. I'm only wanting to use str() and range().

def list2Str(lisConv, sep = ', '):
    var = ''
    for i in lisConv:
        var = var + str(i)
        # test line
        print(var, "test line")
    var1 = int(var)
    for a in range(var1):
        print(str(var1)[a], sep = ', ')

list1 = [2,0,1,6]
result = list2Str(list1, ', ')
print(result)

Answer:

lst = ['asdf', '123', 'more items...']
print(', '.join([str(x) for x in lst]))

If you wanted to create your own function to convert, you could do the following:

def convert(items, sep):
    n_str = ''
    for i in items:
        n_str += str(i) + sep
    # drop the trailing separator the loop leaves behind
    return n_str[:-len(sep)] if items else n_str
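For completeness, here is a version that meets the question's constraints (only str() and range()) and places the separator between items rather than after every item. The names below are my own, not taken from the answers above:

```python
def list2str(items, sep=', '):
    # Build the string item by item, inserting sep only between items.
    out = ''
    for i in range(len(items)):
        out += str(items[i])
        if i < len(items) - 1:  # no separator after the final item
            out += sep
    return out

print(list2str([2, 0, 1, 6]))  # 2, 0, 1, 6
print(list2str([2, 0, 1, 6]) == ', '.join(str(x) for x in [2, 0, 1, 6]))  # True
```

In practice str.join is both the idiomatic and the faster choice, since repeated += on strings copies the accumulated prefix each time.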
https://codedump.io/share/xcsHxCIGBBvg/1/convert-a-list-into-a-string-and-allow-for-a-separator
To start learning C#, you can create a new project by selecting File -> New Project. Enter your project name, then click the OK button to complete. It's easy, isn't it?

using System;
using System.Collections.Generic;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello, world!");
            Console.ReadLine();
        }
    }
}

Now we'll add a "Hello, world!" string to the main program. The main thread begins when your program starts. To display results on your screen we use the WriteLine method; to read a value from the keyboard you can use the ReadLine method. Now press F5 to run your program, and you will see a console window like the one above.
https://c-sharpcode.com/thread/getting-started-csharp/
CC-MAIN-2020-40
refinedweb
113
68.16
Opened 5 years ago
Closed 5 years ago

#17936 closed Bug (fixed)

SimpleListFilter redirects to ?e=1

Description

I tried to create a custom admin filter, following the example in the Django documentation:

class DecadeBornListFilter(SimpleListFilter):
    ...

But when I choose "in the eighties" or "in the nineties", I get a 302 redirect to /?e=1

Change History (5)

comment:1 Changed 5 years ago by

comment:2 Changed 5 years ago by

admin.py:

class LevelListFilter(SimpleListFilter):
    title = u"decade born"
    parameter_name = 'decade'
    #template = "select_filter.html"

    def lookups(self, request, model_admin):
        return (
            ('60s', u"60-e"),
            ('70s', u"70-e"),
            ('80s', u"80-e"),
            ('90s', u"90-e"),
        )

    def queryset(self, request, queryset):
        if self.value() == '60s':
            return queryset.filter(birthday__year__gte=1960, birthday__year__lte=1969)
        if self.value() == '70s':
            return queryset.filter(birthday__year__gte=1970, birthday__year__lte=1979)
        if self.value() == '80s':
            return queryset.filter(birthday__year__gte=1980, birthday__year__lte=1989)
        if self.value() == '90s':
            return queryset.filter(birthday__year__gte=1990, birthday__year__lte=1999)

class EmployeeAdmin(admin.ModelAdmin):
    list_filter = (LevelListFilter, "sex")

models.py:

# -*- coding: utf-8 -*-
from django.db import models

SEX = (
    ("m", u"male"),
    ("w", u"female")
)

class Employee(models.Model):
    firstname = models.CharField(max_length=100)
    lastname = models.CharField(max_length=100)
    birthday = models.DateField()
    sex = models.CharField(choices=SEX, max_length=1)

    class Meta(object):
        ordering = ["lastname", "firstname"]

    def __unicode__(self):
        return self.firstname

I downloaded trunk and got the error:

FieldError at /admin/new_in_admin/employee/
Join on field 'birthday' not permitted. Did you misspell 'year' for the lookup type?

If I change the code to:

if self.value() == '70s':
    return queryset.filter(birthday__year=1970)

or

if self.value() == '70s':
    return queryset.filter(
        birthday__gte=dt(1970, 1, 1),
        birthday__lte=dt(1979, 12, 31),
    )

everything works.
comment:3 Changed 5 years ago by

Thank you for providing this code sample. I confirm that something seems quite wrong here, either in the admin documentation or in the ORM. I would have imagined that "birthday__year__gte=1960" would be a valid lookup, but maybe that isn't supported by the ORM; if it isn't supported, then the admin documentation sample should be fixed. If it should be supported, then this would be a bug in the ORM. I thought this might have been a regression introduced by [17450], but reverting the change from that commit doesn't make this code sample work either. It's late and my brain is a little fried. As a cautionary measure, I'm marking this ticket as a release blocker until we dig out the real nature of this potential bug.

comment:4 Changed 5 years ago by

Ok, I've verified with the 1.3.X branch and there is no regression here. The ORM simply does not consider "birthday__year__gte" a valid lookup. So I'll just fix the documentation issue. Could you please upgrade to the latest revision of trunk and see whether the issue is resolved, or whether you get a more explicit error message? Also, have you used the exact example given in the documentation, or have you customized it a bit? If you could post the exact code you've used, that would be helpful.
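For illustration, the working date-range variant from comment:2 can be factored into a small helper; this is a sketch, not code from the ticket (the `decade_range` name is mine):

```python
from datetime import date

def decade_range(value):
    """Map a SimpleListFilter value like '60s' to an inclusive
    (start, end) date pair, suitable for birthday__gte / birthday__lte
    filters, since birthday__year__gte is not a valid ORM lookup here."""
    decades = {'60s': 1960, '70s': 1970, '80s': 1980, '90s': 1990}
    year = decades[value]
    return date(year, 1, 1), date(year + 9, 12, 31)

# Inside the filter's queryset() method one would then write something like:
#   start, end = decade_range(self.value())
#   return queryset.filter(birthday__gte=start, birthday__lte=end)
```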
https://code.djangoproject.com/ticket/17936
CC-MAIN-2017-22
refinedweb
508
53.27
- Introduction
- When Is a Web Service Not a Web Service?
- Web Services in a Small to Medium-Sized Enterprise (SME)
- Web Services in Bigger Enterprises
- Web Services in the Large
- Take the Long View

Web Services in a Small to Medium-Sized Enterprise (SME)

Let's look at how you would use Web Services in a typical project where you're building an application that combines new functionality with existing components and services within your organization. In this case, you may want to use Web Services as the distribution mechanism between new clients and servers. You may also want to wrap existing functionality or data sources using Web Services so that you have a consistent distribution mechanism. I'm not going to get into precisely how you access your new or existing data and business functionality. Let's consider instead how you'll plug things together.

It may be rather obvious, but every system that exposes a Web Service will need to be running a web server of some form. Underneath this web server, most people will use either a Java servlet-based Web Service engine, such as Apache Axis, or will use Microsoft's .NET. (Okay, this is a broad generalization; I appeal to developers of Web Services in Perl and Python not to think too badly of me.) Given this assumption, you'll create either Java servlets (and associated classes) or ASP.NET classes in C# or VB.NET. In either case, the two platforms offer various connectivity options to access your existing data or functionality.

If you're creating or consuming Web Services based on such data or functionality, you'll need a description of the Web Service and a way to transport data back and forth. Now to a degree, connecting to a Web Service is not an issue. Everyone is agreed on Simple Object Access Protocol (SOAP) as the way of connecting. Despite this, there have been certain issues when using SOAP as a transport, particularly between different platforms and/or toolkits.
As with any specification-based standard, there have been issues around interpretation and implementation. However, the principal issue has concerned the encoding of the SOAP message and its layout. Sections 5 and 7 of the SOAP specification describe how to encode and lay out a message. However, this predates work on the XML Schema specification. More recent products have introduced encoding based on the use of XML Schemas (called "literal" types), which is not interoperable with the SOAP encoding rules. Also, if you decide to use "document-style" layout rather than "RPC-style" layout, this will also cause interoperability problems. Although recent products and toolkits should work with most variations on this SOAP encoding/RPC versus document/literal style, this has been a common source of interoperability issues. Moving on to the description of your Web Services, Web Services Description Language (WSDL) provides a commonly agreed way of describing Web Services. Although WSDL is flexible and reasonably comprehensive, some degree of thought needs to go into its use. In a typical analysis and development cycle, you'll create a Unified Modeling Language (UML) model containing your business classes and then map this onto "real" components on your chosen platform. Part of this process will involve deciding how components should be distributed physically, which leads to the introduction of Web Service boundaries. If you come from the world of distributed objects, especially from a Java background, you may be used to passing objects by value (that is, the object is serialized down the wire and a copy appears in the remote process). However, this really relies on having the same environment at both the sending and receiving ends. Given that Web Services are meant to be interoperable across platforms and languages, this setup is certainly not guaranteed. 
Hence, when defining parameters and return values, WSDL only describes data structures and their relationships, not functionality. As an example, if you define a method in an ASP.NET Web Service that passes an object parameter, only the publicly accessible data members of this object will form part of the WSDL description. Effectively, any functionality that the object has is "stripped off" at the Web Service boundary. The same applies to private, internal data held by the object. If you want to retrieve "live" objects from your interaction with Web Services, it's your responsibility to intercept the data defined in the WSDL and "reconstitute" the object as it arrives. Although toolkits provide hooks for such functionality, this adds to the code that you would need to write, compared to doing the same thing in RMI or using binary encoded .NET remoting. You might also need to change the design of your object so that more of its internal state is publicly visible, so that this state is passed across the Web Service boundary. This may cause problems where encapsulation is important. There are some other impedance issues when mapping your business components onto Web Services. Take the example of exposing multiple business components using .NET Web Services. You could have a single Web Service that exposes all of the methods in these components. However, you'll lose the grouping and cohesion gained from putting the functionality into separate components. Alternatively, you could go for one Web Service per component. If your components are sufficiently granular, this works quite well. However, consider the case of consuming multiple services under Visual Studio .NET. When you add a Web Reference, it imports the WSDL for the service and generates a client-side proxy for you to use. Each proxy is generated in its own .NET namespace (like a Java package). 
If your client uses two related Web Services that expose the same complex datatype (derived from the same object or struct parameter), this appears in the client under the same name in two different .NET packages, and the compiler won't treat them as equivalent. You can get around this problem by manually generating your client-side proxies, or by changing the WSDL for the two services to use shared types. However, each of these approaches adds a manual step to the development process. This is not intended to be a particular criticism of Visual Studio, simply a reflection that a lot of effort is required to make the use of Web Services "seamless." Think about other things that you may require in a typical application. Any exceptions that your methods raise must be mapped to SOAP faults and then "reconstituted" at the client. If your application requires security between the client and the Web Service, you can use Secure Sockets Layer (SSL) to provide point-to-point authentication and privacy with varying levels of additional work. However, support for other things that you may want to use, such as distributed transactions across Web Services, is not presentunless you want to do it yourself. This leads to a question about what we're trying to achieve. If we really want our Web Services to be platform-independent and reusable, we must sacrifice some level of comfort. This is the sort of lesson that CORBA should have taught us.
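The article's point about behaviour being "stripped off" at a Web Service boundary can be sketched outside any particular toolkit. The following is a toolkit-neutral illustration in Python, not real SOAP machinery (all names are illustrative): only publicly visible data crosses the wire, and the client must "reconstitute" a live object from it.

```python
class Account:
    def __init__(self, owner, balance):
        self.owner = owner            # public state: survives the boundary
        self.balance = balance        # public state: survives the boundary

    def can_withdraw(self, amount):   # behaviour: stripped at the boundary
        return amount <= self.balance

def to_wire(obj):
    """What WSDL describes: a plain data structure, no methods."""
    return {'owner': obj.owner, 'balance': obj.balance}

def reconstitute(data):
    """Client-side hook: rebuild a live object from the wire data."""
    return Account(data['owner'], data['balance'])

server_side = Account('alice', 100)
wire = to_wire(server_side)        # only data travels; no can_withdraw()
client_side = reconstitute(wire)   # behaviour restored by client code
```

The `reconstitute` step is the extra work the article describes: something you get for free with RMI or binary .NET remoting, but must write yourself when the boundary is a cross-platform Web Service.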
http://www.informit.com/articles/article.aspx?p=27353&seqNum=4
CC-MAIN-2019-13
refinedweb
1,188
52.49
Thursday, July 28, 2016

Notification framework

This morning I believed that ticket #1079 should be solved now, and so I did a release in CPAS de Châtelet. But nope, it seems that it is not solved.

The rest of my workday went into finding bugs and writing test cases for the notification framework, partly with some more changes to the API. It was intensive work which required long-term concentration, but now Lino Welfare also has a new tested document, Notifications in Lino Welfare, and the automatic tests in lino_welfare.projects.std.tests.test_notify are now more or less complete. It was really time to write these test cases!

I removed the get_actors_module() method and instead, at startup, set default values for rt.actors from rt.models.

During the release I also stumbled over the following problem, which took me at least two hours.

Supervisor failed to terminate linod

In CPAS de Châtelet they were having a big Lino log file which was filled with lines like the following:

2016-07-24 13:02:44 INFO __init__ : Running job Every 10 seconds do send_pending_emails() (last run: 2016-07-24 13:02:34, next run: 2016-07-24 13:02:44)

I immediately guessed that it had to do with the logger configuration for schedule. The schedule module is clear and simple; it does this:

import logging
logger = logging.getLogger('schedule')

class Job(object):
    def run(self):
        """Run the job and immediately reschedule it."""
        logger.info('Running job %s', self)
        ret = self.job_func()
        self.last_run = datetime.datetime.now()
        self._schedule_next_run()
        return ret

So indeed we must set the schedule logger level to WARNING. lino.core.site.Site.setup_logging() does this now.

I then did a lot of Lino commits because the change "somehow didn't work", and I thought that the problem had to do with the logger configuration.
The actual culprit was supervisor: for some reason (I guess because I had changed the actual name of the linod process to start several times) there were a dozen linod processes running, and of course these processes continued to do their work faithfully…

TIL: when you change the configuration of supervisor, make sure that any old processes have been stopped!

Later I realized that it was not at all inadvertence when playing with the configuration. Supervisor did not terminate the process correctly: it created two processes and killed only one of them. Other Supervisor users helped me to understand why: it was because the linod.sh script spawned a subprocess which (for some reason) was not seen by Supervisor and therefore remained alive. And the problem must be solved by adding an exec to the linod.sh script, as I (now) explain in The Lino Daemon.
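The logging half of the fix (what lino.core.site.Site.setup_logging() now does, per the post) boils down to raising the schedule logger's threshold; a minimal sketch using only the stdlib logging module:

```python
import logging

# The schedule module logs every job run at INFO level on the logger
# named 'schedule'; raising that logger's threshold keeps the
# "Running job ..." lines out of the log file while still letting
# warnings and errors through.
logging.getLogger('schedule').setLevel(logging.WARNING)

# From here on, schedule's internal logger.info('Running job %s', job)
# calls are filtered out.
```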
http://luc.lino-framework.org/blog/2016/0728.html
CC-MAIN-2018-05
refinedweb
454
61.46
Cisco PIX Firewall Command Reference
Version 6.3

Corporate Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 526-4100

THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE.

IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
CCIP, CCSP, the Cisco Arrow logo, the Cisco Powered Network mark, the Cisco Systems Verified logo, Cisco Unity, Follow Me Browsing, FormShare, iQ Net Readiness Scorecard, Networking Academy, and ScriptShare are trademarks of Cisco Systems, Inc.; Changing the Way We Work, Live, Play, and Learn, The Fastest Way to Increase Your Internet Quotient, and iQuick Study are service marks of Cisco Systems, Inc.; and Aironet, ASIST, BPX, Catalyst, CCDA, CCDP, CCIE, CCNA, CCNP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, the Cisco IOS logo, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Empowering the Internet [...] between Cisco and any other company. (0303R)

Contents

Document Objectives
Audience
Document Organization
Document Conventions
Related Documentation
Obtaining Documentation
Cisco.com
Documentation CD-ROM
Ordering Documentation
Documentation Feedback
Obtaining Technical Assistance
Cisco.com
Technical Assistance Center
Cisco TAC Website
Cisco TAC Escalation Center
Obtaining Additional Publications and Information

Ports
Protocols

aaa-server, access-group, access-list, activation-key, alias, arp, auth-prompt, auto-update, banner, ca, capture, clear, clock, conduit, configure, console, copy, crashinfo, debug, dhcpd, dhcprelay, disable, domain-name, dynamic-map, eeprom, enable, established, exit, failover, filter, flashfs, floodguard, fragment, global, help, hostname, http, icmp, igmp, interface, ip address, ip audit, isakmp, kill, logging, login, mac-list, management-access, mgcp, mroute, mtu, multicast, name/names, nameif, nat, ntp, object-group, outbound/apply, pager, password, pdm, perfmon, ping, prefix-list, privilege, quit, reload
rip, route, route-map, service, setup, show, shun, snmp-server, ssh, static, syslog, sysopt, telnet, terminal, tftp-server, timeout, url-block, url-cache, url-server, username, virtual, vpdn, vpnclient, vpngroup, who, write

INDEX

This preface introduces the Cisco PIX Firewall Command Reference and contains the following sections:
• Document Objectives
• Audience
• Document Organization
• Document Conventions
• Related Documentation
• Obtaining Documentation
• Obtaining Technical Assistance
• Obtaining Additional Publications and Information

Document Objectives

This guide contains the commands available for use with the Cisco PIX Firewall to protect your network from unauthorized use and to establish Virtual Private Networks (VPNs) to connect remote sites and users to your network.

Audience

This guide is for network managers who perform any of the following tasks:
• Managing network security
• Configuring firewalls
• Managing default and static routes, and TCP and UDP services

Use this guide with the Cisco PIX Firewall Hardware Installation Guide and the Cisco PIX Firewall and VPN Configuration Guide.

Document Organization

This guide includes the following chapters:
• Chapter 1, “PIX Firewall Software Version 6.3 Commands,” provides you with a quick reference to the commands available in the PIX Firewall software.
• Chapter 2, “Using PIX Firewall Commands,” introduces you to the PIX Firewall commands, access modes, and common port and protocol numbers.
• Chapter 3, “A through B Commands,” provides detailed descriptions of all commands that begin with the letters A or B.
• Chapter 4, “C Commands,” provides detailed descriptions of all commands that begin with the letter C.
• Chapter 5, “D through F Commands,” provides detailed descriptions of all commands that begin with the letters D through F.
• Chapter 6, “G through L Commands,” provides detailed descriptions of all commands that begin with the letters G through L.
• Chapter 7, “M through R Commands,” provides detailed descriptions of all commands that begin with the letters M through R.
• Chapter 8, “S Commands,” provides detailed descriptions of all commands that begin with the letter S.
• Chapter 9, “T through Z Commands,” provides detailed descriptions of all commands that begin with the letters T through Z.

Document Conventions

The PIX Firewall command syntax descriptions use the following conventions.

Command descriptions use these conventions:
• Braces ({ }) indicate a required choice.
• Square brackets ([ ]) indicate optional elements.
• Vertical bars ( | ) separate alternative, mutually exclusive elements.
• Boldface indicates commands and keywords that are entered literally as shown.
• Italics indicate arguments for which you supply values.

Examples use these conventions:
• Examples depict screen displays and the command line in screen font.
• Information you need to enter in examples is shown in boldface screen font.
• Variables for which you must supply a value are shown in italic screen font.

Graphic user interface access uses these conventions:
• Boldface indicates buttons and menu items.
• Selecting a menu item (or screen) is indicated by the following convention: Click Start > Settings > Control Panel.

Note: Means reader take note. Notes contain helpful suggestions or references to material not covered in the manual.

Related Documentation

Use this document in conjunction with the PIX Firewall documentation available online.

Documentation Feedback

You can submit comments electronically on Cisco.com. On the Cisco Documentation home page, click Feedback at the top of the page. You can e-mail your comments to bug-doc@cisco.com.
You can submit your comments by mail by using the response card behind the front cover of your document or by writing to the following address:

Cisco Systems
Attn: Customer Document Ordering
170 West Tasman Drive
San Jose, CA 95134-9883

We appreciate your comments.

Table 1-1 lists the commands that are supported in PIX Firewall software Version 6.3.

This chapter introduces the Cisco PIX Firewall Command Reference and contains the following sections:
• Introduction
• Command Modes
• Ports
• Protocols
• Deprecated Commands

Introduction

This section provides a brief introduction to using PIX Firewall commands and where to go for more information on configuring and using your PIX Firewall. The following table lists some basic PIX Firewall commands.

Tips

Tip: When using the PIX Firewall command-line interface (CLI), you can do the following:
• Check the syntax before entering a command. Enter a command and press the Enter key to view a quick summary, or precede a command with help, as in help aaa.
• Abbreviate commands. For example, you can use the config t command to start configuration mode, the write t command statement to list the configuration, and the write m command to write to Flash memory. Also, in most commands, show can be abbreviated as sh. This feature is called command completion.
• After changing or removing the alias, access-list, conduit, global, nat, outbound, and static commands, use the clear xlate command to make the IP addresses available for access.
• Review possible port and protocol numbers at the IANA websites.
• Create your configuration in a text editor and then cut and paste it into the configuration. PIX Firewall lets you paste in a line at a time or the whole configuration. Always check your configuration after pasting large blocks of text to be sure everything copied.
Command Modes

The PIX Firewall contains a command set based on Cisco IOS technologies and provides configurable command privilege modes based on the following command modes:

• Unprivileged mode. When you first access the firewall, it displays the “>” prompt. This is unprivileged mode, and it lets you view firewall settings. The unprivileged mode prompt appears as follows:

pixfirewall>

• Privileged mode, which displays the “#” prompt and lets you change current settings. Any unprivileged mode command also works in privileged mode. Use the enable command to start privileged mode from unprivileged mode as follows:

pixfirewall> enable
Password:
pixfirewall#

Use the exit or quit commands to exit privileged mode and return to unprivileged mode as follows:

pixfirewall# exit
Logoff

Use the disable command to exit privileged mode and return to unprivileged mode as follows:

pixfirewall# disable
pixfirewall>

• Configuration mode, which displays the “(config)#” prompt and lets you change the firewall configuration. All privileged, unprivileged, and configuration mode commands are available in this mode. Use the configure terminal command to start configuration mode as follows:

pixfirewall# configure terminal
pixfirewall(config)#

Use the exit or quit commands to exit configuration mode and return to privileged mode as follows:

pixfirewall(config)# quit
pixfirewall#

Use the disable command to exit configuration mode and return to unprivileged mode as follows:

pixfirewall(config)# disable
pixfirewall>

Ports

Literal names can be used instead of a numerical port value in access-list commands. The PIX Firewall uses port 1521 for SQL*Net. This is the default port used by Oracle for SQL*Net; however, this value does not agree with IANA port assignments. The PIX Firewall listens for RADIUS on ports 1645 and 1646. If your RADIUS server uses ports 1812 and 1813, you must reconfigure it to listen on ports 1645 and 1646. To assign a port for DNS access, use domain, not dns.
The dns keyword translates into the port value for dnsix.

Note: By design, the PIX Firewall drops DNS packets sent to UDP port 53 (usually used for DNS) that have a packet size larger than 512 bytes.

Literal             TCP or UDP   Value   Description
aol                 TCP          5190    America On-line
bgp                 TCP          179     Border Gateway Protocol, RFC 1163
biff                UDP          512     Used by mail system to notify users that new mail is received
bootpc              UDP          68      Bootstrap Protocol Client
bootps              UDP          67      Bootstrap Protocol Server
chargen             TCP          19      Character Generator
citrix-ica          TCP          1494    Citrix Independent Computing Architecture (ICA) protocol
cmd                 TCP          514     Similar to exec except that cmd has automatic authentication
ctiqbe              TCP          2748    Computer Telephony Interface Quick Buffer Encoding
daytime             TCP          13      Day time, RFC 867
discard             TCP, UDP     9       Discard
domain              TCP, UDP     53      DNS (Domain Name System)
dnsix               UDP          195     DNSIX Session Management Module Audit Redirector
echo                TCP, UDP     7       Echo
exec                TCP          512     Remote process execution
finger              TCP          79      Finger
ftp                 TCP          21      File Transfer Protocol (control port)
ftp-data            TCP          20      File Transfer Protocol (data port)
gopher              TCP          70      Gopher
https               TCP          443     Hyper Text Transfer Protocol (SSL)
h323                TCP          1720    H.323 call signalling
hostname            TCP          101     NIC Host Name Server
ident               TCP          113     Ident authentication service
imap4               TCP          143     Internet Message Access Protocol, version 4
irc                 TCP          194     Internet Relay Chat protocol
isakmp              UDP          500     Internet Security Association and Key Management Protocol
kerberos            TCP, UDP     750     Kerberos
klogin              TCP          543     KLOGIN
kshell              TCP          544     Korn Shell
ldap                TCP          389     Lightweight Directory Access Protocol
ldaps               TCP          636     Lightweight Directory Access Protocol (SSL)
lpd                 TCP          515     Line Printer Daemon - printer spooler
login               TCP          513     Remote login
lotusnotes          TCP          1352    IBM Lotus Notes
mobile-ip           UDP          434     MobileIP-Agent
nameserver          UDP          42      Host Name Server
netbios-ns          UDP          137     NetBIOS Name Service
netbios-dgm         UDP          138     NetBIOS Datagram Service
netbios-ssn         TCP          139     NetBIOS Session Service
nntp                TCP          119     Network News Transfer Protocol
ntp                 UDP          123     Network Time Protocol
pcanywhere-status   UDP          5632    pcAnywhere status
pcanywhere-data     TCP          5631    pcAnywhere data
pim-auto-rp         TCP, UDP     496     Protocol Independent Multicast, reverse path flooding, dense mode
pop2                TCP          109     Post Office Protocol - Version 2
pop3                TCP          110     Post Office Protocol - Version 3
pptp                TCP          1723    Point-to-Point Tunneling Protocol
radius              UDP          1645    Remote Authentication Dial-In User Service
radius-acct         UDP          1646    Remote Authentication Dial-In User Service (accounting)
rip                 UDP          520     Routing Information Protocol
secureid-udp        UDP          5510    SecureID over UDP
smtp                TCP          25      Simple Mail Transport Protocol
snmp                UDP          161     Simple Network Management Protocol
snmptrap            UDP          162     Simple Network Management Protocol - Trap
sqlnet              TCP          1521    Structured Query Language Network
ssh                 TCP          22      Secure Shell
sunrpc (rpc)        TCP, UDP     111     Sun Remote Procedure Call
syslog              UDP          514     System Log
tacacs              TCP, UDP     49      Terminal Access Controller Access Control System Plus
talk                TCP, UDP     517     Talk
telnet              TCP          23      RFC 854 Telnet
tftp                UDP          69      Trivial File Transfer Protocol
time                UDP          37      Time
uucp                TCP          540     UNIX-to-UNIX Copy Program
who                 UDP          513     Who
whois               TCP          43      Who Is
www                 TCP          80      World Wide Web
xdmcp               UDP          177     X Display Manager Control Protocol

Protocols

Literal names can be used instead of a numerical protocol value in access-list commands. Protocol numbers can be viewed online at the IANA website.

Note: Many routing protocols use multicast packets to transmit their data. If you send routing protocols across the PIX Firewall, configure the surrounding routers with the Cisco IOS software neighbor command.
If routes on an unprotected interface are corrupted, the routes transmitted to the protected side of the firewall will pollute routers there as well. The PIX Firewall supports the protocol literal values listed in Table 2-2.

Deprecated Commands

The following commands are no longer used to configure the firewall: sysopt route dnat, sysopt security fragguard, fragguard, and session enable. The sysopt route dnat command is ignored, starting in PIX Firewall software Version 6.2. Instead, overlapping configurations (network addresses and routes) are automatically handled by outside NAT. The sysopt security fragguard and fragguard commands have been replaced by the fragment command. The session enable command is deprecated because the AccessPro router it was intended to support no longer exists.

aaa accounting

Enable, disable, or view LOCAL, TACACS+, or RADIUS user accounting (on a server designated by the aaa-server command).

[no] aaa accounting include | exclude service if_name local_ip local_mask foreign_ip foreign_mask server_tag
show aaa

Syntax Description

accounting: Enable or disable accounting services. Use of this command requires that you previously used the aaa-server command to designate a AAA server.
exclude: Create an exception to a previously stated rule by excluding the specified service from accounting.

Defaults

For protocol/port, the TCP protocol appears as 6, the UDP protocol appears as 17, and so on, and port is the TCP or UDP destination port. A port value of 0 (zero) means all ports. For protocols other than TCP and UDP, the port is not applicable and should not be used.

Usage Guidelines

User accounting services keep a record of which network services a user has accessed. These records are also kept on the designated AAA server. Accounting information is only sent to the active server in a server group. Use the aaa accounting command with the aaa authentication and aaa authorization commands.
The include and exclude options are not backward compatible with previous PIX Firewall versions. If you downgrade to an earlier version, the aaa command statements will be removed from your configuration.

For outbound connections, first use the nat command to determine which IP addresses can access the PIX Firewall. For inbound connections, first use the static and access-list command statements to determine which inside IP addresses can be accessed through the PIX Firewall from the outside network. If you want to allow connections to come from any host, code the local IP address and netmask as 0.0.0.0 0.0.0.0, or 0 0. The same convention applies to the foreign host IP address and netmask; 0.0.0.0 0.0.0.0 means any foreign host.

Tip: The help aaa command displays the syntax and usage for the aaa authentication, aaa authorization, aaa accounting, and aaa proxy-limit commands in summary form.

Related Commands

aaa authentication: Enables, disables, or displays LOCAL, TACACS+, or RADIUS user authentication on a server designated by the aaa-server command, or for PDM user authentication.
aaa authorization: Enables or disables user authorization services.

aaa authentication

Enable, disable, or view LOCAL, TACACS+, or RADIUS user authentication, on a server designated by the aaa-server command, or PDM user authentication.

[no] aaa authentication include | exclude authen_service if_name local_ip local_mask [foreign_ip foreign_mask] server_tag
[no] aaa authentication [serial | enable | telnet | ssh | http] console server_tag
show aaa

Syntax Description

authen_service: Specifies the type of traffic to include or exclude from authentication based on the service option selected.

access authentication: The access authentication service options are as follows: enable, serial, ssh, and telnet. Specify serial for serial console access, telnet for Telnet access, ssh for SSH access, and enable for enable-mode access.
cut-through authentication  The cut-through authentication service options are as follows: telnet, ftp, http, https, icmp/type, proto, tcp/port, and udp/port. The variable proto can be any supported IP protocol value or name: for example, ip or igmp. Only Telnet, FTP, HTTP, or HTTPS traffic triggers interactive user authentication. You can enter an ICMP message type number for type to include or exclude that specific ICMP message type from authentication. For example, icmp/8 includes or excludes type 8 (echo request) ICMP messages. The tcp/0 option enables authentication for all TCP traffic, which includes FTP, HTTP, HTTPS, and Telnet. When a specific port is specified, only the traffic with a matching destination port is included or excluded for authentication. Note that FTP, Telnet, HTTP, and HTTPS are equivalent to tcp/21, tcp/23, tcp/80, and tcp/443, respectively.

If ip is specified, all IP traffic is included or excluded for authentication, depending on whether include or exclude is specified. When all IP traffic is included for authentication, the following are the expected behaviors:

• Before a user (source IP-based) is authenticated, an FTP, Telnet, HTTP, or HTTPS request triggers authentication and all other IP requests are denied.

• After a user is authenticated through FTP, Telnet, HTTP, HTTPS, or virtual Telnet authentication (see the virtual command), all traffic is free from authentication until the uauth timeout.

authentication  Enable or disable user authentication, prompt user for username and password, and verify information with authentication server. When used with the console option, enables or disables authentication service for access to the PIX Firewall console over Telnet or from the Console connector on the PIX Firewall unit. Use of the aaa authentication command requires that you previously used the aaa-server command to designate an authentication server. The aaa authentication command supports HTTP authentication.
The PIX Firewall requires authentication verification of the HTTP server through the aaa authentication http console command before PDM can access the PIX Firewall.

console  Specify that access to the PIX Firewall console require authentication and, optionally, log configuration changes to a syslog server. The maximum password length for accessing the console is 16 characters.
enable  Access verification for the PIX Firewall unit’s privilege mode.

Defaults

If a aaa authentication http console server_tag command statement is not defined, you can gain access to the PIX Firewall (via PDM) with no username and the PIX Firewall enable password (set with the password command). If the aaa commands are defined but the HTTP authentication request times out, which implies the AAA servers may be down or not available, you can gain access to the PIX Firewall using the username pix and the enable password. By default, the enable password is not set.

The PIX Firewall supports authentication usernames up to 127 characters and passwords of up to 16 characters (some AAA servers accept passwords up to 32 characters). A password or username may not contain an “@” character as part of the password or username string, with a few exceptions.

Tip: The help aaa command displays the syntax and usage for the aaa authentication, aaa authorization, aaa accounting, and aaa proxy-limit commands in summary form.

The authentication ports supported for AAA are fixed. We support port 21 for FTP, port 23 for Telnet, and port 80 for HTTP. For this reason, do not use static PAT to reassign ports for services you wish to authenticate. In other words, when the port to authenticate is not one of the three known ports, the firewall rejects the connection instead of authenticating it.

Usage Guidelines

To use the aaa authentication command, you must first designate an authentication server with the aaa-server command.
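As a hedged illustration of console authentication, the following sketch designates a server group and then requires authentication for serial console and PDM (HTTP) access; the group name, server address, and key are assumptions, not values from this reference:

```
aaa-server AuthIn protocol tacacs+
aaa-server AuthIn (inside) host 10.1.1.41 thekey timeout 20
aaa authentication serial console AuthIn
aaa authentication http console AuthIn
```

If the AuthIn servers become unreachable, the fallback behavior described under Defaults applies (username pix with the enable password).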
Also, for each IP address, one aaa authentication command is permitted for inbound connections and one for outbound connections. Use the if_name, local_ip, and foreign_ip variables to define where access is sought and from whom. The address for local_ip is always on the highest security level interface and foreign_ip is always on the lowest.

The aaa authentication command is not intended to mandate your security policy. The authentication servers determine whether a user can or cannot access the system, what services can be accessed, and what IP addresses the user can access. The PIX Firewall interacts with FTP, HTTP, HTTPS, and Telnet to display the credentials prompts for logging in to the network or logging in to exit the network. You can specify that only a single service be authenticated, but this must agree with the authentication server to ensure that both the firewall and server agree.

The include and exclude options are not backward compatible with previous PIX Firewall versions. If you downgrade to an earlier version, these aaa authentication command statements will be removed from your configuration.

Note: When a cut-through proxy is configured, TCP sessions (TELNET, FTP, HTTP, or HTTPS) may have their sequence number randomized even if the norandomseq option is used in the nat or static command. This occurs when a AAA server proxies the TCP session to authenticate the user before permitting access.

The enable option requests a username and password before accessing privileged mode for serial, Telnet, or SSH connections. The ssh option requests a username and password before the first command line prompt on the SSH console connection. The ssh option allows a maximum of three authentication attempts. Telnet access to the PIX Firewall console is available from any internal interface, and from the outside interface with IPSec configured, and requires previous use of the telnet command.
SSH access to the PIX Firewall console is also available from any interface without IPSec configured, and requires previous use of the ssh command. The new ssh option specifies the group of AAA servers to be used for SSH user authentication. The authentication protocol and AAA server IP addresses are defined with the aaa-server command statement.

Similar to the Telnet model, if a aaa authentication ssh console server_tag command statement is not defined, you can gain access to the PIX Firewall console with the username pix and with the PIX Firewall Telnet password (set with the passwd command). If the aaa command is defined but the SSH authentication request times out, which implies the AAA servers may be down or not available, you can gain access to the PIX Firewall using username pix and the enable password (set with the enable password command). By default, the Telnet password is cisco and the enable password is not set. If the console login request times out, you can gain access to the PIX Firewall from the serial console by entering the pix username and the enable password.

The aaa authentication secure-http-client command is used together with an aaa authentication include command statement, where “...” represents your values for authen_service if_name local_ip local_mask [foreign_ip foreign_mask] server_tag.

The following are limitations of the aaa authentication secure-http-client command:

• At runtime, a maximum of 16 HTTPS authentication processes are allowed. If all 16 HTTPS authentication processes are running, the 17th, new HTTPS connection requiring authentication is dropped.

• When uauth timeout 0 is configured (the uauth timeout is set to 0), HTTPS authentication may not work. If a browser initiates multiple TCP connections to load a web page after HTTPS authentication, the first connection is let through but the subsequent connections trigger authentication. As a result, users are presented with an authentication page, continuously, even if the correct username and password are entered each time.
You can work around this by setting the uauth timeout to 1 second with the timeout uauth 0:0:1 command. However, this workaround opens a 1-second window of opportunity that may allow non-authenticated users to go through the firewall if they are coming from the same source IP address.

• Because HTTPS authentication occurs on the SSL port 443, users must not configure an access-list command statement to block traffic from the HTTP client to HTTP server on port 443. Furthermore, if static PAT is configured for web traffic on port 80, it must also be configured for the SSL port. In the following example, the first line configures static PAT for web traffic and the second line must be added to support the HTTPS authentication configuration:

static (inside,outside) tcp 10.132.16.200 www 10.130.16.10 www
static (inside,outside) tcp 10.132.16.200 443 10.130.16.10 443

Enabling Authentication

The aaa authentication command enables or disables the following features:

• User authentication services provided by a TACACS+ or RADIUS server first designated with the aaa-server command. A user starting a connection via FTP, Telnet, or over the World Wide Web is prompted for their username and password. If the username and password are verified by the designated TACACS+ or RADIUS authentication server, the PIX Firewall unit will allow further traffic between the authentication server and the connection to interact independently through the PIX Firewall unit’s “cut-through proxy” feature.

• Administrative authentication services providing access to the PIX Firewall unit's console via Telnet, SSH, or the serial console. Telnet access requires previous use of the telnet command. SSH access requires previous use of the ssh command.

The prompts users see requesting AAA credentials differ between the services that can access the PIX Firewall for authentication: Telnet, FTP, HTTP, and HTTPS.
• HTTP users see a pop-up window generated by the browser itself if aaa authentication secure-http-client is not configured. If aaa authentication secure-http-client is configured, a form loads in the browser to collect the username and password. In either case, if a user enters an incorrect password, the user is reprompted. When the web server and the authentication server are on different hosts, use the virtual command to get the correct authentication behavior.

Authenticated access to the PIX Firewall console has different types of prompts depending on the option you choose with the aaa authentication console command:

• enable option—Allows three tries before stopping with “Access denied.” The enable option requests a username and password before accessing privileged mode for serial or Telnet connections.

• serial option—Causes the user to be prompted continually until successfully logging in. The serial option requests a username and password before the first command line prompt on the serial console connection.

• ssh option—Allows three tries before stopping with "Rejected by Server." The ssh option requests a username and password before the first command line prompt appears.

• telnet option—Causes the user to be prompted continually until successfully logging in. The telnet option forces you to specify a username and password before the first command line prompt of a Telnet console connection.

You can specify an interface name with the aaa authentication command. In previous versions, commands such as the following were specified without an interface name:

aaa authentication include any outbound 0 0 server
aaa authentication exclude outbound perim_net perim_mask server

When a host is configured for authentication, all users on the host must authenticate. The PIX Firewall only accepts 7-bit characters during authentication. After authentication, the client and server can negotiate for 8 bits if required.
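The include and exclude service selectors described under authen_service can be combined; the following is a hedged sketch (the server group name AuthOut is an assumption) that subjects all IP traffic to authentication while exempting ICMP echo requests:

```
aaa authentication include ip outbound 0 0 0 0 AuthOut
aaa authentication exclude icmp/8 outbound 0 0 0 0 AuthOut
```

With ip included, an FTP, Telnet, HTTP, or HTTPS request triggers authentication and all other IP requests are denied until the user authenticates, as described above; the exclude statement carves out type 8 (echo request) ICMP messages.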
During authentication, the PIX Firewall only negotiates Go-Ahead, Echo, and NVT (network virtual terminal).

HTTP Authentication

When using HTTP authentication to a site running Microsoft IIS that has “Basic text authentication” or “NT Challenge” enabled, users may be denied access from the Microsoft IIS server. This occurs because the browser appends the string “Authorization: Basic=Uuhjksdkfhk==” to the HTTP GET commands. This string contains the PIX Firewall authentication credentials. As long as the user repeatedly browses the Internet, the browser resends the “Authorization: Basic=Uuhjksdkfhk==” string to transparently reauthenticate the user.

Multimedia applications such as CU-SeeMe, Intel Internet Phone, MeetingPoint, and MS NetMeeting silently start the HTTP service before an H.323 session is established from the inside to the outside. Network browsers such as Netscape Navigator do not present a challenge value during authentication; therefore, only password authentication can be used from a network browser.

Note: Multimedia programs may fail on the PC and may even crash the PC after establishing outgoing sessions from the inside.

Similar to IPSec, the keyword permit means “yes” and deny means “no.” The aaa command statement list is order-dependent between access-list command statements. If the following commands are entered:

aaa authentication match mylist outbound tacacs
aaa authentication match yourlist outbound tacacs

the PIX Firewall tries to find a match in the mylist access-list command statement group before it tries to find a match in the yourlist access-list command statement group.

Old aaa command configuration and functionality stays the same and is not converted to the access-list command format. Hybrid access control configurations (that is, old configurations combined with new access-list command-based configurations) are not recommended.
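The access-list form can be sketched as follows; the access list name, addresses, and server group tag are assumptions for illustration:

```
access-list mylist permit tcp 10.0.0.0 255.255.255.0 192.168.2.0 255.255.255.0
aaa authentication match mylist outbound AuthOut
```

Traffic permitted by the mylist access-list command statement is subject to authentication against the AuthOut server group; traffic denied by the list is exempt, consistent with the permit-means-yes, deny-means-no convention described above.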
Examples

The following example shows use of the aaa authentication command:

pixfirewall(config)# aaa authentication telnet console radius

The following example lists the new include and exclude options:

aaa authentication include any outbound 172.31.0.0 255.255.0.0 0.0.0.0 0.0.0.0 tacacs+
aaa authentication exclude telnet outbound 172.31.38.0 255.255.255.0 0.0.0.0 0.0.0.0 tacacs+

The following examples demonstrate ways to use the if_name parameter. The PIX Firewall has an inside network of 192.168.1.0, an outside network of 209.165.201.0 (subnet mask 255.255.255.224), and a perimeter network of 209.165.202.128 (subnet mask 255.255.255.224).

This example enables authentication for connections originated from the inside network to the outside network:

aaa authentication include any outbound 192.168.1.0 255.255.255.0 209.165.201.0 255.255.255.224 tacacs+

This example enables authentication for connections originated from the inside network to the perimeter network:

aaa authentication include any outbound 192.168.1.0 255.255.255.0 209.165.202.128 255.255.255.224 tacacs+

This example enables authentication for connections originated from the outside network to the inside network:

aaa authentication include any inbound 192.168.1.0 255.255.255.0 209.165.201.0 255.255.255.224 tacacs+

This example enables authentication for connections originated from the outside network to the perimeter network:

aaa authentication include any inbound 209.165.201.0 255.255.255.224 209.165.202.128 255.255.255.224 tacacs+

This example enables authentication for connections originated from the perimeter network to the outside network:

aaa authentication include any outbound 209.165.202.128 255.255.255.224 209.165.201.0 255.255.255.224 tacacs+

This example specifies that IP addresses 10.0.0.1 through 10.0.0.254 can originate outbound connections and then enables user authentication so that those addresses must enter user credentials to exit the PIX Firewall:

aaa authentication include any outbound 0 0 tacacs+
aaa authentication exclude any outbound 10.0.0.42 255.255.255.255 tacacs+

This example permits inbound access to any IP address in the range of 209.165.201.1 through 209.165.201.30, indicated by the 209.165.201.0 network address (subnet mask 255.255.255.224). All services are permitted by the access list:

static (inside,outside) 209.165.201.0 10.16.1.0 netmask 255.255.255.224
access-list acl_out permit tcp 10.16.1.0 255.255.255.0 209.165.201.0 255.255.255.224
access-group acl_out in interface outside
aaa authentication include any inbound 0 0 AuthIn

Related Commands

aaa authorization  Enables or disables LOCAL or TACACS+ user authorization services.

aaa authorization

Enable or disable LOCAL or TACACS+ user authorization services.

[no] aaa authorization include | exclude svc if_name local_ip local_mask foreign_ip foreign_mask
clear aaa [authorization [include | exclude svc if_name local_ip local_mask foreign_ip foreign_mask]]
show aaa

Syntax Description

authorization  Enable or disable TACACS+ user authorization for services (PIX Firewall does not support RADIUS authorization). The authentication server determines what services the user is authorized to access.
exclude  Create an exception to a previously stated rule by excluding the specified service from authentication, authorization, or accounting to the specified host.
LOCAL  Specifies to use the PIX Firewall local user database for local command authorization (using privilege levels).
match acl_name  Specify an access-list command statement name.
server_tag  The AAA server group tag as defined by the aaa-server command. You can also enter LOCAL for the group tag value and use the local firewall database for AAA services such as local command authorization privilege levels.
svc  The services which require authorization. Use any, ftp, http, telnet, or protocol/port. Use any to provide authorization for all TCP services. To provide authorization for UDP services, use the protocol/port form. Services not specified are authorized implicitly.
(Services specified in the aaa authentication command do not affect the services that require authorization.)

For protocol/port:

• protocol—the protocol (6 for TCP, 17 for UDP, 1 for ICMP, and so on).

• port—the TCP or UDP destination port, or port range. The port can also be the ICMP type; that is, 8 for ICMP echo or ping. A port value of 0 (zero) means all ports. Port ranges only apply to the TCP and UDP protocols, not to ICMP. For protocols other than TCP, UDP, and ICMP the port is not applicable and should not be used.

An example port specification follows:

aaa authorization include udp/53-1024 inside 0 0 0 0

This example enables authorization for DNS lookups to the inside interface for all clients, and authorizes access to any other services that have ports in the range of 53 to 1024.

Note: Specifying a port range may produce unexpected results at the authorization server. PIX Firewall sends the port range to the server as a string with the expectation that the server will parse it out into specific ports. Not all servers do this. In addition, you may want users to be authorized on specific services, which will not occur if a range is accepted.

tacacs_server_tag  Specifies to use a TACACS+ user authentication server.

Usage Guidelines

Except for its use with command authorization, the aaa authorization command requires previous configuration with the aaa authentication command; however, use of the aaa authentication command does not require use of a aaa authorization command. Currently, the aaa authorization command is supported for use with LOCAL and TACACS+ servers but not with RADIUS servers. For each IP address, one aaa authorization command is permitted. If you want to authorize more than one service with aaa authorization, use the any parameter for the service type.
If the first attempt at authorization fails and a second attempt causes a timeout, use the service resetinbound command to reset the client that failed the authorization so that it will not retransmit any connections. An example authorization timeout message in Telnet follows:

Unable to connect to remote host: Connection timed out

User authorization services control which network services a user can access. After a user is authenticated, attempts to access restricted services cause the PIX Firewall unit to verify the access permissions of the user with the designated AAA server.

The include and exclude options are not backward compatible with previous PIX Firewall versions. If you downgrade to an earlier version, the aaa command statements will be removed from your configuration.

Note: RADIUS authorization is supported for use with access-list command statements and for use in configuring a RADIUS server with an acl=acl_name vendor-specific identifier. Refer to the access-list command page for more information. Also see the aaa-server radius-authport commands.

If the AAA console login request times out, you can gain access to the PIX Firewall from the serial console by entering the pix username and the enable password.

The following example enables authorization for DNS lookups from the outside interface:

aaa authorization include udp/53 inbound 0.0.0.0 0.0.0.0

The following example enables authorization of ICMP echo-reply packets arriving at the inside interface from inside hosts:

aaa authorization include 1/0 outbound 0.0.0.0 0.0.0.0

This means that users will not be able to ping external hosts if they have not been authenticated using Telnet, HTTP, or FTP.
The following example enables authorization for ICMP echoes (pings) only that arrive at the inside interface from an inside host:

aaa authorization include 1/8 outbound 0.0.0.0 0.0.0.0

Related Commands

aaa authentication  Enables, disables, or displays LOCAL, TACACS+, or RADIUS user authentication on a server designated by the aaa-server command, or for PDM user authentication.

aaa mac-exempt

Exempts a list of MAC addresses from authentication and authorization.

Syntax Description

id  A MAC access list number (configured with the mac-list command).

Defaults

None.

Command Modes

The aaa mac-exempt match id command is available in configuration mode.

Usage Guidelines

The aaa mac-exempt match id command exempts a list of MAC addresses from authentication and authorization.

Note: When configuring mac-exempt, do not use the same IP address for two MACs. If a mac-exempt command is configured for two MACs, M1 and M2, and both attempt to use the same IP address, only the traffic from M1 is permitted. If a mac-exempt command is configured for only one of M1 and M2, the traffic from the second host is allowed to pass, and a syslog message alerting you to a possible spoof attack is generated.

pixfirewall(config)# aaa ?
Usage: [no] aaa authentication|authorization|accounting include|exclude <svc> <if_name> <l_ip> <l_mask> [<f_ip> <f_mask>] <server_tag>
       [no] aaa authentication serial|telnet|ssh|http|enable console <server_tag>
       [no] aaa authentication|authorization|accounting match <acl_name> <if_name> <server_tag>
       [no] aaa authorization command {LOCAL | tacacs_server_tag}
       aaa proxy-limit <proxy limit> | disable
       [no] aaa mac-exempt match <mcl-id>

Related Commands

aaa authentication  Enable, disable, or view LOCAL, TACACS+, or RADIUS user authentication, on a server designated by the aaa-server command, or PDM user authentication.
aaa authorization  Enable or disable LOCAL or TACACS+ user authorization services.
access-list  Create an access list, or use downloadable access lists. (Downloadable access lists are supported for RADIUS servers only.)
mac-list  Adds a list of MAC addresses using a first match search, and used by the firewall VPN client in performing MAC-based authentication.

aaa proxy-limit

Specifies the number of concurrent proxy connections allowed per user.

Usage Guidelines

The aaa proxy-limit command enables you to manually configure the uauth session limit by setting the maximum number of concurrent proxy connections allowed per user. By default, this value is set to 16. If a source address is a proxy server, consider excluding this IP address from authentication or increasing the number of allowable outstanding AAA requests.

The show aaa proxy-limit command displays the number of outstanding authentication requests allowed, or indicates that the proxy limit is disabled if disabled.

Examples

The following example shows how to set and display the maximum number of outstanding authentication requests allowed:

pixfirewall(config)# aaa proxy-limit 6
pixfirewall(config)# show aaa proxy-limit
aaa proxy-limit 6

Related Commands

aaa authentication  Enable, disable, or view LOCAL, TACACS+, or RADIUS user authentication, on a server designated by the aaa-server command, or PDM user authentication.
aaa authorization  Enable or disable LOCAL or TACACS+ user authorization services.
aaa-server  Specifies a AAA server.

aaa-server

Defines the AAA server group.

show aaa-server

Syntax Description

aaa-server  Specifies a AAA server or up to 14 groups of servers with a maximum of 14 servers each. Certain types of AAA services can be directed to different servers. Services can also be set up to fail over to multiple servers.
acct_port  RADIUS accounting port number. The default is 1646.
auth_port  RADIUS authentication port number. The default is 1645.
debug radius session  Captures RADIUS session information and attributes for sent and received RADIUS packets.
host server_ip  The IP address of the TACACS+ or RADIUS server.
if_name  The interface name on which the server resides.
key  A case-sensitive, alphanumeric keyword of up to 127 characters that is the same value as the key on the TACACS+ server. Any characters entered past 127 are ignored. The key is used between the client and server for encrypting data between them. The key must be the same on both the client and server systems. Spaces are not permitted in the key, but other special characters are.
no aaa-server  Unbinds a AAA server from an interface or host.
protocol auth_protocol  The type of AAA server, either tacacs+ or radius.
radius-acctport  Sets the port number of the RADIUS server which the PIX Firewall unit will use for accounting functions. The default port number used for RADIUS accounting is 1646.
radius-authport  Sets the port number of the RADIUS server which the PIX Firewall will use for authentication functions. The default port number used for RADIUS authentication is 1645.
server_tag  An alphanumeric string which is the name of the server group. Use the server_tag in the aaa command to associate aaa authentication and aaa accounting command statements to a AAA server. Up to 14 server groups are permitted. However, LOCAL cannot be used with the aaa-server command because LOCAL is predefined by the PIX Firewall.
timeout seconds  The timeout interval for the request. This is the time after which the PIX Firewall gives up on the request to the primary AAA server. If there is a standby AAA server, the PIX Firewall will send the request to the backup server. The retransmit timeout is currently set to 10 seconds and is not user configurable.

Defaults

By default, the PIX Firewall listens for RADIUS on ports 1645 for authentication and 1646 for accounting. (The default ports 1645 for authentication and 1646 for accounting are as defined in RFC 2058.)
The default configuration provides the following aaa-server command protocols:

aaa-server TACACS+ protocol tacacs+
aaa-server RADIUS protocol radius
aaa-server LOCAL protocol local

The default timeout value is 5 seconds. Some AAA servers accept passwords up to 32 characters, but the PIX Firewall allows passwords up to 16 characters only.

Usage Guidelines

The aaa-server command lets you specify AAA server groups. Other aaa commands reference the server tag group defined by the aaa-server command server_tag parameter. This is a global setting that takes effect when the TACACS+ or RADIUS service is started.

Note: When a cut-through proxy is configured, TCP sessions (TELNET, FTP, or HTTP) may have their sequence number randomized even if the norandomseq option is used in the nat or static command. This occurs when a AAA server proxies the TCP session to authenticate the user before permitting access.

AAA server groups are defined by a tag name that directs different types of traffic to each authentication server. If the first authentication server in the list fails, the AAA subsystem fails over to the next server in the tag group. You can have up to 14 tag groups and each group can have up to 14 AAA servers for a total of up to 196 AAA servers. If accounting is in effect, the accounting information goes only to the active server.

The show aaa-server command displays the AAA server configuration.

Examples

aaa authentication include any inbound 0 0 0 0 AuthIn
aaa authentication include any outbound 0 0 0 0 AuthOut

The following example lists the commands that can be used to establish an Xauth crypto map:

ip address inside 10.0.0.1 255.255.255.0
ip address outside 168.20.1.5 255.255.255.0
ip local pool dealer 10.1.2.1-10.1.2.254
nat (inside) 0 access-list 80
aaa-server TACACS+

The aaa-server command is used with the crypto map command to establish an authentication association so that VPN clients are authenticated when they access the PIX Firewall.
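The group definition that the AuthIn and AuthOut tags above would depend on can be sketched as follows; the addresses, key, and tag names are assumptions for illustration:

```
aaa-server AuthIn protocol tacacs+
aaa-server AuthIn (inside) host 10.1.1.41 thekey timeout 20
aaa-server AuthOut protocol radius
aaa-server AuthOut (inside) host 10.1.1.50 radiuskey timeout 10
```

Each tag first receives a protocol statement and then one or more host statements; if the first host in a group fails to respond, the AAA subsystem fails over to the next host in that group, as described above.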
Related Commands

aaa authentication  Enable, disable, or view LOCAL, TACACS+, or RADIUS user authentication, on a server designated by the aaa-server command, or PDM user authentication.
aaa authorization  Enable or disable LOCAL or TACACS+ user authorization services.
crypto ipsec  Creates, displays, or deletes IPSec security associations, security association global lifetime values, and global transform sets.
isakmp  Negotiates IPSec security associations and enables IPSec secure communications.

access-group

Binds the access list to an interface.

Usage Guidelines

When a packet is denied by the access list bound to an interface, the firewall discards the packet and generates the following syslog message:

%PIX-4-106019: IP packet from source_addr to destination_addr, protocol protocol received from interface interface_name deny by access-group id

PIX Firewall Version 6.3(2) adds support for the per-user-override option, which allows downloaded access lists to override the access list applied to the interface. If the per-user-override optional argument is not present, the PIX Firewall preserves the existing filtering behavior. When per-user-override is present, the PIX Firewall allows the permit or deny status from the per-user access list (if one is downloaded) associated with a user to override the permit or deny status from the access-group command associated access list. Additionally, the following rules are observed:

• At the time a packet arrives, if there is no per-user access list associated with the packet, the interface access list will be applied.

• The per-user access list is governed by the timeout value specified by the uauth option of the timeout command, but it can be overridden by the AAA per-user session timeout value.

• Existing access list log behavior will be the same. For example, if user traffic is denied because of a per-user access list, syslog message 109015 will be logged. If user traffic is permitted, no syslog message is generated. The log option in the per-user access list will have no effect.
Always use the access-list command with the access-group command.

Note: The use of the access-group command overrides the conduit and outbound command statements for the specified interface_name.

The no access-group command unbinds the access-list from the interface interface_name. The show access-group command displays the current access list bound to the interfaces. The clear access-group command removes all entries from an access list indexed by access-list. If access-list is not specified, all access-list command statements are removed from the configuration.

static (inside,outside) 209.165.201.3 10.1.1.3 netmask 255.255.255.255
access-list acl_out permit tcp any host 209.165.201.3 eq www
access-group acl_out in interface outside

The static command statement provides a global address of 209.165.201.3 for the web server at 10.1.1.3. The access-list command statement lets any host access the global address using port 80. The access-group command specifies that the access-list command statement applies to traffic entering the outside interface.

Related Commands

access-list  Creates an access list, or uses a downloadable access list.

access-list

Create an access list, or use a downloadable access list. (Downloadable access lists are supported for RADIUS servers only.)

[no] access-list id [line line-num] {deny | permit} icmp {source_addr | local_addr} {source_mask | local_mask} | interface if_name | object-group network_obj_grp_id {destination_addr | remote_addr} {destination_mask | remote_mask} | interface if_name | object-group network_obj_grp_id [icmp_type | object-group icmp_type_obj_grp_id] [log [[disable | default] | [level]]] [interval secs]]

Syntax Description

alert-interval secs  Specifies the time interval, from 1 to 3600 seconds, for generating syslog message 106101, which alerts you that the firewall has reached a deny flow maximum. In other words, when the deny flow maximum is reached, another 106101 message is generated if it has been at least secs seconds since the last 106101 message. If this option is not specified, the default interval is 300 seconds.
compiled When used in conjunction with the access-list command, this turns on TurboACL unless the no qualifier is used, in which case the command no access-list id compiled turns off TurboACL for that access list. To use TurboACL globally, enter the access-list compiled command; to globally turn off TurboACL, enter the no access-list compiled command. After TurboACL has been globally configured, individual access lists or groups can have TurboACL enabled or disabled using individual [no] access-list id compiled commands. TurboACL is compiled only if the number of access list elements is greater than or equal to 19. log [[disable | default] | level] When the log option is specified, it generates syslog message 106100 for the access list element (ACE) to which it is applied. (Syslog message 106100 is generated for every matching permit or deny ACE flow passing through the firewall.) The first-match flow is cached. Subsequent matches increment the hit count displayed in the show access-list command (hitcnt) for the ACE, and new 106100 messages will be generated at the end of the interval defined by interval secs if the hit count for the flow is not zero. The default ACL logging behavior (the log keyword not specified) is that if a packet is denied, then message 106023 is generated, and if a packet is permitted, then no syslog message is generated. An optional syslog level (0 to 7) may be specified for the generated syslog messages (106100). If no level is specified, the default level is 6 (informational) for a new ACE. If the ACE already exists, then its existing log level remains unchanged. If the log disable option is specified, access list logging is completely disabled. No syslog message, including message 106023, will be generated. The log default option restores the default access list logging behavior. mask The netmask. obj_grp_id An existing object group. object-group Specifies an object group.
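As a sketch of the log option syntax described above, the following hypothetical ACE logs matching flows with syslog message 106100 at level 7 and reports nonzero hit counts every 600 seconds (the list name and address are illustrative):

```
access-list acl_out permit tcp any host 209.165.201.1 eq www log 7 interval 600
```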
Refer to the object-group command for information on how to configure object groups. operator The operator compares ports for the source IP address (sip) or destination IP address (dip). Possible operands include lt for less than, gt for greater than, eq for equal, neq for not equal, and range for an inclusive range. Use the access-list command without an operator and port to indicate all ports by default. For example, access-list acl_out permit tcp any host 209.165.201.1 Use eq and a port to permit or deny access to just that port. For example, use eq ftp to permit or deny access only to FTP. access-list acl_out deny tcp any host 209.165.201.1 eq ftp Use lt and a port to permit or deny access to all ports less than the port you specify. For example, use lt 2025 to permit or deny access to the well-known ports (1 to 1024). access-list acl_dmz1 permit tcp any host 192.168.1.1 lt 1025 Use gt and a port to permit or deny access to all ports greater than the port you specify. For example, use gt 42 to permit or deny ports 43 to 65535. access-list acl_dmz1 deny udp any host 192.168.1.2 gt 42 Use neq and a port to permit or deny access to every port except the ports that you specify. For example, use neq 10 to permit or deny ports 1 to 9 and 11 to 65535. access-list acl_dmz1 deny tcp any host 192.168.1.3 neq 10 Use range and a port range to permit or deny access to only those ports named in the range. For example, use range 10 1024 to permit or deny access only to ports 10 through 1024. All other ports are unaffected. The use of port ranges can dramatically increase the number of IPSec tunnels. For example, if a port range of 5000 to 65535 is specified for a highly dynamic protocol, up to 60,535 tunnels can be created. permit When used with the access-group command, the permit option selects a packet to traverse the PIX Firewall. By default, PIX Firewall denies all inbound or outbound packets unless you specifically permit access.
When used with a crypto map command statement, permit selects a packet for IPSec protection. The permit option causes all IP traffic that matches the specified conditions to be protected by IPSec using the policy described by the corresponding crypto map command statements. prefix The network number. For more information, refer to the prefix-list command. port Services you permit or deny access to. Specify services by the port that handles them, such as smtp for port 25, www for port 80, and so on. You can specify ports by either a literal name or a number in the range of 0 to 65535. You can view valid port numbers online. See “Ports” in Chapter 2, “Using PIX Firewall Commands,” for a list of valid port literal names. In port ranges (for example, ftp h323), you can also specify numbers. protocol Name or number of an IP protocol. It can be one of the keywords icmp, ip, tcp, or udp, or an integer in the range 1 to 254 representing an IP protocol number. To match any Internet protocol, including ICMP, TCP, and UDP, use the keyword ip. remark text The text of the remark to add before or after an access-list command statement, up to 100 characters in length. remote_addr IP address of the network or host remote to the PIX Firewall. Specify a remote_addr when the access-list command statement is used in conjunction with a crypto access-list command statement, a nat 0 access-list command statement, or a vpdn group split-tunnel command statement. remote_mask Netmask bits (mask) to be applied to remote_addr, if the remote address is a network mask. source_addr Address of the network or host from which the packet is being sent. Use this field when an access-list command statement is used in conjunction with an access-group command statement, or with the aaa match access-list command and the aaa authorization command. source_mask Netmask bits (mask) to be applied to source_addr, if the source address is for a network mask.
Defaults By default, PIX Firewall denies all inbound or outbound packets unless you specifically permit access. TurboACL is used only if the number of access list elements is greater than or equal to 19. The default time interval at which to generate syslog message 106100 is 300 seconds. The default time interval for a deny flow maximum syslog message (106101) is 300 seconds. The default ACL logging behavior is to generate syslog message 106023 for denied packets. When the log option is specified, the default level for syslog message 106100 is 6 (informational). Usage Guidelines The access-list command lets you specify if an IP address is permitted or denied access to a port or protocol. In this document, one or more access-list command statements with the same access list name are referred to as an “access list.” Access lists associated with IPSec are known as “crypto access lists.” By default, all access-list commands have an implicit deny unless you explicitly specify permit. In other words, by default, all access in an access list is denied unless you explicitly grant access using a permit statement. Note Do not use the string “multicastACL” following the name of a PIX Firewall interface in an access-list name because this is a reserved keyword used by PIX Device Manager (PDM). Additionally, you can use the object-group command to group access lists like any other network object. Use the following guidelines for specifying a source, local, or destination address: • Use a 32-bit quantity in four-part, dotted-decimal format. • Use the keyword any as an abbreviation for an address and mask of 0.0.0.0 0.0.0.0. This keyword is normally not recommended for use with IPSec. • Use host address as an abbreviation for a mask of 255.255.255.255. Use the following guidelines for specifying a network mask: • Do not specify a mask if the address is for a host; if the destination address is for a host, use the host parameter before the address. 
For example: access-list acl_grp permit tcp any host 192.168.1.1 • If the address is a network address, specify the mask as a 32-bit quantity in four-part, dotted-decimal format. Place zeros in the bit positions you want to ignore. • Remember that you specify a network mask differently than with the Cisco IOS software access-list command. With PIX Firewall, use 255.0.0.0 for a Class A address, 255.255.0.0 for a Class B address, and 255.255.255.0 for a Class C address. If you are using a subnetted network address, use the appropriate network mask. For example: access-list acl_grp permit tcp any 209.165.201.0 255.255.255.224 If appropriate, after you have defined an access list, bind it to an interface using the access-group command. For IPSec use, bind it with a crypto ipsec command statement. In addition, you can bind an access list with the RADIUS authorization feature (described in the next section). The access-list command supports the sunrpc service. The show access-list command lists the access-list command statements in the configuration and the hit count of the number of times each element has been matched during an access-list command search. Additionally, it displays the number of access list statements in the access list and indicates whether or not the list is configured for TurboACL. (If the list has fewer than 19 access control entries, then it is marked to be turbo-configured but is not actually configured for TurboACL until there are 19 or more entries.) The show access-list source_addr option filters the show output so that only those access-list elements that match the source IP address (or with any as source IP address) are displayed. The clear access-list command removes all access-list command statements from the configuration or, if specified, access lists by their id. The clear access-list id counters command clears the hit count for the specified access list.
The no access-list command removes an access-list command from the configuration. If you remove all the access-list command statements in an access list, the no access-list command also removes the corresponding access-group command from the configuration. Note The aaa, crypto map, and icmp commands make use of the access-list command statements. The following example illustrates the use of access list based logging in an ICMP context: 1. An inbound ICMP echo request (1.1.1.1 -> 192.168.1.1) arrives on the outside interface. 2. An ACL called outside-acl is applied for the access check. 3. The packet is permitted by the first ACE of outside-acl, which has the log option enabled. 4. The log flow (ICMP, 1.1.1.1, 0, 192.168.1.1, 8) has not been cached, so the following syslog message is generated and the log flow is cached: 106100: access-list outside-acl permitted icmp outside/1.1.1.1(0) -> inside/192.168.1.1(8) hit-cnt 1 (first hit) 5. Twenty such packets arrive on the outside interface within the next 10 minutes (600 seconds). Because the log flow has been cached, the log flow is located and the hit count of the log flow is incremented for each packet. 6. At the end of the 10th minute, the following syslog message is generated and the hit count of the log flow is reset to 0: 106100: access-list outside-acl permitted icmp outside/1.1.1.1(0) -> inside/192.168.1.1(8) hit-cnt 20 (600-second interval) 7. No such packets arrive on the outside interface within the next 10 minutes, so the hit count of the log flow remains 0. 8. At the end of the 20th minute, the cached flow (ICMP, 1.1.1.1, 0, 192.168.1.1, 8) is deleted because of the 0 hit count. To disable a log option without having to remove the ACE, use access-list id log disable. When removing an access control element (ACE) with a log option enabled using a no access-list command, it is not necessary to specify all the log options. The ACE is removed as long as its permit or deny rule is used to uniquely identify it.
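The walkthrough above assumes a first ACE in outside-acl along the lines of the following sketch (reconstructed for illustration; echo matches ICMP type 8, and the interval shown matches the 10-minute reporting cycle in the example):

```
access-list outside-acl permit icmp any host 192.168.1.1 echo log interval 600
```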
However, the removal of an ACE (with a log option enabled) does not remove the associated cached flows. You must remove the entire access control list (ACL) to remove the cached flows. When a cached flow is flushed due to the removal of an ACL, a syslog message will be generated if the hit count of the flow is non-zero. The clear access-list command removes all the cached flows. RADIUS Authorization PIX Firewall allows a RADIUS server to send user group attributes to the PIX Firewall in the RADIUS authentication response message. Additionally, the PIX Firewall allows downloadable access lists from the RADIUS server. For example, you can configure an access list on a Cisco Secure ACS server and download it to the PIX Firewall during RADIUS authorization. After the PIX Firewall authenticates a user, it can then use the CiscoSecure acl attribute returned by the authentication server to identify an access list for a given user group. To maintain consistency, PIX Firewall also provides the same functionality for TACACS+. To restrict users in a department to three servers and deny everything else, configure access-list command statements that permit access to the three servers, and configure the AAA server to return the attribute acl=id to identify the access-list identification name. The PIX Firewall gets the acl=id from CiscoSecure and extracts the ACL number from the attribute string, which it places in a user’s uauth entry. When a user tries to open a connection, PIX Firewall checks the access list in the user’s uauth entry, and depending on the permit or deny status of the access list match, permits or denies the connection. The access list in the uauth entry thus determines which network services the user is permitted or denied access to. If you want to specify that only users logging in from a given subnet may use the specified services, specify the subnet instead of using any. Note An access list used for RADIUS authorization does not require an access-group command to bind the statements to an interface.
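A minimal sketch of the three-server restriction described above, assuming invented server addresses and the hypothetical list id 115 (the AAA server would then return acl=115):

```
access-list 115 permit ip any host 10.1.2.10
access-list 115 permit ip any host 10.1.2.11
access-list 115 permit ip any host 10.1.2.12
```

All other traffic from the authenticated user is denied by the implicit deny at the end of the list.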
TurboACL On the PIX Firewall, TurboACL is turned on globally with the command access-list compiled (and turned off globally by the command no access-list compiled). The PIX Firewall default mode is TurboACL off (no access-list compiled), and TurboACL is active only on access lists with 19 or more entries. The minimum amount of Flash memory required to run TurboACL is 2.1 MB. If memory allocation fails, the TurboACL lookup tables will not be generated. Note Use TurboACL only on PIX Firewall platforms that have 16 MB or more of Flash memory. Consequently, TurboACL is not supported on the PIX 501 because it has 8 MB of Flash memory. If TurboACL is configured, some access control list or access control list group modifications can trigger regeneration of the TurboACL internal configuration. Depending on the extent of TurboACL configuration(s), this could noticeably consume CPU resources. Consequently, we recommend modifying turbo-compiled access lists during non-peak system usage hours. For more information on how to use TurboACL, refer to the Cisco PIX Firewall and VPN Configuration Guide, Version 6.2 or higher. Usage Notes 1. The clear access-list command automatically unbinds an access list from a crypto map command or interface. The unbinding of an access list from a crypto map command can lead to a condition that discards all packets because the crypto map command statements referencing the access list are incomplete. To correct the condition, either define other access-list command statements to complete the crypto map command statements or remove the crypto map command statements that pertain to the access-list command statement. Refer to the crypto map command for more information. 2. Access control lists that are dynamically updated on the PIX Firewall by a AAA server can only be shown using the show access-list command. The write command does not save or display these updated lists. 3. The access-list command operates on a first match basis. 4.
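For example, following the commands described above, TurboACL can be enabled globally and then disabled for a single list (the list name is hypothetical):

```
access-list compiled
no access-list inside_acl compiled
```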
If you specify an access-list command statement and bind it to an interface with the access-group command statement, by default, all traffic inbound to that interface is denied. You must explicitly permit traffic. Note that “inbound” in this context means traffic passing through the interface, rather than the more typical PIX Firewall usage of inbound meaning traffic passing from a lower security level interface to a higher security level interface. 5. Always permit access first and then deny access afterward. If the host entries match, then use a permit statement, otherwise use the default deny statement. You only need to specify additional deny statements if you need to deny specific hosts and permit everyone else. 6. You can view security levels for interfaces with the show nameif command. 7. The ICMP message type (icmp_type) option is ignored in IPSec applications because the message type cannot be negotiated with ISAKMP. 8. Only one access list can be bound to an interface using the access-group command. 9. If you specify the permit option in the access list, the PIX Firewall continues to process the packet. If you specify the deny option in the access list, PIX Firewall discards the packet and generates the following syslog message. %PIX-4-106019: IP packet from source_addr to destination_addr, protocol protocol received from interface interface_name deny by access-group id The access-list command uses the same syntax as the Cisco IOS software access-list command, except that the PIX Firewall uses a subnet mask whereas Cisco IOS software uses a wildcard mask. For example, a wildcard mask of 0.0.0.255 in the Cisco IOS software access-list command would be specified as the subnet mask 255.255.255.0 in the PIX Firewall access-list command. 10. We recommend that you do not use the access-list command with the conduit and outbound commands.
While using these commands together will work, the way in which these commands operate may cause debugging issues because the conduit and outbound commands operate from one interface to another whereas the access-list command used with the access-group command applies only to a single interface. If these commands must be used together, PIX Firewall evaluates the access-list command before checking the conduit and outbound commands. 11. Refer to the Cisco PIX Firewall and VPN Configuration Guide for a detailed description about using the access-list command to provide server access and to restrict outbound user access. 12. Refer to the aaa-server radius-acctport and aaa-server radius-authport commands to verify or change port settings. If you specify an ICMP message type for use with IPSec, PIX Firewall ignores it. For example, given access-list 10 permit icmp any any echo-reply, if IPSec is enabled such that a crypto map command references the access list id for this access-list command, then the echo-reply ICMP message type is ignored. • Process inbound traffic to filter out and discard traffic that IPSec protects. • Determine whether or not to accept requests for IPSec security associations on behalf of the requested data flows when processing IKE negotiation from the IPSec peer. (Negotiation is only done for crypto map command statements with the ipsec-isakmp option.) For a peer’s initiated IPSec negotiation to be accepted, it must specify a data flow that is permitted by a crypto access list associated with an ipsec-isakmp crypto map entry. You can associate a crypto access list with an interface by defining the corresponding crypto map command statement and applying the crypto map set to an interface. Different access lists must be used in different entries of the same crypto map set. However, both inbound and outbound traffic will be evaluated against the same “outbound” IPSec access list.
Therefore, the access list’s criteria are applied in the forward direction to traffic exiting your PIX Firewall and in the reverse direction to traffic entering your PIX Firewall. These different access lists are then used in different crypto map entries that specify different IPSec policies. We recommend that you configure “mirror image” crypto access lists for use by IPSec and that you avoid using the any keyword. See the Cisco PIX Firewall and VPN Configuration Guide for more information. If you configure multiple statements for a given crypto access list, in general, the first permit statement matched will be the statement used to determine the scope of the IPSec security association. That is, the IPSec security association will be set up to protect traffic that meets the criteria of the matched statement only. Later, if traffic matches a different permit statement of the crypto access list, a new, separate IPSec security association will be negotiated to protect traffic matching the newly matched access list command statement. Some services, such as FTP, require two access-list command statements, one for port 20 and another for port 21, to properly encrypt FTP traffic. Examples The following example creates a numbered access list that specifies a Class C subnet for the source and a Class C subnet for the destination of IP packets. Because the access-list command is referenced in the crypto map command statement, PIX Firewall encrypts all IP traffic that is exchanged between the source and destination subnets.
access-list 101 permit ip 172.21.3.0 255.255.255.0 172.22.2.0 255.255.255.0 access-group 101 in interface outside crypto map mymap 10 match address 101 The next example permits only the ICMP message type echo-reply into the outside interface: access-list acl_out permit icmp any any echo-reply access-group acl_out in interface outside The following example shows how access list entries (ACEs) are numbered by the firewall: pixfirewall(config)# show access-list ac access-list ac; 2 elements access-list ac line 1 permit ip any any (hitcnt=0) access-list ac line 2 permit tcp any any (hitcnt=0) The following shows the result of inserting a remark and an access list statement at a specific line number: pixfirewall(config)# show access-list ac access-list ac; 3 elements access-list ac line 1 permit ip any any (hitcnt=0) access-list ac line 2 permit tcp any any (hitcnt=0) access-list ac line 3 remark This comment describes the ACE line 3 access-list ac line 4 permit tcp 172.0.0.0 255.0.0.0 any (hitcnt=0) The show access-list output also shows the total number of cached ACL log flows (total), the number of cached deny-flows (denied), and the maximum number of allowed deny-flows. activation-key Updates the activation key on your PIX Firewall and checks the activation key running on your PIX Firewall against the activation key stored in the Flash memory of the PIX Firewall. activation-key activation-key-four-tuple show activation-key Syntax Description activation-key Updates the PIX Firewall activation key unless there is a mismatch between the Flash memory and running PIX Firewall software versions. activation-key-four-tuple A four-element hexadecimal string with one space between each element. For example: 0xe02888da 0x4ba7bed6 0xf1c123ae 0xffd8624e Usage Guidelines Use the activation-key activation-key-four-tuple command to change the activation key on your PIX Firewall.
Caution Use only an activation key valid for your PIX Firewall software version and platform or your system may not reload after rebooting. The activation-key activation-key-four-tuple command output indicates the status of the activation key as follows: • If the PIX Firewall Flash memory software image version is the same as the running PIX Firewall software version, and the PIX Firewall Flash memory activation key is the same as the running PIX Firewall software activation key, then the activation-key command output reads as follows: The flash activation key has been modified. The flash activation key is now the SAME as the running key. • If the PIX Firewall Flash memory image version is the same as the running PIX Firewall software, and the PIX Firewall Flash memory activation key is different from the running PIX Firewall activation key, then the activation-key command output reads as follows: The flash activation key has been modified. The flash activation key is now DIFFERENT from the running key. The flash activation key will be used when the unit is reloaded. • If the PIX Firewall Flash memory image version is not the same as the running PIX Firewall software, then the activation-key command output reads as follows: The flash image is DIFFERENT from the running image. The two images must be the same in order to modify the flash activation key. • If the PIX Firewall Flash memory image version is the same as the running PIX Firewall software, and the entered activation key is not valid, then the activation-key command output reads as follows: ERROR: The requested key was not saved because it is not valid for this system. • If the PIX Firewall Flash memory activation key is the same as the entered activation key, then the activation-key command output reads as follows: The flash activation key has not been modified. The requested key is the SAME as the flash activation key. 
The show activation-key command output indicates the status of the activation key as follows: • If the activation key in the PIX Firewall Flash memory is the same as the activation key running on the PIX Firewall, then the show activation-key output reads as follows: The flash activation key is the SAME as the running key. • If the activation key in the PIX Firewall Flash memory is different from the activation key running on the PIX Firewall, then the show activation-key output reads as follows: The flash activation key is DIFFERENT from the running key. The flash activation key takes effect after the next reload. • If the PIX Firewall Flash memory software image version is not the same as the running PIX Firewall software image, then the show activation-key output reads as follows: The flash image is DIFFERENT from the running image. The two images must be the same in order to examine the flash activation key. Usage Notes 1. The PIX Firewall must be rebooted for a new activation key to be enabled. 2. If the PIX Firewall software image is being upgraded to a higher version and the activation key is being updated at the same time, we recommend that you first install the software image upgrade and reboot the PIX Firewall unit, and then update the activation key in the new image and reboot the unit again. 3. If you are downgrading to a lower PIX Firewall software version, we recommend that you ensure that the activation key running on your system is not intended for a higher version before installing the lower version software image. If this is the case, you must first change the activation key to one that is compatible with the lower version before installing and rebooting. Otherwise, your system may refuse to reload after installation of the new software image.
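Following the upgrade order recommended above, the key update itself is a single command followed by a reboot (the four-tuple shown is the sample value from the Syntax Description, not a real key):

```
activation-key 0xe02888da 0x4ba7bed6 0xf1c123ae 0xffd8624e
reload
```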
Examples The following example shows sample output from the show activation-key command: pixfirewall(config)# show activation-key Serial Number: 480221353 (0x1c9f98a9) Related Commands show version Displays the PIX Firewall operating information. alias Administer overlapping addresses with dual NAT. clear alias show alias Syntax Description dnat_ip An IP address on the internal network that provides an alternate IP address for the external address that is the same as an address on the internal network. foreign_ip IP address on the external network that has the same address as a host on the internal network. if_name The internal network interface name in which the foreign_ip overlaps. netmask Network mask applied to both IP addresses. Use 255.255.255.255 for host masks. Defaults None. Usage Guidelines If an address on the external network is the same as an address on the internal network, such as 209.165.201.1, you can use the alias command to redirect traffic to another address, such as 209.165.201.30. Note For DNS fixup to work properly, proxy-arp has to be disabled. If you are using the alias command for DNS fixup, disable proxy-arp with the following command after the alias command has been executed: If the alias command is used with the sysopt ipsec pl-compatible command, a static route command statement must be added for each IP address specified in the alias command statement. There must be an A (address) record in the DNS zone file for the “dnat” address in the alias command. Use the no alias command to disable a previously set alias command statement. Use the show alias command to display alias command statements in the configuration. Use the clear alias command to remove all alias commands from the configuration. After changing or removing an alias command statement, use the clear xlate command.
The alias command changes the default behavior of the PIX Firewall in three ways: • When receiving a packet coming in through the interface identified by if_name, destined for the address identified by dnat_ip, PIX Firewall sends it to the address identified by foreign_ip. • When receiving a DNS A response, containing the address identified by foreign_ip, coming from a lower security interface, and destined for the host behind the interface identified by if_name, PIX Firewall changes foreign_ip in the reply to dnat_ip. This can be turned off by using the command sysopt nodnsalias inbound. • When receiving a DNS A response, containing the address identified by dnat_ip, coming from a DNS server behind the interface, if_name, and destined for a host behind the lower security interface, PIX Firewall changes the dnat_ip address to foreign_ip. This can be turned off using the command sysopt nodnsalias outbound. The alias command is applied on a per-interface basis, while the sysopt nodnsalias command changes the behavior for all interfaces. Also, note that addresses in the zone transfers made across the PIX Firewall are not changed. You can specify a net alias by using network addresses for the foreign_ip and dnat_ip IP addresses. For example, the alias 192.168.201.0 209.165.201.0 255.255.255.224 command creates aliases for each IP address between 209.165.201.1 and 209.165.201.30. Note ActiveX blocking does not occur when users access an IP address referenced by the alias command. ActiveX blocking is set with the filter activex command. Usage Notes • To access an alias dnat_ip address with static and access-list command statements, specify the dnat_ip address in the access-list command statement as the address from which traffic is permitted from. The following example illustrates this note.
alias (inside) 192.168.201.1 209.165.201.1 255.255.255.255 static (inside,outside) 209.165.201.1 192.168.201.1 netmask 255.255.255.255 access-list acl_out permit tcp host 192.168.201.1 host 209.165.201.1 eq ftp-data access-group acl_out in interface outside An alias is specified with the inside address 192.168.201.1 mapping to the foreign address 209.165.201.1. • You can use the sysopt nodnsalias command to disable inbound embedded DNS A record fixups according to aliases that apply to the A record address and outbound replies. Examples In the following example, the inside network contains the IP address 209.165.201.29, which on the Internet belongs to example.com. When inside clients try to access example.com, the packets do not go to the PIX Firewall because the client assumes 209.165.201.29 is on the local inside network. To correct this, use the alias command as follows: alias (inside) 192.168.201.0 209.165.201.0 255.255.255.224 show alias alias 192.168.201.0 209.165.201.0 255.255.255.224 When the inside network client 209.165.201.2 connects to example.com, the DNS response from an external DNS server to the internal client’s query would be altered by the PIX Firewall to be 192.168.201.29. If the PIX Firewall uses 209.165.200.225 through 209.165.200.254 as the global pool IP addresses, the packet goes to the PIX Firewall with SRC=209.165.201.2 and DST=192.168.201.29. The PIX Firewall translates the address to SRC=209.165.200.254 and DST=209.165.201.29 on the outside. In the next example, a web server is on the inside at 10.1.1.11 and a static command statement was created for it at 209.165.201.11. The source host is on the outside with address 209.165.201.7. A DNS server on the outside has a record for example.com as follows: example.com. IN A 209.165.201.11 The period at the end of the example.com. domain name must be included.
The alias command follows:

alias 10.1.1.11 209.165.201.11 255.255.255.255

The PIX Firewall rewrites the name server replies to 10.1.1.11 so that inside clients connect directly to the web server. The static command statement is as follows:

static (inside,outside) 209.165.201.11 10.1.1.11

You can test the DNS entry for the host with the following UNIX nslookup command:

nslookup -type=any

Related Commands

access-list Creates an access list, or uses a downloadable access list.
static Configures a persistent one-to-one address translation rule by mapping a local IP address to a global IP address, also known as Static Port Address Translation (Static PAT).

arp

Configure the Address Resolution Protocol (ARP) cache timeout value, static ARP table entries, or static proxy ARP, and view the ARP cache, status, or timeout value.

Syntax Description

arp Configure a static ARP mapping (IP-to-physical address binding) for the addresses specified. These entries are not cleared when the ARP persistence timer times out and are automatically stored in the configuration when you use the write command to store the configuration.
arp alias Configure a static proxy ARP mapping (proxied IP-to-physical address binding) for the addresses specified. These entries are not cleared when the ARP persistence timer times out and are automatically stored in the configuration when you use the write command to store the configuration.
if_name The interface name whose ARP table will be changed or viewed. (The interface name itself is specified by the nameif command.)
ip IP address for an ARP table entry.
mac Hardware MAC address for the ARP table entry; for example, 00e0.1e4e.3d8b.
seconds Duration that a dynamic ARP entry can exist in the ARP table before being cleared.
statistics The ARP statistics, including block usage.

Defaults

The default value for the ARP persistence timer is 14,400 seconds (4 hours).
Usage Guidelines

The Address Resolution Protocol (ARP) maps an IP address to a MAC address and is defined in RFC 826. Proxy Address Resolution Protocol (proxy ARP) is a variation of the ARP protocol in which an intermediate device (for example, the firewall) sends an ARP response on behalf of an end node to the requesting host. ARP mapping occurs automatically as the firewall processes traffic; however, you can configure the ARP cache timeout value, static ARP table entries, or proxy ARP.

Note Because ARP is a low-level TCP/IP protocol that resolves a node’s MAC (physical) address from its IP address (through an ARP request asking the node with a particular IP address to send back its physical address), the presence of entries in the ARP cache indicates that the firewall has network connectivity.

The arp timeout command specifies the duration to wait before the ARP table rebuilds itself, automatically updating new host information. This feature is also known as the ARP persistence timer. The no arp timeout command resets the ARP persistence timer to its default value. The show arp timeout command displays the current timeout value.

The arp if_name ip mac command adds a static (persistent) entry to the firewall ARP cache. (This matches the behavior of Cisco IOS.) For example, you could use the arp if_name ip mac command to set up a static IP-to-MAC address mapping for hosts on your network. Use the no arp if_name ip mac command to remove the static ARP mapping.

The arp if_name ip mac alias command configures proxy ARP for the IP and MAC addresses specified. Enable proxy ARP when you want the firewall to respond to ARP requests for another host (determined by the IP address of the host) with the MAC address you specify in the arp alias command. Use the no arp if_name ip mac alias command to remove the static proxy ARP mapping.

The clear arp command clears all entries in the ARP cache table except for those you configure directly with the arp if_name ip mac command.
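The distinction between dynamic (learned) and static (persistent) entries can be sketched with a toy cache model. This is an illustrative sketch only, not firewall code; the class and addresses below are hypothetical. It shows the documented behavior: clear arp removes dynamic entries, while entries added with arp if_name ip mac survive.

```python
# Toy model of the firewall ARP cache (illustrative only, not PIX code):
# static entries (added with "arp if_name ip mac") persist across "clear arp";
# dynamic entries learned from traffic do not.
class ArpCache:
    def __init__(self):
        self.entries = {}  # ip -> (mac, is_static)

    def learn(self, ip, mac):
        # Dynamic entry, subject to the persistence timer and "clear arp".
        self.entries[ip] = (mac, False)

    def add_static(self, ip, mac):
        # Equivalent in spirit to: arp inside <ip> <mac>
        self.entries[ip] = (mac, True)

    def clear(self):
        # "clear arp" keeps only the statically configured mappings.
        self.entries = {ip: e for ip, e in self.entries.items() if e[1]}

cache = ArpCache()
cache.learn("192.168.0.10", "00e0.1e4e.0001")       # learned from traffic
cache.add_static("192.168.0.42", "00e0.1e4e.2a7c")  # configured statically
cache.clear()
print(sorted(cache.entries))  # only the static entry remains
```

The real firewall also ages dynamic entries out after the ARP persistence timer expires; that timer is omitted here for brevity.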
Use the no arp if_name ip mac command to remove these entries. The show arp command lists the entries in the ARP table. The show arp statistics command displays the following ARP information:

pixfirewall(config)# show arp statistics
Dropped blocks in ARP: 6
Maximum Queued blocks: 3
Queued blocks: 1
Interface collision ARPs Received: 5
ARP-defense Gratuitous ARPS sent: 4
Total ARP retries: 15
Unresolved hosts: 1
Maximum Unresolved hosts: 2

Examples

The following examples illustrate use of the arp and arp timeout commands:

arp inside 192.168.0.42 00e0.1e4e.2a7c
arp outside 192.168.0.43 00e0.1e4e.3d8b alias
show arp
outside 192.168.0.43 00e0.1e4e.3d8b alias
inside 192.168.0.42 00e0.1e4e.2a7c
arp timeout 42
show arp timeout
arp timeout 42 seconds
no arp timeout
show arp timeout
arp timeout 14400 seconds

auth-prompt

Change the AAA challenge text for through-the-firewall user sessions. (Configuration mode.)

clear auth-prompt

Syntax Description

accept If a user authentication via Telnet is accepted, display the prompt string.
prompt The AAA challenge prompt string follows this keyword. This keyword is optional for backward compatibility.
reject If a user authentication via Telnet is rejected, display the prompt string.
string A string of up to 235 alphanumeric characters or 31 words, limited by whichever maximum is first reached. Special characters should not be used; however, spaces and punctuation characters are permitted. Entering a question mark or pressing the Enter key ends the string. (The question mark appears in the string.)

Usage Guidelines

The auth-prompt command lets you change the AAA challenge text for HTTP, FTP, and Telnet access through the firewall requiring user authentication from TACACS or RADIUS servers. This text is primarily for cosmetic purposes and displays above the username and password prompts that users view when logging in.
If the user authentication occurs from Telnet, you can use the accept and reject options to display different status prompts to indicate that the authentication attempt is accepted or rejected by the AAA server.

Following is the authentication sequence showing when each auth-prompt string is displayed:

1. A user initiates a Telnet session from the inside interface through the firewall to the outside interface.
2. The user receives the auth-prompt challenge text, followed by the username prompt.
3. The user enters the AAA username and password, or enters them in the formats aaa_user@outside_user and aaa_pass@outside_pass.
4. The firewall sends the aaa_user/aaa_pass to the TACACS or RADIUS AAA server.
5. If the AAA server authenticates the user, the firewall displays the auth-prompt accept text to the user; otherwise, the reject challenge text is displayed.

Authentication of HTTP and FTP sessions displays only the challenge text at the prompt. The accept and reject text are not displayed. If you do not use this command, FTP users view "FTP authentication," HTTP users view "HTTP Authentication," and challenge text does not appear for Telnet access.

Examples

The following example shows how to set the authentication prompt and how users view the prompt:

auth-prompt XYZ Company Firewall Access

After this string is added to the configuration, users view the following:

XYZ Company Firewall Access
User Name:
Password:

Related Commands

aaa authentication Enables, disables, or displays LOCAL, TACACS+, or RADIUS user authentication on a server designated by the aaa-server command, or for PDM user authentication.

auto-update

Specifies how often to poll an Auto Update Server.

clear auto-update
show auto-update

if_name Specifies the interface to use (with its corresponding IP or MAC address) to uniquely identify the device.
ipaddress Specifies to use the IP address of the specified PIX Firewall interface to uniquely identify the firewall.
mac-address Specifies to use the MAC address of the specified PIX Firewall interface to uniquely identify the firewall.
period Specifies how long to attempt to contact the Auto Update Server, after the last successful contact, before stopping all traffic passing through the firewall.
poll_period Specifies how often, in minutes, to poll an Auto Update Server. The default is 720 minutes (12 hours).
retry_count Specifies how many times to try reconnecting to the Auto Update Server if the first attempt fails. The default is 0.
retry_period Specifies how long to wait, in minutes, between connection attempts. The default is 5 minutes and the valid range of values is from 1 to 35791.
text Specifies the text string to uniquely identify the device to the Auto Update Server.
url Specifies the location of the Auto Update Server using the following syntax: http[s]://[user:password@]location[:port]/pathname See the copy command for variable descriptions.
verify_certificate Specifies to verify the certificate returned by the Auto Update Server.

Usage Guidelines

The clear auto-update command removes the entire auto-update configuration.

The auto-update poll-period command specifies how often to poll the Auto Update Server for configuration or software image updates. The no auto-update poll-period command resets the poll period to the default.

The auto-update server command specifies the URL of the Auto Update Server. Only one server can be configured. The no auto-update server command disables polling for auto-update updates (by terminating the auto-update daemon).

The auto-update timeout command is used to stop all new connections to the PIX Firewall if the Auto Update Server has not been contacted for period minutes. This can be used to ensure that the PIX Firewall has the most recent image and configuration.

The show auto-update command displays the Auto Update Server, poll time, and timeout period.
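The Auto Update Server URL syntax described above, http[s]://[user:password@]location[:port]/pathname, decomposes into standard URL components. As a sketch, Python's standard urllib.parse can pull the pieces apart; the credentials and host name below are made-up placeholders, and only the /autoupdate/AutoUpdateServlet path comes from this document.

```python
from urllib.parse import urlparse

# Decompose an Auto Update Server URL of the form
# http[s]://[user:password@]location[:port]/pathname.
# The user, password, and host name here are placeholders.
url = "https://admin:secret@aus.example.com:443/autoupdate/AutoUpdateServlet"
parts = urlparse(url)
print(parts.scheme)    # https
print(parts.username)  # admin
print(parts.hostname)  # aus.example.com
print(parts.port)      # 443
print(parts.path)      # /autoupdate/AutoUpdateServlet
```

As the document notes, port 443 is the HTTPS default and may be omitted from the configured URL.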
Examples

The show auto-update command displays the Auto Update Server, poll time, and timeout period. The following is sample output from the command:

show auto-update
Server:
Poll period: 1 minutes, retry count: 0, retry period: 5 minutes
Timeout: none
Device ID: string [device1]
Next poll in 0.13 minutes
Last poll: 23:43:33 UTC Fri Jun 7 2002

The format of the URL, /autoupdate/AutoUpdateServlet, is the standard URL format on the Auto Update Server. The port 443 (the default port for HTTPS) can be omitted because it is the default setting.

Related Commands

copy Changes software images without requiring access to the TFTP monitor mode.

banner

Configures the session, login, or message-of-the-day banner.

clear banner

Syntax Description

exec Configures the system to display a banner before displaying the enable prompt.
login Configures the system to display a banner before the password login prompt when accessing the firewall using Telnet.
motd Configures the system to display a message-of-the-day banner.
text The line of message text to be displayed in the firewall CLI. Subsequent text entries are added to the end of an existing banner unless the banner is cleared first. The tokens $(domain) and $(hostname) are replaced with the domain name and host name of the firewall, respectively.

Usage Guidelines

The banner command configures a banner to display for the option specified. The text string consists of all characters following the first whitespace (space) until the end of the line (carriage return or LF). Spaces in the text are preserved. However, tabs cannot be entered through the CLI.

Multiple lines in a banner are handled by entering a new banner command for each line you wish to add. Each line is then appended to the end of the existing banner. If the text is empty, then a carriage return (CR) will be added to the banner. There is no limit on the length of a banner other than RAM and Flash memory limits.
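The $(hostname) and $(domain) token substitution described above can be sketched as simple string replacement. This is an illustrative model, not firewall code; the host and domain values are placeholders, and the firewall performs the equivalent replacement internally when it displays a banner.

```python
# Expand the banner tokens $(hostname) and $(domain), as the firewall does
# when displaying a configured banner. Values below are placeholders.
def expand_banner(text, hostname, domain):
    return text.replace("$(hostname)", hostname).replace("$(domain)", domain)

banner = "Welcome to $(hostname).$(domain)"
print(expand_banner(banner, "pixfirewall", "example.com"))
# Welcome to pixfirewall.example.com
```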
When accessing the firewall through Telnet or SSH, the session closes if there is not enough system memory available to process the banner messages or if a TCP write error occurs in attempting to display the banner messages.

To replace a banner, use the no banner command before adding the new lines. The no banner {exec | login | motd} command removes all the lines for the banner option specified. The no banner command does not selectively delete text strings, so any text entered at the end of the no banner command is ignored. The clear banner command removes all the banners.

The show banner {motd | exec | login} command displays the specified banner option and all the lines configured for it. If a banner option is not specified, then all the banners are displayed.

Examples

The following example shows how to configure the motd, exec, and login banners:

pixfirewall(config)# banner motd Think on These Things
pixfirewall(config)# banner exec Enter your password carefully
pixfirewall(config)# banner login Enter your password to log in
pixfirewall(config)# show banner
exec: Enter your password carefully
login: Enter your password to log in
motd: Think on These Things

ca

Configure the PIX Firewall to interoperate with a certification authority (CA).

show ca certificate
show ca crl
show ca configure
show ca identity
show ca subject-name
show ca verifycertdn

verifycertdn Verifies the certificate’s Distinguished Name (DN) and acts as a subject name filter, based on the X.500_string. If the subject name of the peer certificate matches the X.500_string, then it is filtered out and ISAKMP negotiation fails.
X.500_string Specify per RFC 1779. The entered string will be the Distinguished Name (DN) sent.

ca authenticate

The ca authenticate command allows the PIX Firewall to authenticate its certification authority (CA) by obtaining the CA’s self-signed certificate, which contains the CA’s public key.
To authenticate a peer’s certificate(s), a PIX Firewall must obtain the CA certificate containing the CA public key. Because the CA certificate is a self-signed certificate, the key should be authenticated manually by contacting the CA administrator. You are given the choice of authenticating the public key in that certificate by including within the ca authenticate command the key’s fingerprint, which is retrieved in an out-of-band process. The PIX Firewall will discard the received CA certificate and generate an error message if the fingerprint you specified is different from the received one. You can also simply compare the two fingerprints without having to enter the key within the command.

If you are using RA mode (within the ca configure command), when you issue the ca authenticate command, the RA signing and encryption certificates will be returned from the CA, as well as the CA certificate.

The ca authenticate command is not saved to the PIX Firewall configuration. However, the public keys embedded in the received CA (and RA) certificates are saved in the configuration as part of the RSA public key record (called the “RSA public key chain”). To save the public keys permanently to Flash memory, use the ca save all command. To view the CA’s certificate, use the show ca certificate command.

Note If the CA does not respond within the timeout period after this command is issued, terminal control is returned so that it is not tied up. If this happens, you must re-enter the command.

ca configure

The ca configure command is used to specify the communication parameters between the PIX Firewall and the CA. Use the no ca configure command to reset each of the communication parameters to the default value. If you want to show the current settings stored in RAM, use the show ca configure command.

The following example indicates that myca is the name of the CA and the CA will be contacted rather than the RA.
It also indicates that the PIX Firewall will wait 5 minutes before sending another certificate request if it does not receive a response, and will resend a total of 15 times before dropping its request. If the CRL is not accessible, crloptional tells the PIX Firewall to accept other peers’ certificates.

ca configure myca ca 5 15 crloptional

ca crl request

The ca crl request command allows the PIX Firewall to obtain an updated CRL from the CA at any time. The no ca crl command deletes the CRL within the PIX Firewall.

A CRL lists the certificates of all devices on the network that have been revoked. The PIX Firewall will not accept revoked certificates; therefore, any peer with a revoked certificate cannot exchange IPSec traffic with your PIX Firewall.

The first time your PIX Firewall receives a certificate from a peer, it will download a CRL from the CA. Your PIX Firewall then checks the CRL to make sure the peer’s certificate has not been revoked. (If the certificate appears on the CRL, it will not accept the certificate and will not authenticate the peer.) A CRL can be reused with subsequent certificates until the CRL expires. When the CRL does expire, the PIX Firewall automatically updates it by downloading a new CRL and replacing the expired CRL with the new CRL.

If your PIX Firewall has a CRL which has not yet expired, but you suspect that the CRL’s contents are out of date, use the ca crl request command to request that the latest CRL be immediately downloaded to replace the old CRL. The ca crl request command is not saved with the PIX Firewall configuration between reloads.

The following example indicates the PIX Firewall will obtain an updated CRL from the CA with the name myca:

ca crl request myca

The show ca crl command lets you know whether there is a CRL in RAM, and where and when the CRL is downloaded. The following is sample output from the show ca crl command. See Table 4-2 for descriptions of the strings within the following sample output.
show ca crl
CRL:
CRL Issuer Name:
CN = MSCA, OU = Cisco, O = VSEC, L = San Jose, ST = CA, C = US, EA = username@example.com
LastUpdate: 17:07:40 Jul 11 2000

ca enroll

The ca enroll command is used to send an enrollment request to the CA requesting a certificate for all of your PIX Firewall unit’s key pairs. This is also known as “enrolling” with the CA. (Technically, enrolling and obtaining certificates are two separate events, but they both occur when this command is issued.)

Your PIX Firewall needs a signed certificate from the CA for each of its RSA key pairs; if you previously generated general purpose keys, the ca enroll command will obtain one certificate corresponding to the one general purpose RSA key pair. If you previously generated special usage keys, this command will obtain two certificates corresponding to each of the special usage RSA key pairs. If you already have a certificate for your keys, you will be unable to complete this command; instead, you will be prompted to remove the existing certificate first.

The ca enroll command is not saved with the PIX Firewall configuration between reloads. To verify if the enrollment process succeeded and to display the PIX Firewall unit’s certificate, use the show ca certificate command. If you want to cancel the current enrollment request, use the no ca enroll command.

The required challenge password is necessary in the event that you need to revoke your PIX Firewall unit’s certificate(s). When you ask the CA administrator to revoke your certificate, you must supply this challenge password as a protection against fraudulent or mistaken revocation requests.

Note This password is not stored anywhere, so you must remember this password. If you lose the password, the CA administrator may still be able to revoke the PIX Firewall’s certificate, but will require further manual authentication of the PIX Firewall administrator identity.

The PIX Firewall unit’s serial number is optional.
If you provide the serial option, the serial number will be included in the obtained certificate. The serial number is not used by IPSec or IKE but may be used by the CA to either authenticate certificates or to later associate a certificate with a particular device. Ask your CA administrator if serial numbers should be included in the certificate. If you are in doubt, specify the serial option.

The PIX Firewall unit’s IP address is optional. If you provide the ipaddress option, the IP address will be included in the obtained certificate. Normally, you would not include the ipaddress option because the IP address binds the certificate more tightly to a specific entity. Also, if the PIX Firewall is moved, you would need to issue a new certificate.

Note When configuring ISAKMP for certificate-based authentication, it is important to match the ISAKMP identity type with the certificate type. The ca enroll command used to acquire certificates will, by default, get a certificate with the identity based on host name. The default identity type for the isakmp identity command is based on address instead of host name. You can reconcile this disparity of identity types by using the isakmp identity address command. See the isakmp command for information about the isakmp identity address command.

The following example indicates that the PIX Firewall will send an enrollment request to the CA myca.example.com. The password 1234567890 is specified, as well as a request for the PIX Firewall unit’s serial number to be embedded in the certificate.

ca enroll myca.example.com 1234567890 serial

ca generate rsa

The ca generate rsa command generates RSA key pairs for your PIX Firewall. RSA keys are generated in pairs—one public RSA key and one private RSA key. If your PIX Firewall already has RSA keys when you issue this command, you will be warned and prompted to replace the existing keys with new keys.
Note Before issuing this command, make sure your PIX Firewall has a host name and domain name configured (using the hostname and domain-name commands). You will be unable to complete the ca generate rsa command without a host name and domain name.

The ca generate rsa command is not saved in the PIX Firewall configuration. However, the keys generated by this command are saved in the persistent data file in Flash memory, which is never displayed to the user or backed up to another device.

In this example, one general-purpose RSA key pair is to be generated. The selected size of the key modulus is 2048.

ca generate rsa key 2048

Note You cannot generate both special usage and general purpose keys; you can only generate one or the other.

ca identity

The ca identity command declares the CA that your PIX Firewall will use. Currently, PIX Firewall supports one CA at one time. The no ca identity command removes the ca identity command from the configuration and deletes all certificates issued by the specified CA, as well as the CRLs. The show ca identity command shows the current settings stored in RAM.

The PIX Firewall uses a subset of the HTTP protocol to contact the CA, and so must identify a particular cgi-bin script to handle CA requests. The default location and script on the CA server is /cgi-bin/pkiclient.exe. If the CA administrator has not put the CGI script in the previously listed location, include the location and the name of the script within the ca identity command statement.

By default, querying of a certificate or a CRL is done via Cisco’s PKI protocol. If the CA supports Lightweight Directory Access Protocol (LDAP), query functions may use LDAP as well. The IP address of the LDAP server must be included within the ca identity command statement.

The following example indicates that the CA myca.example.com is declared as the PIX Firewall unit’s supported CA. The CA’s IP address of 205.139.94.231 is provided.
ca identity myca.example.com 205.139.94.231

ca save all

The ca save all command lets you save the PIX Firewall unit’s RSA key pairs, the CA, RA and PIX Firewall unit’s certificates, and the CA’s CRLs in the persistent data file in Flash memory between reloads. The no ca save command removes the saved data from the PIX Firewall unit’s Flash memory. The ca save command itself is not saved with the PIX Firewall configuration between reloads.

To view the current status of requested certificates, and relevant information of received certificates, such as CA and RA certificates, use the show ca certificate command. Because the certificates contain no sensitive data, any user can issue this show command.

When the ca subject-name ca_nickname X.500_string command is configured, the firewall enrolls the device certificate with the subject Distinguished Name (DN) that is specified in the X.500_string, using RFC 1779 format. The supported DN attributes are listed in Table 4-1.

Table 4-1 Supported Distinguished Name Attributes

Attribute Description
ou OrganizationalUnitName
o OrganizationName
st StateOrProvinceName
c CountryName
ea Email address (a non-RFC 1779 format attribute)

Note If the X.500_string is being used to communicate between a Cisco VPN 3000 headend and the firewall, the VPN 3000 headend must not be configured to use DNS names for its backup servers. Instead, the backup servers must be specified by their IP addresses.

ca verifycertdn X.500_string

The ca verifycertdn X.500_string command verifies the certificate’s Distinguished Name (DN) and acts as a subject name filter, based on the X.500_string. If the subject name of the peer certificate matches the X.500_string, then it is filtered out and ISAKMP negotiation fails.

ca zeroize rsa

The ca zeroize rsa command deletes all RSA keys that were previously generated by your PIX Firewall. If you issue this command, you must also perform two additional tasks. Perform these tasks in the following order:

1.
Use the no ca identity command to manually remove the PIX Firewall unit’s certificates from the configuration. This will delete all the certificates issued by the CA.

2. Ask the CA administrator to revoke your PIX Firewall unit’s certificates at the CA. Supply the challenge password you created when you originally obtained the PIX Firewall unit’s certificates using the ca enroll command.

To delete a specific RSA key pair, specify the name of the RSA key you want to delete using the option keypair_name within the ca zeroize rsa command statement.

Note You may have more than one pair of RSA keys due to SSH. See the ssh command in Chapter 8, “S Commands” for more information.

show ca commands

The show ca certificate command displays the CA server’s subject name, CRL distribution point (where the PIX Firewall will obtain the CRL), and lifetime of both the CA server’s root certificate and the PIX Firewall’s certificates. The following is sample output from the show ca certificate command. The CA certificate stems from a Microsoft CA server previously generated for this PIX Firewall.
show ca certificate

RA Signature Certificate
Status: Available
Certificate Serial Number: 6106e08a000000000005
Key Usage: Signature
CN = SCEP
OU = VSEC
O = Cisco
L = San Jose
ST = CA
C = US
EA = username@example.com
Validity Date:
start date: 17:17:09 Jul 11 2000

Certificate
Status: Available
Certificate Serial Number: 1f80655400000000000a
Key Usage: General Purpose
Subject Name
Name: pixfirewall.example.com
Validity Date:
start date: 20:06:23 Jul 17 2000

CA Certificate
Status: Available
Certificate Serial Number: 25b81813efe58fb34726eec44ae82365
Key Usage: Signature
CN = MSCA
OU = Cisco
O = VSEC
L = San Jose
ST = CA
C = US
EA = username@example.com
Validity Date:
start date: 17:07:34 Jul 11 2000

RA KeyEncipher Certificate
Status: Available
Certificate Serial Number: 6106e24c000000000006
Key Usage: Encryption
CN = SCEP
OU = VSEC
O = Cisco
L = San Jose
ST = CA
C = US
EA = username@example.com
Validity Date:
start date: 17:17:10 Jul 11 2000

Table 4-2 describes strings within the show ca certificate command sample output.

The show ca crl command displays whether there is a certificate revocation list (CRL) in the PIX Firewall RAM, and where and when the CRL was downloaded.

The show ca configure command displays the current communication parameter settings stored in the PIX Firewall RAM.

The show ca identity command displays the current certification authority (CA) settings stored in RAM.

The show ca mypubkey rsa command displays the PIX Firewall unit’s public keys in a DER/BER encoded PKCS#1 representation. The following is sample output from the show ca mypubkey rsa command. Special usage RSA keys were previously generated for this PIX Firewall using the ca generate rsa command.

show ca mypubkey rsa

Examples

In the following example, a request for the CA’s certificate was sent to the CA. The fingerprint was not included in the command.
The CA sends its certificate and the PIX Firewall prompts for verification of the CA’s certificate by checking the CA certificate’s fingerprint. Using the fingerprint associated with the CA’s certificate retrieved in some out-of-band process from a CA administrator, compare the two fingerprints. If both fingerprints match, then the certificate is considered valid.

ca authenticate myca
Certificate has the following attributes:
Fingerprint: 0123 4567 89AB CDEF 0123

The following example shows the error message. This time, the fingerprint is included in the command. The two fingerprints do not match, and therefore the certificate is not valid.

ca authenticate myca 0123456789ABCDEF0123
Certificate has the following attributes:
Fingerprint: 0123 4567 89AB CDEF 5432
%Error in verifying the received fingerprint. Type help or ‘?’ for a list of available commands.

Syntax Description

ca generate rsa key Generates an RSA key for the PIX Firewall.
modulus Defines the modulus used to generate the RSA key. This is a size measured in bits. You can specify a modulus of 512, 768, 1024, or 2048.

Note Before issuing this command, make sure your PIX Firewall host name and domain name have been configured (using the hostname and domain-name commands). If a domain name is not configured, the PIX Firewall uses a default domain of ciscopix.com.

Defaults

The RSA key modulus default (during PDM setup) is 768. The default domain is ciscopix.com.

Usage Guidelines

If your PIX Firewall already has RSA keys when you issue this command, you are warned and prompted to replace the existing keys with new keys.

Note The larger the key modulus size you specify, the longer it takes to generate an RSA key pair. We recommend a default value of 768.

PDM uses the Secure Sockets Layer (SSL) communications protocol to communicate with the PIX Firewall. SSL uses the private key generated with the ca generate rsa command. For a certificate, SSL uses the key obtained from a certification authority (CA).
If that does not exist, it uses the PIX Firewall self-signed certificate created when the RSA key pair was generated. If there is no RSA key pair when an SSL session is initiated, the PIX Firewall creates a default RSA key pair using a key modulus of 768.

The ca generate rsa command is not saved in the PIX Firewall configuration. However, the keys generated by this command are saved in a persistent data file in Flash memory, which can be viewed with the show ca mypubkey rsa command.

Examples

The following example demonstrates how one general purpose RSA key pair is generated. The selected size of the key modulus is 1024.

router(config)# ca generate rsa key 1024
Key name: pixfirewall.cisco.com
Usage: General Purpose Key
Key Data:
30819f30 0d06092a 864886f7 0d010101 05000381 8d003081 89028181 00c8ed4c
9f5e0b52 aea931df 04db2872 5c4c0afd 9bd0920b 5e30de82 63d834ac f2e1db1f
1047481a 17be5a01 851835f6 18af8e22 45304d53 12584b9c 2f48fad5 31e1be5a
bb2ddc46 2841b63b f92cb3f9 8de7cb01 d7ea4057 7bb44b4c a64a9cf0 efaacd42
e291e4ea 67efbf6c 90348b75 320d7fd3 c573037a ddb2dde8 00df782c 39020301
0001

capture

Enables packet capture capabilities for packet sniffing and network fault isolation.

Syntax Description

access-list Selects packets based on IP or higher fields. By default, all IP packets are matched.
acl_name The access list id.
buffer Defines the buffer size used to store the packet. The default size is 512 KB. Once the buffer is full, packet capture stops.
bytes The number of bytes (b) to allocate.
capture_name A name to uniquely identify the packet capture.
circular-buffer Overwrites the buffer, starting from the beginning, when the buffer is full.
detail Shows additional protocol information for each packet.
dump Shows a hexadecimal dump of the packet transported over the data link transport. (However, the MAC information is not shown in the hex dump.)
ethernet-type Selects packets based on the Ethernet type. An exception is the 802.1Q or VLAN type.
The 802.1Q tag is automatically skipped and the inner Ethernet type is used for matching. By default, all Ethernet types are accepted.
interface The interface for packet capture.
name The name of the interface on which to use packet capture.
packet-length Sets the maximum number of bytes of each packet to store in the capture buffer. By default, the maximum is 68 bytes.
type An Ethernet type to exclude from capture. The default is 0, so you can restore the default at any time by setting type to 0.

Usage Guidelines To enable packet capturing, attach the capture to an interface with the interface option. Multiple interface statements attach the capture to multiple interfaces. If the buffer contents are copied to a TFTP server in ASCII format, then only the headers can be seen; the details and hex dumps of the packets cannot be seen. To see the details and hex dump, transfer the buffer in PCAP format and then read it with TCPDUMP or Ethereal, using the options to show the detail and hex dump of the packets.

The ethernet-type and access-list options select the packets to store in the buffer. A packet must pass both the Ethernet and access list filters before the packet is stored in the capture buffer. The capture capture_name circular-buffer command enables the capture buffer to overwrite itself, starting from the beginning, when the capture buffer is full.

Enter the no capture command with either the access-list or interface option unless you want to clear the capture itself. Entering no capture without options deletes the capture. If the access-list option is specified, the access list is removed from the capture and the capture is preserved. If the interface option is specified, the capture is detached from the specified interface and the capture is preserved. To clear the capture buffer, use the clear capture capture_name command. The short form of clear capture is not supported, to prevent accidental destruction of all packet captures.
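Taken together, these rules suggest a session like the following sketch (the interface, access list, and capture names are illustrative, not from the original examples):

```text
! Capture web traffic to a hypothetical inside server in a wrap-around buffer
access-list cap-web permit tcp any host 10.1.1.10 eq www
capture web access-list cap-web buffer 1048576 circular-buffer interface inside

! Detach the capture from the interface; the buffer contents are preserved
no capture web interface inside

! Empty the capture buffer but keep the capture definition
clear capture web

! Delete the capture entirely (no options given)
no capture web
```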
Note The capture command is not saved to the configuration, and the capture command is not replicated to the standby unit during failover.

Use the copy capture:capture_name t [pcap] command to copy capture information to a remote TFTP server. Use a web browser to view the packet capture information; appending [/pcap] to the capture location downloads it in libpcap format. If the pcap option is specified, then a libpcap-format file is downloaded to your web browser and can be saved using your web browser. (A libpcap file can be viewed with Tcpdump or Ethereal.)

The show capture command displays the capture configuration when no options are specified. If the capture_name is specified, then it displays the capture buffer contents for that capture.

Output Formats The decoded output of the packets is dependent on the protocol of the packet. In Table 4-3, the bracketed output is displayed when the detail option is specified.

Examples On a web browser, the capture contents for a capture named “mycapture” can be viewed at the following location:

To download a libpcap file (used in web browsers such as Internet Explorer or Netscape Navigator) to a local machine, enter the following:

In the following example, the traffic is captured from an outside host at 209.165.200.241 to an inside HTTP server.

access-list http permit tcp host 10.120.56.15 eq http host 209.165.200.241
access-list http permit tcp host 209.165.200.241 host 10.120.56.15 eq http
capture http access-list http packet-length 74 interface inside

The following example stores a PPPoED trace to a file named “pppoed-dump” on a TFTP server at 209.165.201.17. (Some TFTP servers require that the file exist and be world-writable, so check your TFTP server for the appropriate permissions and file first.)
pixfirewall(config)# copy capture:pppoed t
Writing to file '/tftpboot/pppoed-dump' at 209.165.201.17 on outside

To display the capture configuration, use the show capture command without specifying any options, as follows:

pixfirewall(config)# show capture
capture arp ethernet-type arp interface outside
capture http access-list http packet-length 74 interface inside

clear Removes configuration files and commands from the configuration, or resets command values. However, using the no form of a command is preferred to using the clear form to change your configuration, because the no form is usually more precise.

clear command
no command

Command Modes Configuration mode for clear commands that remove or reset firewall configurations. Privileged mode for commands that clear items such as counters in show commands. Additionally, the clear commands available in less secure modes are available in subsequent (more secure) modes. However, commands from a more secure mode are not available in a less secure mode.

Syntax Description Table 4-4, Table 4-5, and Table 4-6 list the clear commands available in each mode.

clock Sets the PIX Firewall clock for use with the PIX Firewall Syslog Server (PFSS) and the Public Key Infrastructure (PKI) protocol.

clear clock
[no] clock summer-time zone recurring [week weekday month hh:mm week weekday month hh:mm] [offset]
[no] clock summer-time zone date {day month | month day} year hh:mm {day month | month day} year hh:mm [offset]

Syntax Description
date The date command form is used as an alternative to the recurring form of the clock summer-time command. It specifies that summertime should start on the first date entered and end on the second date entered. If the start date month is after the end date month, the summer time zone is accepted and assumed to be in the Southern Hemisphere.
day The day of the month to start, from 1 to 31.
detail Displays the clock source and current summertime settings.
hh:mm:ss The hour:minutes:seconds expressed in 24-hour time; for example, 20:54:00 for 8:54 p.m. Zeros can be entered as a single digit; for example, 21:0:0.
hours The hours of offset from UTC.
minutes The minutes of offset from UTC.
month The month expressed as the first three characters of the month; for example, apr for April.
offset The number of minutes to add during summertime. The default is 60 minutes.
recurring Specifies the start and end dates for local summer “daylight savings” time. The first date entered is the start date and the second date entered is the end date. (The start date is relative to UTC and the end date is relative to the specified summer time zone.) If no dates are specified, United States Daylight Savings Time is used. If the start date month is after the end date month, the summer time zone is accepted and assumed to be in the Southern Hemisphere.
summer-time The clock summer-time command displays summertime hours during the specified summertime date range. This command affects the clock display time only.
timezone clock timezone sets the clock display to the time zone specified. It does not change internal PIX Firewall time, which remains UTC.
week Specifies the week of the month. The week is 1 through 4, or first or last for partial weeks at the beginning or end of a month, respectively. For example, week 5 of any month is specified by using last.
weekday Specifies the day of the week: Monday, Tuesday, Wednesday, and so on.
year The year expressed as four digits; for example, 2000. The year range supported for the clock command is 1993 to 2035.
zone The name of the time zone.

Usage Guidelines The clock command lets you specify the time, month, day, and year for use with time-stamped syslog messages, which you can enable with the logging timestamp command. You can view the time with the clock or the show clock command. The clear clock command removes all summertime settings and resets the clock display to UTC.
The show clock command outputs the time, time zone, day, and full date.

Note The lifetime of a certificate and the certificate revocation list (CRL) is checked in UTC, which is the same as GMT. If you are using IPSec with certificates, set the PIX Firewall clock to UTC to ensure that CRL checking works correctly.

You can interchange the settings for the day and the month; for example, clock set 21:0:0 1 apr 2000. The maximum date range for the clock command is 1993 through 2035. A time prior to January 1, 1993, or after December 31, 2035, will not be accepted. The clock setting is maintained by a battery on the PIX Firewall unit’s motherboard. Should this battery fail, contact Cisco TAC for a replacement PIX Firewall unit.

Cisco’s PKI (Public Key Infrastructure) protocol uses the clock to make sure that a certificate revocation list (CRL) is not expired. Otherwise, the CA may reject or allow certificates based on an incorrect timestamp. Refer to the Cisco PIX Firewall and VPN Configuration Guide for a description of IPSec concepts.

Examples To enable PFSS time stamp logging for the first time, use the following commands:

clock set 21:0:0 apr 1 2000
show clock
21:00:05 Apr 01 2000
logging host 209.165.201.3
logging timestamp
logging trap 5

In this example, the clock command sets the clock to 9 p.m. on April 1, 2000. The logging host command specifies that a syslog server is at IP address 209.165.201.3. The PIX Firewall automatically determines that the server is a PFSS and sends syslog messages to it via TCP and UDP. The logging timestamp command enables sending time-stamped syslog messages. The logging trap 5 command in this example specifies that messages at syslog levels 0 through 5 be sent to the syslog server. The value 5 is used to capture severe and normal messages, but also those of the aaa authentication enable command.

The following clock summer-time command specifies that summertime starts on the first Sunday in April at 2 a.m.
and ends on the last Sunday in October at 2 a.m.:

pix_name(config)# clock summer-time PDT recurring 1 Sunday April 2:00 last Sunday October 2:00

If you live in a place where summertime follows the Southern Hemisphere pattern, you can specify the exact dates and times. In the following example, daylight savings time (summer time) is configured to start on October 12, 2001, at 2 a.m. and end on April 26, 2002, at 2 a.m.:

pix_name(config)# clock summer-time PDT date 12 October 2001 2:00 26 April 2002 2:00

conduit Add, delete, or show conduits through the PIX Firewall for incoming connections. However, the conduit command has been superseded by the access-list command. We recommend that you migrate your configuration away from the conduit command to maintain future compatibility.

[no] conduit permit | deny protocol global_ip global_mask [operator port [port]] foreign_ip foreign_mask [operator port [port]]
clear conduit
show conduit

This example lets foreign host 209.165.201.2 access any global address for FTP. This example lets any foreign host access global address 209.165.201.1 for FTP.

Syntax Description
global_mask Network mask of global_ip. The global_mask is a 32-bit, four-part dotted-decimal value, such as 255.255.255.255. Use zeros in a part to indicate bit positions to be ignored. Use subnetting if required. If you use 0 for global_ip, use 0 for the global_mask; otherwise, enter the global_mask appropriate to global_ip.
icmp_type The type of ICMP message. Table 4-7 lists the ICMP type literals that you can use in this command. Omit this option to include all ICMP types. The conduit permit icmp any any command permits all ICMP types and lets ICMP pass inbound and outbound.
icmp_type_obj_grp_id An existing ICMP type object group.
object-group Specifies an object group.
operator A comparison operand that lets you specify a port or a port range. Use without an operator and port to indicate all ports.
For example:
conduit permit tcp any any

Use eq and a port to permit or deny access to just that port. For example, use eq ftp to permit or deny access only to FTP:
conduit deny tcp host 209.165.200.247 eq ftp 209.165.201.1

Use lt and a port to permit or deny access to all ports less than the port you specify. For example, use lt 1025 to permit or deny access to the well-known ports (1 to 1024).
conduit permit tcp host 209.165.200.247 lt 1025 any

Use gt and a port to permit or deny access to all ports greater than the port you specify. For example, use gt 42 to permit or deny ports 43 to 65535.
conduit deny udp host 209.165.200.247 gt 42 host 209.165.201.2

Use neq and a port to permit or deny access to every port except the ports that you specify. For example, use neq 10 to permit or deny ports 1 to 9 and 11 to 65535.
conduit deny tcp host 209.165.200.247 neq 10 host 209.165.201.2 neq 42

Use range and a port range to permit or deny access to only those ports named in the range. For example, use range 10 1024 to permit or deny access only to ports 10 through 1024. All other ports are unaffected.
conduit deny tcp any range ftp telnet any

This command is the default condition for the conduit command in that all ports are denied until explicitly permitted. You can view valid port numbers online at the following website: See “Ports” in Chapter 2, “Using PIX Firewall Commands,” for a list of valid port literal names in port ranges; for example, ftp h323. You can also specify numbers.
protocol Specify the transport protocol for the connection. Possible literal values are icmp, tcp, udp, or an integer in the range 0 through 255 representing an IP protocol number. Use ip to specify all transport protocols. You can view valid protocol numbers online at the following website: If you specify the icmp protocol, you can permit or deny ICMP access to one or more global IP addresses. Specify the ICMP type in the icmp_type variable, or omit it to specify all ICMP types.
See "Usage Guidelines" for a complete list of the ICMP types.
protocol_obj_grp_id An existing protocol object group.
service_obj_grp_id An existing service (port) object group.

Usage Guidelines We recommend that you use the access-list command instead of the conduit command because using an access list is a more secure way of enabling connections between hosts. Specifically, the conduit command functions by creating an exception to the PIX Firewall Adaptive Security Algorithm that then permits connections from one PIX Firewall network interface to access hosts on another.

The show conduit command displays the conduit command statements in the configuration and the number of times (hit count) an element has been matched during a conduit command search.

Step 1 View the static command format. This command normally precedes both the conduit and access-list commands. The static command syntax is as follows:

static (high_interface,low_interface) global_ip local_ip netmask mask

For example:

static (inside,outside) 209.165.201.5 192.168.1.5 netmask 255.255.255.255

This command maps the global IP address 209.165.201.5 on the outside interface to the web server 192.168.1.5 on the inside interface. The 255.255.255.255 mask is used for host addresses.

Step 2 View the conduit command format. The conduit command is similar to the access-list command in that it restricts access to the mapping provided by the static command. The conduit command syntax is as follows:

conduit permit tcp host 209.165.201.5 eq www any

This command permits TCP for the global IP address 209.165.201.5 that was specified in the static command statement and permits access over port 80 (www). The “any” option lets any host on the outside interface access the global IP address. The static command identifies the interface that the conduit command restricts access to.

Step 3 Create the access-list command from the conduit command options.
The acl_name in the access-list command is a name or number you create to associate access-list command statements with an access-group or crypto map command statement. Normally the access-list command format is as follows:

access-list acl_name [deny | permit] protocol src_addr src_mask operator port dest_addr dest_mask operator port

However, using the syntax from the conduit command in the access-list command, you can see how the foreign_ip in the conduit command is the same as the src_addr in the access-list command and how the global_ip option in the conduit command is the same as the dest_addr in the access-list command. The access-list command syntax overlaid with the conduit command options is as follows:

access-list acl_name action protocol foreign_ip foreign_mask foreign_operator foreign_port [foreign_port] global_ip global_mask global_operator global_port [global_port]

For example:

access-list acl_out permit tcp any host 209.165.201.5 eq www

This command identifies the access-list command statement group with the “acl_out” identifier. You can use any name or number for your own identifier. (In this example, the identifier “acl” is from ACL, which means access control list, and “out” is an abbreviation for the outside interface.) It makes your configuration clearer if you use an identifier name that indicates the interface to which you are associating the access-list command statements.

The example access-list command, like the conduit command, permits TCP connections from any system on the outside interface. The access-list command is associated with the outside interface with the access-group command.

Step 4 Create the access-group command using the acl_name from the access-list command and the low_interface option from the static command. The format for the access-group command is as follows:
access-group acl_name in interface low_interface

For example:

access-group acl_out in interface outside

This command associates with the “acl_out” group of access-list command statements and states that the access-list command statement restricts access to the outside interface.

Note The conduit command statements are processed in the order they are entered into the configuration. The permit and deny options for the conduit command are processed in the order listed in the PIX Firewall configuration. In the following example, host 209.165.202.129 is not denied access through the PIX Firewall because the permit option precedes the deny option.

conduit permit tcp host 209.165.201.4 eq 80 any
conduit deny tcp host 209.165.201.4 host 209.165.202.129 eq 80 any

Note If you want internal users to be able to ping external hosts, use the conduit permit icmp any any command.

After changing or removing a conduit command statement, use the clear xlate command. You can remove a conduit command statement with the no conduit command. The clear conduit command removes all conduit command statements from your configuration. The clear conduit counters command clears the current conduit hit count.

If you prefer more selective ICMP access, you can specify a single ICMP message type as the last option in this command. Table 4-7 lists possible ICMP type values.

Usage Notes

If you use Port Address Translation (PAT), you cannot use a conduit command statement using the PAT address to either permit or deny access to ports.

The two conduit command statements for the PPTP transport protocol, which is a subset of the GRE protocol, are as shown in the following example:

static (dmz2,outside) 209.165.201.5 192.168.1.5 netmask 255.255.255.255
conduit permit tcp host 209.165.201.5 eq 1723 any
conduit permit gre host 209.165.201.5 any

In this example, PPTP is being used to handle access to host 192.168.1.5 on the dmz2 interface from users on the outside.
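Steps 1 through 4 above can be collected into a single before-and-after sketch of the migration, reusing the web-server example addresses (this consolidated form is illustrative, not part of the original command reference):

```text
! Before: static plus conduit
static (inside,outside) 209.165.201.5 192.168.1.5 netmask 255.255.255.255
conduit permit tcp host 209.165.201.5 eq www any

! After: static plus access-list bound to the outside interface
static (inside,outside) 209.165.201.5 192.168.1.5 netmask 255.255.255.255
access-list acl_out permit tcp any host 209.165.201.5 eq www
access-group acl_out in interface outside
```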
Outside users access the dmz2 host using global address 209.165.201.5. The first conduit command statement opens access for the PPTP protocol and gives access to any outside users. The second conduit command statement permits access to GRE. If PPTP was not involved and GRE was, you could omit the first conduit command statement. 6. The RPC conduit command support fixes up UDP portmapper and rpcbind exchanges. TCP exchanges are not supported. This lets simple RPC-based programs work; however, remote procedure calls, arguments, or responses that contain addresses or ports will not be fixed up. For MSRPC, two conduit command statements are required, one for port 135 and another for access to the high ports (1024-65535). For Sun RPC, a single conduit command statement is required for UDP port 111. Once you create a conduit command statement for RPC, you can use the following command to test its activity from a UNIX host: rpcinfo -u unix_host_ip_address 150001 In this case, the host at 209.165.202.3 has Intel Internet Phone access in addition to its blanket FTP access. Examples 1. The following commands permit access between an outside UNIX gateway host at 209.165.201.2, to an inside SMTP server with Mail Guard at 192.168.1.49. Mail Guard is enabled in the default configuration for PIX Firewall with the fixup protocol smtp 25 command. The global address on the PIX Firewall is 209.165.201.1. static (inside,outside) 209.165.201.1 192.168.1.49 netmask 255.255.255.255 0 0 conduit permit tcp host 209.165.201.1 eq smtp host 209.165.201.2 2. You can set up an inside host to receive H.323 Intel Internet Phone calls and allow the outside network to connect inbound via the IDENT protocol (TCP port 113). In this example, the inside network is at 192.168.1.0, the global addresses on the outside network are referenced via the 209.165.201.0 network address with a 255.255.255.224 mask. 
static (inside,outside) 209.165.201.0 192.168.1.0 netmask 255.255.255.224 0 0
conduit permit tcp 209.165.201.0 255.255.255.224 eq h323 any
conduit permit tcp 209.165.201.0 255.255.255.224 eq 113 any

3. You can create a web server on the perimeter interface that can be accessed by any outside host as follows:

static (perimeter,outside) 209.165.201.4 192.168.1.4 netmask 255.255.255.255 0 0
conduit permit tcp host 209.165.201.4 eq 80 any

In this example, the static command statement maps the perimeter host, 192.168.1.4, to the global address, 209.165.201.4. The conduit command statement specifies that the global host can be accessed on port 80 (web server) by any outside host.

configure

show configure

For older PIX Firewall units that have a floppy drive only:

configure floppy

Syntax Description
address_mask Specifies the address mask for the inside interface IP address. The default address mask is 255.255.255.0.
all Combines the primary and secondary options.
clear Clears aspects of the current configuration in RAM. Use the write erase command to clear the complete configuration.
factory-default Specifies to clear the current configuration and regenerate the default, factory-loaded configuration. This command is supported for the PIX 501 and PIX 506/506E only in PIX Firewall software Version 6.2.
filename A filename you specify to qualify the location of the configuration file on the TFTP server named in server_ip. If you set a filename with the tftp-server command, do not specify it in the configure command; instead just use a colon ( : ) without a filename.
floppy Merges the current configuration with that on diskette.
http_pathname The name of the HTTP server path that contains the PIX Firewall configuration to copy.
http[s] Specifies to retrieve configuration information from an HTTP server. (SSL is used when https is specified.)
inside_ip_address Specifies the inside IP address. The default inside interface IP address is 192.168.1.1.
location The IP address (or defined name) of the HTTP server to log into.
memory Merges the current configuration with that in Flash memory.
net Loads the configuration from a TFTP server and the path you specify.
password The password for logging into the HTTP server.
pathname The name of the resource that contains the PIX Firewall configuration to copy.
port Specifies the port to contact on the HTTP server. It defaults to 80 for http and 443 for https.
primary Sets the interface, ip, mtu, nameif, and route commands to their default values. In addition, interface names are removed from all commands in the configuration.
secondary Removes the aaa-server, alias, access-list, apply, conduit, global, outbound, static, telnet, and url-server command statements from your configuration.
server_ip The IP address or name of the server from which to merge in a new configuration. This server address or name is defined with the tftp-server command.
terminal Starts configuration mode to enter configuration commands from a terminal. Exit configuration mode by entering the quit command.
user The username for logging into the HTTP server.

Command Modes The configure terminal command (with the short form “config t”) is available in privileged mode, and it changes the firewall over to configuration mode. All other configure commands are available in configuration mode.

Usage Guidelines You must be in configuration mode to use the configuration commands, except for the configure terminal (config t) command. The configure terminal command starts configuration mode from privileged mode. You can exit configuration mode with the quit command. After exiting configuration mode, use the write memory command to store your changes in Flash memory or write floppy to store the configuration on diskette.
Each command statement from Flash memory (with configure memory), TFTP transfer (with configure net), or diskette (with configure floppy) is read into the current configuration and evaluated in the same way as commands entered from a keyboard with the following rules: • If the command in Flash memory or on diskette is identical to an existing command in the current configuration, it is ignored. • If the command in Flash memory or on diskette is an additional instance of an existing command, such as if you already have one telnet command for IP address 10.2.3.4 and the diskette configuration has a telnet command for 10.7.8.9, then both commands appear in the current configuration. • If the command redefines an existing command, the command on diskette or Flash memory overwrites the command in the current configuration in RAM. For example, if you have the hostname ram command in the current configuration and the hostname floppy command on diskette, the command in the configuration becomes hostname floppy and the command line prompt changes to match the new hostname when that command is read from diskette. The show configure and show startup-config commands display the startup configuration of the firewall. The write terminal and show running-config commands display the configuration currently running on the firewall. The clear configure [all] command removes the aaa-server, alias, access-list, apply, conduit, global, outbound, static, telnet, and url-server command statements from the configuration. However, the clear configure secondary command does not remove tftp-server command statements. Note Save your configuration before using a clear configure command. The clear configure primary and clear configure secondary commands do not prompt you before deleting lines from your configuration. configure factory-default On the PIX 501 and PIX 506/506E, the configure factory-default command reinstates the factory default configuration. 
(This command is not supported on other PIX Firewall platforms at this time.) Use this command carefully because, before reinstating the factory default configuration, this command has the same effect as the clear configure all command; it clears all existing configuration information.

With no options specified, the configure factory-default command gives a default IP address of 192.168.1.1, and a netmask of 255.255.255.0, to the PIX Firewall inside interface. With the configure factory-default ip-address command, if you specify an inside IP address but no netmask, the default address mask is derived from the specified IP address and is based on the IP address class. With the configure factory-default ip-address netmask command, the specified IP address and netmask are assigned to the inside interface of the firewall.

For the PIX 501, the 10-user license is limited to a DHCP pool of 32 addresses, the 50-user license is limited to a DHCP pool size of 128 addresses, and the unlimited user license is limited to a DHCP pool size of 253 addresses. (It would be 256 addresses for the unlimited user license, but the default IP address is class C and 256 DHCP addresses cannot be supported within a class C address.) The PIX 506/506E is limited to a DHCP pool size of 253.

configure http[s] The configure http[s] command retrieves configuration information from an HTTP server for remotely managing a PIX Firewall configuration. The configuration can be either a text file or an XML file. Text files merge regardless of errors that may be in the configuration. XML files require the use of the message “config-data” in the XML file to explicitly control merging and error handling.

configure net The configure net command merges the current running configuration with a TFTP configuration stored at the IP address you specify and from the file you name. If you specify both the IP address and path name in the tftp-server command, you can specify server_ip:filename as simply a colon ( : ).
For example: configure net :

Use the write net command to store the configuration in the file. If you have an existing PIX Firewall configuration on a TFTP server and store a shorter configuration with the same filename on the TFTP server, some TFTP servers will leave some of the original configuration after the first “:end” mark. This does not affect the PIX Firewall because the configure net command stops reading when it reaches the first “:end” mark. However, this may cause confusion if you view the configuration and see extra text at the end of the configuration.

Note Many TFTP servers require the configuration file to be world-readable to be accessible.

configure floppy The configure floppy command merges the current running configuration with the configuration stored on diskette. This command assumes that the diskette was previously created by the write floppy command.

configure memory The configure memory command merges the configuration in Flash memory into the current configuration in RAM.

Examples The following example shows how to configure the PIX Firewall using a configuration retrieved with TFTP:

configure net 10.1.1.1:/tftp/config/pixconfig

The pixconfig file is stored on the TFTP server at 10.1.1.1 in the tftp/config folder.

The following example shows how to configure the PIX Firewall from a diskette:

configure floppy

The following example shows how to configure the PIX Firewall from the configuration stored in Flash memory:

configure memory

The following example shows the commands you enter to access configuration mode, view the configuration, and save it in Flash memory. Access privileged mode with the enable command and configuration mode with the configure terminal command. View the current configuration with the write terminal command and save your configuration to Flash memory using the write memory command.

pixfirewall> enable
password:
pixfirewall# configure terminal
pixfirewall(config)# write terminal
: Saved
[...current configuration...]
: End
pixfirewall(config)# write memory

When you enter the configure factory-default command on a platform other than the PIX 501 or PIX 506/506E, the PIX Firewall displays a “not supported” error message. On the PIX 515/515E, for example, the following message is displayed:

pixfirewall(config)# configure factory-default
'config factory-default' is not supported on PIX-515

console Sets the idle timeout for the serial-cable console session of the PIX Firewall.

Syntax Description
number Idle time in minutes (0-60) after which the serial-cable console session ends.

Defaults The default timeout is 0, which means the console will not time out. The zero value in the command console timeout 0 has the same meaning as the zero value in the command exec-timeout 0 0 in Cisco IOS software.

Usage Guidelines The console timeout command sets the timeout value for any authenticated, enable mode, or configuration mode user session when accessing the firewall console through a serial cable. This timeout does not alter the Telnet or SSH timeouts; these access methods maintain their own timeout values. The no console timeout command resets the console timeout value to its default. The show console timeout command displays the currently configured console timeout value.

Examples The following example shows how to set the console timeout to fifteen (15) minutes:

pixfirewall(config)# console timeout 15

The following example shows how to display the configured timeout value:

pixfirewall(config)# show console timeout
console timeout 15

Related Commands
aaa authorization Enable or disable LOCAL or TACACS+ user authorization services.
password Sets the password for Telnet access to the PIX Firewall console.
ssh Specifies a host for PIX Firewall console access through Secure Shell (SSH).
telnet Specifies the host for PIX Firewall console access via Telnet.

copy Change software images without requiring access to the TFTP monitor mode, or copy a capture file to a TFTP server.
Syntax Description copy capture Copies capture information to a remote TFTP server. capture_name is a capture_name unique name that identifies the capture. copy http[s] Downloads a software image into the Flash memory of the firewall from an HTTP server. (SSL is used when https is specified.) copy tftp flash Downloads a software image into Flash memory of the firewall via TFTP without using monitor mode. http_pathname The name of the resource that contains the PIX Firewall software image or PDM file to copy. image Download the selected PIX Firewall image to Flash memory. An image you download is made available to the PIX Firewall on the next reload (reboot). location Either an IP address or a name that resolves to an IP address via the PIX Firewall naming resolution mechanism. password The password for logging into the HTTP server. pdm Download the selected PDM image files to Flash memory. These files are available to the PIX Firewall immediately, without a reboot. port Specifies the port to contact on the HTTP server. It defaults to 80 for http and 443 for https. tftp_pathname PIX Firewall must know how to reach this location via its routing table information. This information is determined by the ip address command, the route command, or also RIP, depending upon your configuration. The pathname can include any directory names in addition to the actual last component of the path to the file on the server. user The username for logging into the HTTP server. copy http[s] The copy http[s]://[user:password@] location [:port ] / http_pathname flash [: [image | pdm] ] command enables you to download a software image into the Flash memory of the firewall from an HTTP server. SSL is used when the copy https command is specified. The user and password options are used for authentication when logging into the HTTP server. The location option is the IP address (or a name that resolves to an IP address) of the HTTP server. 
The :port option specifies the port on which to contact the server. The value for :port defaults to port 80 for HTTP and port 443 for HTTP through SSL. The pathname option is the name of the resource that contains the image or PDM file to copy. copy tftp The copy tftp flash command enables you to download a software image into the Flash memory of the firewall via TFTP. You can use the copy tftp flash command with any PIX Firewall model running Version 5.1 or higher. The image you download is made available to the PIX Firewall on the next reload (reboot). The command syntax is as follows: If the command is used without the location or pathname optional parameters, then the location and filename are obtained from the user interactively via a series of questions similar to those presented by Cisco IOS software. If you only enter a colon (:), parameters are taken from the tftp-server command settings. If other optional parameters are supplied, then these values would be used in place of the corresponding tftp-server command setting. Supplying any of the optional parameters, such as a colon and anything after it, causes the command to run without prompting for user input. The location is either an IP address or a name that resolves to an IP address via the PIX Firewall naming resolution mechanism (currently static mappings via the name and names commands). PIX Firewall must know how to reach this location via its routing table information. This information is determined by the ip address command, the route command, or also RIP, depending upon your configuration. The pathname can include any directory names besides the actual last component of the path to the file on the server. The pathname cannot contain spaces. If a directory name has spaces, set the directory in the TFTP server instead of in the copy tftp flash command. 
If your TFTP server has been configured to point to a directory on the system from which you are downloading the image, you need only use the IP address of the system and the image filename. The TFTP server receives the command and determines the actual file location from its root directory information. The server then downloads the TFTP image to the PIX Firewall. You can download a TFTP server from the following website: Note Images prior to Version 5.1 cannot be retrieved using this mechanism. If the TFTP server is already configured, the location or file name can be left unspecified as follows: tftp-server outside 209.165.200.228 tftp/cdisk copy capture:abc tftp:/tftp/abc.cap The following example shows how to use the defaults of the preconfigured TFTP server in the copy capture command: copy capture:abc tftp:pcap copy http[s] The following example shows how to copy the PIX Firewall software image from a public HTTP server into the Flash memory of your PIX Firewall: copy flash:image The following example shows how to copy the PDM software image through HTTPS (HTTP over SSL), where the SSL authentication is provided by the username robin and the password xyz: copy flash:pdm The following example shows how to copy the PIX Firewall software image from an HTTPS server running on a non-standard port, where the file is copied into the software image space in Flash memory by default: copy flash The following examples copy files from 192.133.219.25, which is the IP address for, to the Flash memory of your PIX Firewall. To use these examples, replace the username and password "cco-username:cco-password" with your CCO username and password. Also note that the URL contains a '?'. To enter this while using the PIX Firewall CLI, it must be preceded by typing Ctrl-v. 
To copy PIX Firewall software Version 6.2.2 into the Flash memory of your PIX Firewall from Cisco.com, enter the following command: copy download.cgi/pix622.bin?&filename=cisco/ciscosecure/pix/pix622.bin flash:image To copy PDM Version 2.0.2 into the Flash memory of your PIX Firewall from Cisco.com, enter the following command: copy download.cgi/pdm-202.bin?&filename=cisco/ciscosecure/pix/pdm-202.bin flash:pdm copy tftp The following example causes the PIX Firewall to prompt you for the filename and location before you start the TFTP download: copy tftp flash Address or name of remote host [127.0.0.1]? 10.1.1.5 Source file name [cdisk]? pix512.bin copying t to flash [yes|no|again]? yes !!!!!!!!!!!!!!!!!!!!!!!… Received 1695744 bytes. Erasing current image. Writing 1597496 bytes of image. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!… Image installed. The next example takes the information from the tftp-server command. In this case, the TFTP server is in an intranet and resides on the outside interface. The example sets the filename and location from the tftp-server command, saves memory, and then downloads the image to Flash memory. pixfirewall(config)# tftp-server outside 10.1.1.5 pix512.bin Warning: 'outside' interface has a low security level (0). The next example overrides the information in the tftp-server command to let you specify alternate information about the filename and location. If you have not set the tftp-server command, you can also use the copy tftp flash command to specify all information as shown in the second example that follows. copy tftp:/pix512.bin flash copy t flash The next example maps an IP address to the TFTP host name with the name command and uses the tftp-host name in the copy commands: name 10.1.1.6 tftp-host copy t flash copy t flash crashinfo Configure crash information to write to Flash memory, with the option to force a crash of the firewall. 
crashinfo test clear crashinfo Syntax Description page-fault Forces a crash of the firewall with a page fault. save disable Disables crash information from writing to Flash memory. save enable Configures crash information to write to Flash memory. (This is the default behavior.) test Tests the firewall’s ability to save crash information to Flash memory. This does not actually crash the firewall. watchdog Forces a crash of the firewall as a result of watchdogging. Defaults By default, the firewall saves the crash information file to Flash memory. In other words, by default the crashinfo save command is in your configuration. Command Modes The crashinfo save commands are available in configuration mode. The show crashinfo commands are available in privileged mode. Usage Guidelines The crashinfo save enable command does not need to be entered to save crash information to the Flash memory of your firewall; this is the default behavior of the firewall. However, if the firewall unit crashes during startup, the crash information file is not saved, whether or not the crashinfo save enable command is in your configuration. The firewall must be fully initialized and running first, and then it can save crash information as it crashes. The crashinfo save disable command turns off saving crash information to the Flash memory of the firewall. After a crashinfo save disable command is written to your configuration, crash information is dumped to your console screen only. Use the crashinfo save enable or no crashinfo save disable command to re-enable saving the crash information to Flash memory. The crashinfo test command provides a simulated crash information file, which it saves to Flash memory. It does not crash the firewall. Use the crashinfo test command to test your crash information file configuration without actually having to crash your firewall. 
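Putting these commands together, a typical sequence for verifying crash-information handling without rebooting the unit might look like the following sketch (the prompts and ordering are illustrative, not required):

```
pixfirewall(config)# crashinfo save enable     ! ensure saving to Flash (the default)
pixfirewall(config)# crashinfo test            ! write a simulated crash file; does not reboot
pixfirewall(config)# exit
pixfirewall# show crashinfo                    ! review the simulated file
pixfirewall# clear crashinfo                   ! remove the test file when finished
```

Note that running crashinfo test overwrites any crash information file already in Flash memory, so review an existing file before testing.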
However, if a previous crash information file was in Flash memory, the test crash information file overwrites it automatically. Caution Do not use the crashinfo force command in a production environment. The crashinfo force command truly crashes the firewall and forces it to reload. The crashinfo force page-fault command crashes the firewall as a result of a page fault, and the crashinfo force watchdog command crashes the firewall as a result of watchdogging. In the crash output, there is nothing that differentiates a real crash from a crash resulting from the crashinfo force page-fault or crashinfo force watchdog command (because these are real crashes). The firewall reloads after the crash dump is complete. This command is available only in configuration mode. If save to crash (crashinfo save enable) is enabled then the crash is first dumped to Flash memory and then to the console. Otherwise, it is only dumped to console. When the crashinfo force page-fault command is issued, a warning prompt similar to the following is displayed: pixfirewall(config)# crashinfo force page-fault WARNING: This command will force the PIX to crash and reboot. Do you wish to proceed? [confirm]: If you enter a carriage return (by pressing the return or enter key on your keyboard), “ Y”, or “y” the firewall crashes and reloads; all three of these are interpreted as confirmation. Any other character is interpreted as a no, and the firewall returns to the command-line configuration mode prompt. show crashinfo The show crashinfo save command displays whether or not the firewall is currently configured to save crash information to Flash memory. The show crashinfo command displays the crash information file that is stored in the Flash memory of the firewall. If the crash information file is from a test crash (from the crashinfo test command), the first string of the crash information file is “: Saved_Test_Crash” and the last one is “: End_Test_Crash”. 
If the crash information file is from a real crash, the first string of the crash information file is “: Saved_Crash” and the last one is “: End_Crash” (this includes crashes from use of the crashinfo force page-fault or crashinfo force watchdog commands). The clear crashinfo command deletes the crash information file from the Flash memory of the firewall. Examples The following example shows how to display the current crash information configuration: pixfirewall(config)# show crashinfo save crashinfo save enable The following example shows the output for a crash information file test. (However, this test does not actually crash the firewall. It provides a simulated example file.) pixfirewall(config)# crashinfo test pixfirewall(config)# exit pixfirewall# show crashinfo : Saved_Test_Crash Traceback: 0: 00323143 1: 0032321b 2: 0010885c 3: 0010763c 4: 001078db 5: 00103585 6: 00000000 vector 0x000000ff (user defined) edi 0x004f20c4 esi 0x00000000 ebp 0x00e88c20 esp 0x00e88bd8 ebx 0x00000001 edx 0x00000074 ecx 0x00322f8b eax 0x00322f8b error code n/a eip 0x0010318c cs 0x00000008 eflags 0x00000000 CR2 0x00000000 Stack dump: base:0x00e8511c size:16384, active:1476 0x00e89118: 0x004f1bb4 0x00e89114: 0x001078b4 0x00e89110-0x00e8910c: 0x00000000 0x00e89108-0x00e890ec: 0x12345678 0x00e890e8: 0x004f1bb4 0x00e890e4: 0x00103585 0x00e890e0: 0x00e8910c 0x00e890dc-0x00e890cc: 0x12345678 0x00e890c8: 0x00000000 0x00e890c4-0x00e890bc: 0x12345678 0x00e890b8: 0x004f1bb4 0x00e890b4: 0x001078db 0x00e890b0: 0x00e890e0 0x00e890ac-0x00e890a8: 0x12345678 0x00e890a4: 0x001179b3 0x00e890a0: 0x00e890b0 0x00e8909c-0x00e89064: 0x12345678 0x00e89060: 0x12345600 0x00e8905c: 0x20232970 0x00e89058: 0x616d2d65 0x00e89054: 0x74002023 0x00e89050: 0x29676966 0x00e8904c: 0x6e6f6328 0x00e89048: 0x31636573 0x00e89044: 0x7069636f 0x00e89040: 0x64786970 0x00e8903c-0x00e88e50: 0x00000000 0x00e88e4c: 0x000a7473 0x00e88e48: 0x6574206f 0x00e88e44: 0x666e6968 0x00e88e40: 0x73617263 0x00e88e3c-0x00e88e38: 0x00000000 
0x00e88e34: 0x12345600 0x00e88e30-0x00e88dfc: 0x00000000 0x00e88df8: 0x00316761 0x00e88df4: 0x74706100 0x00e88df0: 0x12345600 0x00e88dec-0x00e88ddc: 0x00000000 0x00e88dd8: 0x00000070 0x00e88dd4: 0x616d2d65 0x00e88dd0: 0x74756f00 0x00e88dcc: 0x00000000 0x00e88dc8: 0x00e88e40 0x00e88dc4: 0x004f20c4 0x00e88dc0: 0x12345600 0x00e88dbc: 0x00000000 0x00e88db8: 0x00000035 0x00e88db4: 0x315f656c 0x00e88db0: 0x62616e65 0x00e88dac: 0x0030fcf0 0x00e88da8: 0x3011111f 0x00e88da4: 0x004df43c 0x00e88da0: 0x0053fef0 0x00e88d9c: 0x004f1bb4 0x00e88d98: 0x12345600 0x00e88d94: 0x00000000 0x00e88d90: 0x00000035 0x00e88d8c: 0x315f656c 0x00e88d88: 0x62616e65 0x00e88d84: 0x00000000 0x00e88d80: 0x004f20c4 0x00e88d7c: 0x00000001 0x00e88d78: 0x01345678 0x00e88d74: 0x00f53854 0x00e88d70: 0x00f7f754 0x00e88d6c: 0x00e88db0 0x00e88d68: 0x00e88d7b 0x00e88d64: 0x00f53874 0x00e88d60: 0x00e89040 0x00e88d5c-0x00e88d54: 0x12345678 0x00e88d50-0x00e88d4c: 0x00000000 0x00e88d48: 0x004f1bb4 0x00e88d44: 0x00e88d7c 0x00e88d40: 0x00e88e40 0x00e88d3c: 0x00f53874 0x00e88d38: 0x004f1bb4 0x00e88d34: 0x0010763c 0x00e88d30: 0x00e890b0 0x00e88d2c: 0x00e88db0 0x00e88d28: 0x00e88d88 0x00e88d24: 0x0010761a 0x00e88d20: 0x00e890b0 0x00e88d1c: 0x00e88e40 0x00e88d18: 0x00f53874 0x00e88d14: 0x0010166d 0x00e88d10: 0x0000000e 0x00e88d0c: 0x00f53874 0x00e88d08: 0x00f53854 0x00e88d04: 0x0048b301 0x00e88d00: 0x00e88d30 0x00e88cfc: 0x0000000e 0x00e88cf8: 0x00f53854 0x00e88cf4: 0x0048a401 0x00e88cf0: 0x00f53854 0x00e88cec: 0x00f53874 0x00e88ce8: 0x0000000e 0x00e88ce4: 0x0048a64b 0x00e88ce0: 0x0000000e 0x00e88cdc: 0x00f53874 0x00e88cd8: 0x00f7f96c 0x00e88cd4: 0x0048b4f8 0x00e88cd0: 0x00e88d00 0x00e88ccc: 0x0000000f 0x00e88cc8: 0x00f7f96c 0x00e88cc4-0x00e88cc0: 0x0000000e 0x00e88cbc: 0x00e89040 0x00e88cb8: 0x00000000 0x00e88cb4: 0x00f5387e 0x00e88cb0: 0x00f53874 0x00e88cac: 0x00000002 0x00e88ca8: 0x00000001 0x00e88ca4: 0x00000009 0x00e88ca0-0x00e88c9c: 0x00000001 0x00e88c98: 0x00e88cb0 0x00e88c94: 0x004f20c4 0x00e88c90: 0x0000003a 
0x00e88c8c: 0x00000000 0x00e88c88: 0x0000000a 0x00e88c84: 0x00489f3a 0x00e88c80: 0x00e88d88 0x00e88c7c: 0x00e88e40 0x00e88c78: 0x00e88d7c 0x00e88c74: 0x001087ed 0x00e88c70: 0x00000001 0x00e88c6c: 0x00e88cb0 0x00e88c68: 0x00000002 0x00e88c64: 0x0010885c 0x00e88c60: 0x00e88d30 0x00e88c5c: 0x00727334 0x00e88c58: 0xa0ffffff 0x00e88c54: 0x00e88cb0 0x00e88c50: 0x00000001 0x00e88c4c: 0x00e88cb0 0x00e88c48: 0x00000002 0x00e88c44: 0x0032321b 0x00e88c40: 0x00e88c60 0x00e88c3c: 0x00e88c7f 0x00e88c38: 0x00e88c5c 0x00e88c34: 0x004b1ad5 0x00e88c30: 0x00e88c60 0x00e88c2c: 0x00e88e40 0x00e88c28: 0xa0ffffff 0x00e88c24: 0x00323143 0x00e88c20: 0x00e88c40 0x00e88c1c: 0x00000000 0x00e88c18: 0x00000008 0x00e88c14: 0x0010318c 0x00e88c10-0x00e88c0c: 0x00322f8b 0x00e88c08: 0x00000074 0x00e88c04: 0x00000001 0x00e88c00: 0x00e88bd8 0x00e88bfc: 0x00e88c20 0x00e88bf8: 0x00000000 0x00e88bf4: 0x004f20c4 0x00e88bf0: 0x000000ff 0x00e88bec: 0x00322f87 0x00e88be8: 0x00f5387e 0x00e88be4: 0x00323021 0x00e88be0: 0x00e88c10 0x00e88bdc: 0x004f20c4 0x00e88bd8: 0x00000000 * 0x00e88bd4: 0x004eabb0 0x00e88bd0: 0x00000001 0x00e88bcc: 0x00f5387e 0x00e88bc8-0x00e88bc4: 0x00000000 0x00e88bc0: 0x00000008 0x00e88bbc: 0x0010318c 0x00e88bb8-0x00e88bb4: 0x00322f8b 0x00e88bb0: 0x00000074 0x00e88bac: 0x00000001 0x00e88ba8: 0x00e88bd8 0x00e88ba4: 0x00e88c20 0x00e88ba0: 0x00000000 0x00e88b9c: 0x004f20c4 0x00e88b98: 0x000000ff 0x00e88b94: 0x001031f2 0x00e88b90: 0x00e88c20 0x00e88b8c: 0xffffffff 0x00e88b88: 0x00e88cb0 0x00e88b84: 0x00320032 0x00e88b80: 0x37303133 0x00e88b7c: 0x312f6574 0x00e88b78: 0x6972772f 0x00e88b74: 0x342f7665 0x00e88b70: 0x64736666 0x00e88b6c: 0x00020000 0x00e88b68: 0x00000010 0x00e88b64: 0x00000001 0x00e88b60: 0x123456cd 0x00e88b5c: 0x00000000 0x00e88b58: 0x00000008 outside: received (in 865565.090 secs): 6139 packets 830375 bytes 0 pkts/sec 0 bytes/sec transmitted (in 865565.090 secs): 90 packets 6160 bytes 0 pkts/sec 0 bytes/sec inside: received (in 865565.090 secs): 0 packets 0 bytes 0 pkts/sec 0 
bytes/sec Related Commands failover Enable or disable the PIX Firewall failover feature on a standby PIX Firewall. crypto dynamic-map Create, view, or delete a dynamic crypto map entry. Syntax Description dynamic-map-name Specify the name of the dynamic crypto map set. dynamic-seq-num Specify the sequence number that corresponds to the dynamic crypto map entry. subcommand Various subcommands (match address, set transform-set, and so on). tag map-name (Optional) Show the crypto dynamic map set with the specified map-name. Note The crypto dynamic-map subcommands, such as match address, set peer, and set pfs, are described with the crypto map command. If the peer initiates the negotiation and the local configuration specifies perfect forward secrecy (PFS), the peer must perform a PFS exchange or the negotiation will fail. If the local configuration does not specify a group, a default of group1 will be assumed, and an offer of either group1 or group2 will be accepted. If the local configuration specifies group2, that group must be part of the peer’s offer or the negotiation will fail. If the local configuration does not specify PFS, it will accept any offer of PFS from the peer. Usage Guidelines The sections that follow describe each crypto dynamic-map command. crypto dynamic-map The crypto dynamic-map command lets you create a dynamic crypto map entry. The no crypto dynamic-map command deletes a dynamic crypto map set or entry. The clear [crypto] dynamic-map command removes all of the dynamic crypto map command statements. Specifying the name of a given crypto dynamic map removes the associated crypto dynamic map command statement(s). You can also specify the dynamic crypto map’s sequence number to remove all of the associated dynamic crypto map command statements. The show crypto dynamic-map command lets you view a dynamic crypto map set. 
Dynamic crypto maps are policy templates used when processing negotiation requests for new security associations from a remote IPSec peer, even if you do not know all of the crypto map parameters required to communicate with the peer (such as the peer’s IP address). For example, if you do not know about all the remote IPSec peers in your network, a dynamic crypto map lets you accept requests for new security associations from previously unknown peers. (However, these requests are not processed until the IKE authentication has completed successfully.) When a PIX Firewall receives a negotiation request via IKE from another peer, the request is examined to see if it matches a crypto map entry. If the negotiation does not match any explicit crypto map entry, it will be rejected unless the crypto map set includes a reference to a dynamic crypto map. The dynamic crypto map accepts “wildcard” parameters for any parameters not explicitly stated in the dynamic crypto map entry. This lets you set up IPSec security associations with a previously unknown peer. (The peer still must specify matching values for the “wildcard” IPSec security association negotiation parameters.) If the PIX Firewall accepts the peer’s request, at the point that it installs the new IPSec security associations it also installs a temporary crypto map entry. This entry is filled in with the results of the negotiation. At this point, the PIX Firewall performs normal processing, using this temporary crypto map entry as a normal entry, even requesting new security associations if the current ones are expiring (based upon the policy specified in the temporary crypto map entry). Once the flow expires (that is, all of the corresponding security associations expire), the temporary crypto map entry is removed. The crypto dynamic-map command statements are used for determining whether or not traffic should be protected. 
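As a sketch of how this fits together (the map names, sequence numbers, and interface name below are placeholders), a dynamic crypto map is typically made available by referencing it from a high-numbered entry of a static crypto map set, so that explicit entries are matched first:

```
crypto ipsec transform-set myset esp-des esp-md5-hmac
crypto dynamic-map mydyn 10 set transform-set myset
crypto map mymap 100 ipsec-isakmp dynamic mydyn
crypto map mymap interface outside
```

Because the dynamic entry accepts wildcard parameters, placing it at the highest sequence number ensures that peers matching an explicit static entry are never handled by the template.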
The only parameter required in a crypto dynamic-map command statement is the set transform-set. All other parameters are optional. The following is sample output from the show crypto dynamic-map command: show crypto dynamic-map The following partial configuration was in effect when the preceding show crypto dynamic-map command was issued: crypto ipsec security-association lifetime seconds 120 crypto ipsec transform-set t1 esp-des esp-md5-hmac crypto ipsec transform-set tauth ah-sha-hmac crypto dynamic-map dyn1 10 set transform-set tauth t1 crypto dynamic-map dyn1 10 match address 152 crypto map to-firewall local-address Ethernet0 crypto map to-firewall 10 ipsec-isakmp crypto map to-firewall 10 set peer 172.21.114.123 The following example shows output from the show crypto map command for a crypto map named “mymap”: pixfirewall(config)# show crypto map Note The crypto map set transform-set command is required for dynamic crypto map entries. crypto ipsec Create, view, or delete IPSec security associations, security association global lifetime values, and global transform sets. Syntax Description address (Optional) Show all of the existing security associations, sorted by the destination address (either the local address or the address of the remote IPSec peer) and then by protocol (AH or ESP). esp-aes Selecting this option means that IPSec messages protected by this transform are encrypted using AES with a 128-bit key. esp-aes-192 Selecting this option means that IPSec messages protected by this transform are encrypted using AES with a 192-bit key. esp-aes-256 Selecting this option means that IPSec messages protected by this transform are encrypted using AES with a 256-bit key. destination-address Specify the IP address of your peer or the remote peer. detail (Optional) Show detailed error counters. identity (Optional) Show only the flow information. It does not show the security association information. 
kilobytes kilobytes Specify the volume of traffic (in kilobytes) that can pass between IPSec peers using a given security association before that security association expires. The default is 4,608,000 kilobytes (10 megabits per second for one hour). map map-name The name of the crypto map set. mode transport Specifies the transform set to accept transport mode requests in addition to the tunnel mode request. protocol Specify either the AH or ESP protocol. seconds seconds Specify the number of seconds a security association will live before it expires. The default is 28,800 seconds (eight hours). seq-num The number you assign to the crypto map entry. tag transform-set-name (Optional) Show only the transform sets with the specified transform-set-name. transform1 transform2 transform3 Specify up to three transforms. Transforms define the IPSec security protocol(s) and algorithm(s). Each transform represents an IPSec security protocol (ESP, AH, or both) plus the algorithm you want to use. transform-set-name Specify the name of the transform set to create or modify. Usage Guidelines The sections that follow describe each crypto ipsec command. To run the Known Answer Test (KAT), refer to the show crypto engine verify command. Shorter lifetimes can make it harder to mount a successful key recovery attack, because the attacker has less data encrypted under the same key to work with. However, shorter lifetimes require more CPU processing time for establishing new security associations. The lifetime values are ignored for manually established security associations (security associations installed using an ipsec-manual crypto map command entry). The security association (and corresponding keys) will expire according to whichever occurs sooner, either after the number of seconds has passed (specified by the seconds keyword) or after the amount of traffic in kilobytes has passed (specified by the kilobytes keyword). 
A new security association is negotiated before the lifetime threshold of the existing security association is reached, to ensure that a new security association is ready for use when the old one expires. The new security association is negotiated either 30 seconds before the seconds lifetime expires or when the volume of traffic through the tunnel reaches 256 kilobytes less than the kilobytes lifetime (whichever occurs first). If no traffic has passed through the tunnel during the entire life of the security association, a new security association is not negotiated when the lifetime expires. Instead, a new security association will be negotiated only when IPSec sees another packet that should be protected. Note If you make significant changes to an IPSec configuration, such as to access lists or peers, the clear [crypto] ipsec sa command does not enable the new configuration. In such a case, rebind the crypto map to the interface with the crypto map interface command. If the PIX Firewall is processing active IPSec traffic, we recommend that you only clear the portion of the security association database that is affected by the changes to avoid causing active IPSec traffic to temporarily fail. The clear [crypto] ipsec sa command only clears IPSec security associations; to clear IKE security associations, use the clear [crypto] isakmp sa command. 
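For example, with the default lifetimes, renegotiation for a busy tunnel would begin at roughly 28,770 seconds of security association age (28,800 minus 30) or after about 4,607,744 kilobytes of traffic (4,608,000 minus 256), whichever comes first. The global lifetimes can also be shortened, or overridden for an individual crypto map entry, as in the following sketch (the map name and sequence number are placeholders):

```
crypto ipsec security-association lifetime seconds 3600
crypto map mymap 10 set security-association lifetime kilobytes 2304000
```

A per-entry value set this way takes effect only for security associations negotiated under that crypto map entry; all other entries continue to use the global lifetimes.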
The following example clears (and reinitializes if appropriate) all IPSec security associations at the PIX Firewall: clear crypto ipsec sa The following example clears (and reinitializes if appropriate) the inbound and outbound IPSec security associations established along with the security association established for address 10.0.0.1 using the AH protocol with the SPI of 256: clear crypto ipsec sa entry 10.0.0.1 AH 256 Note While entering the show crypto ipsec sa command, if the screen display is stopped with the More prompt and the security association lifetime expires while the screen display is stopped, then the subsequent display information may refer to a stale security association. Assume that the security association lifetime values that display are invalid. Output from the show crypto ipsec sa command lists the PCP protocol. This is a compression protocol supplied with the Cisco IOS software code on which the PIX Firewall IPSec implementation is based; however, the PIX Firewall does not support the PCP protocol. Note A transport mode transform can only be used on a dynamic crypto map, and the PIX Firewall CLI will display an error if you attempt to tie a transport-mode transform to a static crypto map. Tunnel mode is automatically enabled for a transform set, so no mode needs to be explicitly configured when tunnel mode is desired. The firewall uses tunnel mode except when it is talking to a Windows 2000 L2TP/IPSec client, with which it uses transport mode. Use the crypto ipsec transform-set trans_name mode transport command to configure the firewall to negotiate with a Windows 2000 L2TP/IPSec client. To reset the mode to the default value of tunnel mode, use the no crypto ipsec transform-set trans_name mode transport command. The crypto ipsec transform-set command defines a transform set. To delete a transform set, use the no crypto ipsec transform-set command. To view the configured transform sets, use the show crypto ipsec transform-set command. 
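For example, a transform set intended for Windows 2000 L2TP/IPSec clients might be configured as follows (the transform-set name is a placeholder); per the restriction above, such a transport-mode transform set can only be referenced from a dynamic crypto map, not a static one:

```
crypto ipsec transform-set l2tp-set esp-des esp-md5-hmac
crypto ipsec transform-set l2tp-set mode transport
```

Entering the no form of the mode transport command returns the transform set to the default tunnel mode.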
A transform set specifies one or two IPSec security protocols (either ESP or AH or both) and specifies which algorithms to use with the selected security protocol. During the IPSec security association negotiation, the peers agree to use a particular transform set when protecting a particular data flow. IPSec messages can be protected by a transform set using AES with a 128-bit key, 192-bit key, or 256-bit key. The following example uses the AES 192-bit key transform: pixfirewall(config)# crypto ipsec transform-set standard esp-aes-192 esp-md5-hmac Due to the large key sizes provided by AES, ISAKMP negotiation should use Diffie-Hellman group 5 instead of group 1 or group 2. This is done with the isakmp policy priority group 5 command. You can configure multiple transform sets, and then specify one or more of these transform sets in a crypto map entry. The transform set defined in the crypto map entry is used in the IPSec security association negotiation to protect the data flows specified by that crypto map entry’s access list. During the negotiation, the peers search for a transform set that is the same at both peers. When such a transform set is found, it is selected and is applied to the protected traffic as part of both peers’ IPSec security associations. When security associations are established manually, a single transform set must be used. The transform set is not negotiated. Before a transform set can be included in a crypto map entry, it must be defined using the crypto ipsec transform-set command. To define a transform set, you specify one to three “transforms”—each transform represents an IPSec security protocol (ESP or AH) plus the algorithm you want to use. When the particular transform set is used during negotiations for IPSec security associations, the entire transform set (the combination of protocols, algorithms, and other settings) must match a transform set at the remote peer. 
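Combining the recommendations above, an AES-based configuration might pair a 256-bit transform set with Diffie-Hellman group 5 in the ISAKMP policy, as in this sketch (the transform-set name and the policy priority of 10 are placeholders):

```
crypto ipsec transform-set aes-strong esp-aes-256 esp-sha-hmac
isakmp policy 10 group 5
```

Both peers must offer a matching transform set and a compatible ISAKMP policy for the negotiation to succeed.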
In a transform set you can specify the AH protocol or the ESP protocol. If you specify an ESP protocol in a transform set, you can specify just an ESP encryption transform or both an ESP encryption transform and an ESP authentication transform. Examples of acceptable transform combinations are as follows: • ah-md5-hmac • esp-des • esp-des and esp-md5-hmac • ah-sha-hmac and esp-des and esp-sha-hmac If one or more transforms are specified in the crypto ipsec transform-set command for an existing transform set, the specified transforms will replace the existing transforms for that transform set. If you change a transform set definition, the change is only applied to crypto map entries that reference the transform set. For more information about transform sets, refer to the Cisco PIX Firewall and VPN Configuration Guide. Examples The following example shortens the IPSec SA lifetimes. The time-out lifetime is shortened to 2700 seconds (45 minutes), and the traffic-volume lifetime is shortened to 2,304,000 kilobytes (10 megabits per second for one half hour). crypto ipsec security-association lifetime seconds 2700 crypto ipsec security-association lifetime kilobytes 2304000 The following is sample output from the show crypto ipsec security-association lifetime command: show crypto ipsec security-association lifetime Security-association lifetime: 4608000 kilobytes/120 seconds The following configuration was in effect when the preceding show crypto ipsec security-association lifetime command was issued: crypto ipsec security-association lifetime seconds 120 This example defines one transform set (named “standard”), which is used with an IPSec peer that supports the ESP protocol. Both an ESP encryption transform and an ESP authentication transform are specified in this example. 
crypto ipsec transform-set standard esp-des esp-md5-hmac

The following is sample output for the show crypto ipsec transform-set command:

show crypto ipsec transform-set
{ esp-des }
   will negotiate = { Tunnel, },

The following configuration was in effect when the preceding show crypto ipsec transform-set command was issued:

crypto ipsec transform-set combined-des-sha esp-des esp-sha-hmac
crypto ipsec transform-set combined-des-md5 esp-des esp-md5-hmac
crypto ipsec transform-set t1 esp-des esp-md5-hmac
crypto ipsec transform-set t100 ah-sha-hmac
crypto ipsec transform-set t2 ah-sha-hmac esp-des

The following is sample output from the show crypto ipsec sa command:

show crypto ipsec sa
interface: outside
    Crypto map tag: firewall-robin, local addr. 172.21.114.123
      spi: 0x257A1039(628756537)
        transform: esp-des esp-md5-hmac ,
        in use settings ={Tunnel, }
        slot: 0, conn id: 26, crypto map: firewall-robin
        sa timing: remaining key lifetime (k/sec): (4607999/90)
        IV size: 8 bytes
        replay detection support: Y
    inbound ah sas:
    outbound esp sas:
      spi: 0x20890A6F(545852015)
        transform: esp-des esp-md5-hmac ,
        in use settings ={Tunnel, }
        slot: 0, conn id: 27, crypto map: firewall-robin
        sa timing: remaining key lifetime (k/sec): (4607999/90)
        IV size: 8 bytes
        replay detection support: Y
    outbound ah sas:

crypto map

Create, modify, view, or delete a crypto map entry. Also used to delete a crypto map set.

[no] crypto map map-name seq-num set security-association lifetime seconds seconds | kilobytes kilobytes
[no] crypto map map-name seq-num set session-key inbound | outbound ah spi hex-key-string
[no] crypto map map-name seq-num set session-key inbound | outbound esp spi cipher hex-key-string [authenticator hex-key-string]

Syntax Description

aaa-server-name  The name of the AAA server that will authenticate the user during IKE authentication. The AAA server options available are TACACS+, RADIUS, or LOCAL.
If LOCAL is specified and the local user credential database is empty, the following warning message appears:

Warning:local database is empty! Use 'username' command to define local users.

Conversely, if the local database becomes empty when LOCAL is still present in the command, the following warning message appears:

Warning:Local user database is empty and there are still commands using LOCAL for authentication.

acl_name  Identify the named encryption access list. This name should match the name argument of the named encryption access list being matched.

ah  Set the IPSec session key for the AH protocol. Specify ah when the crypto map entry's transform set includes an AH transform. The AH protocol provides authentication via MD5-HMAC and SHA-HMAC.

authenticator  (Optional) Indicate that the key string is to be used with the ESP authentication transform. This argument is required only when the crypto map entry's transform set includes an ESP authentication transform.

cipher  Indicate that the key string is to be used with the ESP encryption transform.

dynamic  (Optional) Specify that this crypto map entry is to reference a pre-existing dynamic crypto map.

dynamic-map-name  (Optional) Specify the name of the dynamic crypto map set to be used as the policy template.

esp  Set the IPSec session key for the ESP protocol. Specify esp when the crypto map entry's transform set includes an ESP transform. The ESP protocol provides authentication, confidentiality, or both. Authentication is done via MD5-HMAC, SHA-HMAC, or NULL; confidentiality is done via DES, 3DES, or NULL.

group1  Specify that IPSec should use the 768-bit Diffie-Hellman prime modulus group when performing the new Diffie-Hellman exchange.

group2  Specify that IPSec should use the 1024-bit Diffie-Hellman prime modulus group when performing the new Diffie-Hellman exchange.

hex-key-string  Specify the session key; enter in hexadecimal format. This is an arbitrary hexadecimal string of 16, 32, or 40 digits.
If the crypto map's transform set includes the following:
• DES algorithm, specify at least 16 hexadecimal digits per key.
• MD5 algorithm, specify at least 32 hexadecimal digits per key.
• SHA algorithm, specify 40 hexadecimal digits per key.
Longer key sizes are simply hashed to the appropriate length.

hostname  Specify a peer by its IP address, or by its host name as defined by the PIX Firewall name command.

inbound  Set the inbound IPSec session key. (You must set both inbound and outbound keys.)

initiate  Indicate that the PIX Firewall will attempt to set IP addresses for each peer.

interface interface-name  Specify the identifying interface to be used by the PIX Firewall to identify itself to peers. If IKE is enabled, and you are using a certification authority (CA) to obtain certificates, this should be the interface with the address specified in the CA certificates.

ip_address  Specify a peer by its IP address.

ipsec-isakmp  Indicate that IKE will be used to establish the IPSec security associations for protecting the traffic specified by this crypto map entry.

ipsec-manual  Indicate that IKE will not be used to establish the IPSec security associations for protecting the traffic specified by this crypto map entry. Note: Manual configuration of SAs is not supported on the PIX 501.

kilobytes kilobytes  Specify the volume of traffic (in kilobytes) that can pass between peers using a given security association before that security association expires. The default is 4,608,000 kilobytes.

map map-name  The name of the crypto map set.

match address  Specify an access list for a crypto map entry.

outbound  Set the outbound IPSec session key. (You must set both inbound and outbound keys.)

respond  Indicate that the PIX Firewall will accept requests for IP addresses from any requesting peer.

seconds seconds  Specify the number of seconds a security association will live before it expires. The default is 28,800 seconds (eight hours).
seq-num  The number you assign to the crypto map entry.

set peer  Specify an IPSec peer in a crypto map entry.

set pfs  Specify that IPSec should ask for perfect forward secrecy (PFS). With PFS, every time a new security association is negotiated, a new Diffie-Hellman exchange occurs. (This exchange requires additional processing time.)

set security-association lifetime  Set the lifetime of a security association in either seconds or kilobytes. For use with either the seconds or kilobytes keyword.

set session-key  Manually specify the IPSec session keys within a crypto map entry.

set transform-set  Specify which transform sets can be used with the crypto map entry.

spi  The security parameter index. You can assign the same SPI to both directions and both protocols. However, not all peers have the same flexibility in SPI assignment. For a given destination address/protocol combination, unique SPI values must be used. The destination address is that of the PIX Firewall if inbound, the peer if outbound.

tag map-name  (Optional) Show the crypto map set with the specified map name.

token  Indicate that a token-based server for user authentication is used.

Usage Guidelines

The sections that follow describe each crypto map command.

Note: If a crypto map map-name client configuration address initiate | respond command configuration exists on the firewall, then the Cisco VPN Client version 3.x uses it.

Note: Normally, when Xauth is enabled, an entry is added to the uauth table (as shown by the show uauth/clear uauth command) for the IP address assigned to the client. However, when using Xauth with the Easy VPN Remote feature in Network Extension Mode, the IPSec tunnel is created from network to network, so the users behind the firewall cannot be associated with a single IP address. For this reason, a uauth entry cannot be created upon completion of Xauth. If AAA authorization or accounting services are required, you can enable the AAA authentication proxy to authenticate users behind the firewall.
For more information on AAA authentication proxies, refer to the aaa commands.

You cannot enable Xauth or IKE Mode Configuration on an interface when terminating an L2TP/IPSec tunnel using the Microsoft L2TP/IPSec client v1.0 (which is available on the Windows NT, Windows XP, Windows 98, and Windows ME operating systems). Instead, you can do either of the following:
• Use a Windows 2000 L2TP/IPSec client, or
• Use the isakmp key keystring address ip_address netmask mask no-xauth no-config-mode command to exempt the L2TP client from Xauth and IKE Mode Configuration.

However, if you exempt the L2TP client from Xauth or IKE Mode Configuration, all the L2TP clients must be grouped with the same ISAKMP pre-shared key or certificate and have the same fully qualified domain name.

The crypto map client token authentication command enables the PIX Firewall to interoperate with a Cisco VPN 3000 Client that is set up to use a token-based server for user authentication. The keyword token tells the PIX Firewall that the AAA server uses a token-card system and to prompt the user for username and password during IKE authentication. Use the no crypto map client token authentication command to restore the default value.

Note: If you use IKE Mode Configuration on the PIX Firewall, the routers handling the IPSec traffic must also support IKE Mode Configuration. Cisco IOS Release 12.0(6)T and higher supports IKE Mode Configuration. Refer to the Cisco PIX Firewall and VPN Configuration Guide for more information about IKE Mode Configuration.

The following examples show how to configure IKE Mode Configuration on your PIX Firewall:

crypto map mymap client configuration address initiate
crypto map mymap client configuration address respond

Note: While a new crypto map instance is being added to the PIX Firewall, all clear and SSH traffic to the firewall interface stops because the crypto peer/ACL pair has not yet been defined.
To work around this, use PIX Device Manager (PDM) to add the new crypto map instance, or, through the PIX Firewall CLI, remove the crypto map interface command from your configuration, add the new crypto map instance and fully configure the crypto peer/ACL pair, and then reapply the crypto map interface command to the interface. In some conditions the CLI workaround is not acceptable because it also temporarily stops VPN traffic. The use of the crypto map interface command reinitializes the security association database, causing any currently established security associations to be deleted.

The following example assigns the crypto map set "mymap" to the outside interface. When traffic passes through the outside interface, the traffic is evaluated against all the crypto map entries in the "mymap" set. When outbound traffic matches an access list in one of the "mymap" crypto map entries, a security association (if IPSec) is established per that crypto map entry's configuration (if no security association or connection already exists).
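The CLI workaround can be sketched as the following sequence; the map name mymap, sequence number 20, access list 120, peer address, and transform set name are hypothetical values chosen for illustration:

no crypto map mymap interface outside
crypto map mymap 20 ipsec-isakmp
crypto map mymap 20 match address 120
crypto map mymap 20 set peer 10.0.0.9
crypto map mymap 20 set transform-set my_t_set1
crypto map mymap interface outside

Because reapplying the crypto map interface command reinitializes the security association database, established tunnels renegotiate after the final step.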
crypto map mymap interface outside

The following is sample output from the show crypto map command:

show crypto map

The following configuration was in effect when the preceding show crypto map command was issued:

crypto map firewall-robin 10 ipsec-isakmp
crypto map firewall-robin 10 set peer 172.21.114.67
crypto map firewall-robin 10 set transform-set t1
crypto map firewall-robin 10 match address 141

The following is sample output from the show crypto map command when manually established security associations are used:

show crypto map

The following configuration was in effect when the preceding show crypto map command was issued:

crypto map multi-peer 20 ipsec-manual
crypto map multi-peer 20 set peer 172.21.114.67
crypto map multi-peer 20 set session-key inbound ah 256 010203040506070809010203040506070809010203040506070809
crypto map multi-peer 20 set session-key outbound ah 256 010203040506070809010203040506070809010203040506070809
crypto map multi-peer 20 set transform-set t2
crypto map multi-peer 20 match address 120

Note: The crypto map command without a keyword creates an ipsec-isakmp entry by default.

After you define crypto map entries, you can use the crypto map interface command to assign the crypto map set to interfaces.

Crypto maps provide two functions: filtering/classifying traffic to be protected, and defining the policy to be applied to that traffic. The first use affects the flow of traffic on an interface; the second affects the negotiation performed (via IKE) on behalf of that traffic.
IPSec crypto maps link together definitions of the following:
• What traffic should be protected
• Which IPSec peer(s) the protected traffic can be forwarded to; these are the peers with which a security association can be established
• Which transform sets are acceptable for use with the protected traffic
• How keys and security associations should be used/managed (or what the keys are, if IKE is not used)

A crypto map set is a collection of crypto map entries, each with a different seq-num but the same map-name. Therefore, for a given interface, you could have certain traffic forwarded to one peer with specified security applied to that traffic, and other traffic forwarded to the same or a different peer with different IPSec security applied. To accomplish this you would create two crypto map entries, each with the same map-name, but each with a different seq-num.

Note: Every static crypto map must define an access list and an IPSec peer. If either is missing, the crypto map is considered incomplete, and any traffic that has not already been matched to an earlier, complete crypto map is dropped. Use the show conf command to ensure that every crypto map is complete. To fix an incomplete crypto map, remove the crypto map, add the missing entries, and reapply it.
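For example, the following sketch forwards traffic matching one access list to one peer and traffic matching a second access list to a different peer; the map name, sequence numbers, access list numbers, peer addresses, and transform set names are hypothetical:

crypto map mymap 10 ipsec-isakmp
crypto map mymap 10 match address 101
crypto map mymap 10 set peer 10.0.0.1
crypto map mymap 10 set transform-set my_t_set1
crypto map mymap 20 ipsec-isakmp
crypto map mymap 20 match address 102
crypto map mymap 20 set peer 10.0.0.2
crypto map mymap 20 set transform-set my_t_set2

Both entries belong to the single crypto map set "mymap", which is then assigned to an interface as a unit.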
The following example shows the minimum required crypto map configuration when IKE will be used to establish the security associations:

crypto map mymap 10 ipsec-isakmp
crypto map mymap 10 match address 101
crypto map mymap 10 set transform-set my_t_set1
crypto map mymap 10 set peer 10.0.0.1

The following example shows the minimum required crypto map configuration when the security associations are manually established:

crypto ipsec transform-set someset ah-md5-hmac esp-des
crypto map mymap 10 ipsec-manual
crypto map mymap 10 match address 102
crypto map mymap 10 set transform-set someset
crypto map mymap 10 set peer 10.0.0.5
crypto map mymap 10 set session-key inbound ah 256 98765432109876549876543210987654
crypto map mymap 10 set session-key outbound ah 256 fedcbafedcbafedcfedcbafedcbafedc
crypto map mymap 10 set session-key inbound esp 256 cipher 0123456789012345
crypto map mymap 10 set session-key outbound esp 256 cipher abcdefabcdefabcd

Note: The crypto access list is not used to determine whether to permit or deny traffic through the interface. An access list applied directly to the interface with the access-group command makes that determination.

The crypto access list specified by this command is used when evaluating both inbound and outbound traffic. Outbound traffic is evaluated against the crypto access lists specified by the interface's crypto map entries to determine if it should be protected by crypto and, if so (if traffic matches a permit entry), which crypto policy applies. (If necessary, in the case of static IPSec crypto maps, new security associations are established using the data flow identity as specified in the permit entry; in the case of dynamic crypto map entries, if no security association exists, the packet is dropped.) Inbound traffic is evaluated against the crypto access lists specified by the entries of the interface's crypto map set to determine if it should be protected by crypto and, if so, which crypto policy applies.
(In the case of IPSec, unprotected traffic is discarded because it should have been protected by IPSec.)

The access list is also used to identify the flow for which the IPSec security associations are established. In the outbound case, the permit entry is used as the data flow identity (in general). In the inbound case, the data flow identity specified by the peer must be "permitted" by the crypto access list.

The following example shows the minimum required crypto map configuration when IKE will be used to establish the security associations. (This example is for a static crypto map.)

crypto map mymap 10 ipsec-isakmp
crypto map mymap 10 match address 101
crypto map mymap 10 set transform-set my_t_set1
crypto map mymap 10 set peer 10.0.0.1

Note: IKE negotiations with a remote peer may hang when a PIX Firewall has numerous tunnels that originate from the PIX Firewall and terminate on a single remote peer. This problem occurs when PFS is not enabled and the local peer requests many simultaneous rekey requests. If this problem occurs, the IKE security association will not recover until it has timed out or until you manually clear it with the clear [crypto] isakmp sa command. PIX Firewall units configured with many tunnels to many peers, or with many clients sharing the same tunnel, are not affected by this problem. If your configuration is affected, enable PFS with the crypto map mapname seqnum set pfs command.

The following example specifies that PFS should be used whenever a new security association is negotiated for the crypto map "mymap 10":

crypto map mymap 10 ipsec-isakmp
crypto map mymap 10 set pfs group2

The following example shows a crypto map entry for manually established security associations. The transform set "someset" includes both an AH and an ESP protocol, so session keys are configured for both AH and ESP for both inbound and outbound traffic.
The transform set includes both encryption and authentication ESP transforms, so session keys are created for both using the cipher and authenticator keywords.

crypto ipsec transform-set someset ah-sha-hmac esp-des esp-sha-hmac
crypto map mymap 10 ipsec-manual
crypto map mymap 10 match address 101
crypto map mymap 10 set transform-set someset
crypto map mymap 10 set peer 10.0.0.1
crypto map mymap 10 set session-key inbound ah 300 9876543210987654321098765432109876543210
crypto map mymap 10 set session-key outbound ah 300 fedcbafedcbafedcbafedcbafedcbafedcbafedc
crypto map mymap 10 set session-key inbound esp 300 cipher 0123456789012345 authenticator 0000111122223333444455556666777788889999
crypto map mymap 10 set session-key outbound esp 300 cipher abcdefabcdefabcd authenticator 9999888877776666555544443333222211110000

This command is required for all static and dynamic crypto map entries.

For an ipsec-isakmp crypto map entry, you can list up to six transform sets with this command. List the higher-priority transform sets first. If the local PIX Firewall initiates the negotiation, the transform sets are presented to the peer in the order specified in the crypto map command statement. If the peer initiates the negotiation, the local PIX Firewall accepts the first transform set that matches one of the transform sets specified in the crypto map entry.

The first matching transform set that is found at both peers is used for the security association. If no match is found, IPSec will not establish a security association. The traffic will be dropped because there is no security association to protect the traffic.

For an ipsec-manual crypto map command statement, you can specify only one transform set. If the transform set does not match the transform set at the remote peer's crypto map, the two peers will fail to correctly communicate because the peers are using different rules to process the traffic.
If you want to change the list of transform sets, respecify the new list of transform sets to replace the old list. This change is applied only to crypto map command statements that reference the transform sets. Any transform sets included in a crypto map command statement must previously have been defined using the crypto ipsec transform-set command.

Examples

The following example shows how the crypto map client authentication command is used with a TACACS+ server:

ip local pool dealer 10.1.2.1-10.1.2.254
nat (inside) 0 access-list 80
aaa-server TACACS+ protocol tacacs+
aaa-server TACACS+ ...

The following example shows how the crypto map client token authentication command is used with a RADIUS server:

ip local pool dealer 10.1.2.1-10.1.2.254
nat (inside) 0 access-list 80
aaa-server RADIUS protocol radius
aaa-server RADIUS ...
... token authentication RADIUS

The next example defines two transform sets and specifies that they can both be used within a crypto map entry. (This example applies only when IKE is used to establish security associations. With crypto maps used for manually established security associations, only one transform set can be included in a given crypto map command statement.)

crypto ipsec transform-set my_t_set1 esp-des esp-sha-hmac
crypto ipsec transform-set my_t_set2 ah-sha-hmac esp-des esp-sha-hmac
crypto map mymap 10 ipsec-isakmp
crypto map mymap 10 match address 101
crypto map mymap 10 set transform-set my_t_set1 my_t_set2
crypto map mymap 10 set peer 10.0.0.1 10.0.0.2

In this example, when traffic matches access list 101, the security association can use either transform set "my_t_set1" (first priority) or "my_t_set2" (second priority), depending on which transform set matches the remote peer's transform sets.

debug

You can debug packets or ICMP tracings through the PIX Firewall. The debug command provides information that helps troubleshoot protocols operating with and through the PIX Firewall.
[no] debug ospf [adj | database-timer | events | flood | lsa-generation | packet | tree | retransmission | spf [external | internal | intra]]
[no] debug ntp [adjust | authentication | events | loopfilter | packets | params | select | sync | validity]
[no] debug packet if_name [src source_ip [netmask mask]] [dst dest_ip [netmask mask]] [[proto icmp] | [proto tcp [sport src_port] [dport dest_port]] | [proto udp [sport src_port] [dport dest_port]]] [rx | tx | both]
no debug all
undebug all
show debug

level  The level of debugging feedback. The higher the level number, the more information is displayed. The default level is 1. The levels correspond to the following events:
• Level 1: Interesting events
• Level 2: Normative and interesting events
• Level 3: Diminutive, normative, and interesting events
Refer to the "Examples" section at the end of this command page for an example of how the debugging level appears within the show debug command.

loopfilter  Displays NTP loop filter information.
messages  Displays debug information for MGCP messages.
negotiation  Equivalent of the error, uauth, upap, and chap debug command options.
netmask mask  Network mask.
packet  Displays packet information.
packets  Displays NTP packet information.
params  Displays NTP clock parameters.
parser  Displays debug information about parsing MGCP messages.
pdm history  Turns on the PDM history metrics debugging information. The no version of this command disables PDM history metrics debugging.
ppp  Debugs L2TP or PPTP traffic, which is configured with the vpdn command.
ppp error  Displays L2TP or PPTP PPP virtual interface error messages.
ppp io  Displays the packet information for the L2TP or PPTP PPP virtual interface.
ppp uauth  Displays the L2TP or PPTP PPP virtual interface AAA user authentication debugging messages.
pppoe error  Displays PPPoE error messages.
pppoe event  Displays PPPoE event information.
pppoe packet  Displays PPPoE packet information.
pptp  Displays PPTP traffic information.
proto icmp  Displays ICMP packets only.
proto tcp  Displays TCP packets only.
proto udp  Displays UDP packets only.
radius all  Enables all RADIUS debug options.
radius session  Logs RADIUS session information and the attributes of sent and received RADIUS packets.
ras asn  Displays the output of the decoded PDUs.
ras events  Displays the events of the RAS signaling, or turns both traces on.
route  Displays information from the PIX Firewall routing module.
rx  Displays only packets received at the PIX Firewall.
select  Displays NTP clock selections.
sessions  Displays debug information for MGCP sessions.
sip  Debugs the fixup Session Initiation Protocol (SIP) module.
skinny  Debugs SCCP protocol activity. (Using this option is system-resource intensive and may impact performance on high-traffic network segments.)
sport src_port  Source port. See the "Ports" section in Chapter 2, "Using PIX Firewall Commands," for a list of valid port literal names.
sqlnet  Debugs SQL*Net traffic.
src source_ip  Source IP address.
ssh  Displays debug information and error messages associated with the ssh command.
ssl  Displays debug information and error messages associated with the ssl command.
standard  Displays non-TurboACL access list information.
sync  Displays NTP clock synchronization.
turbo  Displays TurboACL access list information.
tx  Displays only packets that were transmitted from the PIX Firewall.
upap  Displays PAP authentication.
user username  Specifies to display information for an individual username only.
validity  Displays NTP peer clock validity.
vpdn error  Displays L2TP or PPTP protocol error messages.
vpdn event  Displays L2TP or PPTP tunnel event change information.
vpdn packet  Displays L2TP or PPTP packet information about PPTP traffic.
xdmcp  Displays information about the XDMCP negotiation.

Usage Guidelines

The debug command lets you view debug information. The show debug command displays the current state of tracing.
You can debug the contents of network layer protocol packets with the debug packet command.

Note: Use of the debug commands may slow down traffic on busy networks. Use of the debug packet command on a PIX Firewall experiencing a heavy load may result in the output displaying so fast that it may be impossible to stop the output by entering the no debug packet command from the console. In this case, you can enter the no debug packet command from a Telnet session.

To let users ping through the PIX Firewall, add the access-list acl_grp permit icmp any any command statement to the configuration and bind it to each interface you want to test with the access-group command. This lets pings go outbound and inbound.

To stop a debug packet trace, enter the following command:

no debug packet if_name

Replace if_name with the name of the interface; for example, inside, outside, or a perimeter interface name.

debug crypto

When creating your digital certificates, use the debug crypto ca command to ensure that the certificate is created correctly. Important error messages display only when the debug crypto ca command is enabled. For example, if you enter an Entrust fingerprint value incorrectly, the only warning message that indicates the value is incorrect appears in the debug crypto ca command output. Output from the debug crypto ipsec and debug crypto isakmp commands does not display in a Telnet console session.

debug dhcpc

The debug dhcpc detail command displays detailed packet information about the DHCP client. The debug dhcpc error command displays DHCP client error messages. The debug dhcpc packet command displays packet information about the DHCP client. Use the no form of the debug dhcpc command to disable debugging.

The debug dhcpd event command displays event information about the DHCP server. The debug dhcpd packet command displays packet information about the DHCP server. Use the no form of the debug dhcpd commands to disable debugging.
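The ping-test setup described above can be sketched as follows; the access list name pingtest and the interface names are hypothetical values chosen for illustration:

access-list pingtest permit icmp any any
access-group pingtest in interface outside
access-group pingtest in interface inside
debug icmp trace

With the access list bound to both interfaces, echo requests and replies pass in both directions, and the debug icmp trace output shows each packet on the console.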
debug h323

The debug h323 command lets you debug H.323 connections. Use the no form of the command to disable debugging. This command works when the fixup protocol h323 command is enabled.

Note: The debug h323 command, particularly the debug h323 h225 asn, debug h323 h245 asn, and debug h323 ras asn commands, might delay the sending of messages and cause slower performance in a real-time environment.

debug icmp

The debug icmp trace command shows ICMP packet information, the source IP address, and the destination address of packets arriving, departing, and traversing the PIX Firewall, including pings to the PIX Firewall unit's own interfaces. To stop a debug icmp trace, enter the following command:

no debug icmp trace

debug mgcp

The debug mgcp command displays debug information for Media Gateway Control Protocol (MGCP) traffic. Without any options explicitly specified, the debug mgcp command enables all three MGCP debug options. The no debug mgcp command, without any options explicitly specified, disables all MGCP debugging.

debug ospf

The debug ospf command enables all OSPF debugging options, and the no debug ospf command disables all OSPF debugging options. The debug ospf spf command enables all SPF options, and the no debug ospf spf command disables all SPF options.

debug sqlnet

The debug sqlnet command reports on traffic between Oracle SQL*Net clients and servers through the PIX Firewall.

debug ssh

The debug ssh command reports on information and error messages associated with the ssh command.

debug pptp

The debug pptp and debug vpdn commands provide information about PPTP traffic. PPTP is configured with the vpdn command.

debug fover

Table 5-1 lists the options for the debug fover command.
Option  Description
cable  Failover cable status
fail  Failover internal exception
fmsg  Failover message
get  IP network packet received
ifc  Network interface status trace
lanrx  LAN-based failover receive process messages
lanretx  LAN-based failover retransmit process messages
lantx  LAN-based failover transmit process messages
lancmd  LAN-based failover main thread messages
open  Failover device open
put  IP network packet transmitted
rx  Failover cable receive
rxdmp  Cable recv message dump (serial console only)
rxip  IP network failover packet received
tx  Failover cable transmit
txdmp  Cable xmit message dump (serial console only)
txip  IP network failover packet transmit
verify  Failover message verify
switch  Failover switching status

If a debug command does not use Trace Channel, each session operates independently, which means any commands started in the session appear only in that session. By default, a session not using Trace Channel has output disabled.

The location of the Trace Channel depends on whether you have a simultaneous Telnet console session running at the same time as the console session, or if you are using only the PIX Firewall serial console:
• If you are only using the PIX Firewall serial console, all debug commands display on the serial console.
• If you have both a serial console session and a Telnet console session accessing the console, then no matter where you enter the debug commands, the output displays on the Telnet console session.
• If you have two or more Telnet console sessions, the first session is the Trace Channel. If that session closes, the serial console session becomes the Trace Channel. The next Telnet console session that accesses the console will then become the Trace Channel.

The debug commands, except the debug crypto commands, are shared between all Telnet and serial console sessions.
Note: The downside of the Trace Channel feature is that if one administrator is using the serial console and another administrator starts a Telnet console session, the serial console debug command output suddenly stops without warning. In addition, the administrator on the Telnet console session suddenly starts viewing debug command output, which may be unexpected. If you are using the serial console and debug command output does not appear, use the who command to see if a Telnet console session is running.

Examples

The following is partial sample output from the debug dhcpc packet and debug dhcpc detail commands. The ip address dhcp setroute command was configured after entering the debug dhcpc commands to obtain debugging information.

debug dhcpc packet
debug dhcpc detail
ip address outside dhcp setroute

DHCP:allocate request
DHCP:new entry. add to queue
DHCP:new ip lease str = 0x80ce8a28
DHCP:SDiscover attempt # 1 for entry:
Temp IP addr:0.0.0.0 for peer on Interface:outside
Temp sub net mask:0.0.0.0
DHCP Lease server:0.0.0.0, state:1 Selecting
DHCP transaction id:0x8931
Lease:0 secs, Renewal:0 secs, Rebind:0 secs
Next timer fires after:2 seconds
Retry count:1 Client-ID:cisco-0000.0000.0000-outside

When you ping a host through the PIX Firewall from any interface, trace output displays on the console. The following example shows a successful ping from an external host (209.165.201.2) to the PIX Firewall unit's outside interface (209.165.201.1).

no debug icmp trace
ICMP trace off

This example shows that the ICMP packet length is 32 bytes, the ICMP packet identifier is 1, and the ICMP sequence number of each request. The ICMP sequence number starts at 0 and is incremented each time a request is sent.
The following is sample output from the show debug command:
show debug
debug ppp error
debug vpdn event
debug crypto ipsec 1
debug crypto isakmp 1
debug crypto ca 1
debug icmp trace
debug packet outside both
debug sqlnet
check_isakmp_proposal: is_auth_policy_configured: auth 1
is_auth_policy_configured: auth 4
ISAKMP (0): Checking ISAKMP transform 1 against priority 8 policy
ISAKMP: encryption 3DES-CBC
ISAKMP: hash SHA
ISAKMP: default group 5
ISAKMP: extended auth RSA sig
ISAKMP: life type in seconds
ISAKMP: life duration (VPI) of 0x0 0x20 0xc4 0x9b
ISAKMP (0): atts are not acceptable. Next payload is 3
ISAKMP (0): Checking ISAKMP transform 2 against priority 8 policy
ISAKMP: encryption 3DES-CBC
ISAKMP: hash MD5
ISAKMP: default group 5
ISAKMP: extended auth RSA sig
ISAKMP: life type in seconds
ISAKMP: life duration (VPI) of 0x0 0x20 0xc4 0x9b
The following example shows possible output for the debug mgcp parser command:
28: MGCP packet: RSIP 1 d001@10.10.10.11 MGCP 1.0
RM: restart
The following example shows possible output for the debug mgcp sessions command:
91: NAT::requesting UDP conn for generic-pc-2/6166 [209.165.202.128/0] from dmz/ca:generic-pc-2/2427 to outside:generic-pc-1/2727
92: NAT::reverse route: embedded host at dmz/ca:generic-pc-2/6166
93: NAT::table route: embedded host at outside:209.165.202.128/0
94: NAT::pre-allocate connection for outside:209.165.202.128 to dmz/ca:generic-pc-2/6166
95: NAT::found inside xlate from dmz/ca:generic-pc-2/0 to outside:209.165.201.15/0
96: NAT::outside NAT not needed
97: NAT::created UDP conn dmz/ca:generic-pc-2/6166 <-> outside:209.165.202.128/0
98: NAT::created RTCP conn dmz/ca:generic-pc-2/6167 <-> outside:209.165.202.128/0
99: NAT::requesting UDP conn for 209.165.202.128/6058 [generic-pc-2/0] from dmz/ca:generic-pc-2/2427 to outside:generic-pc-1/2727
100: NAT::table route: embedded host at outside:209.165.202.128/6058
101: NAT::reverse route: embedded host at dmz/ca:generic-pc-2/0
102:
NAT::pre-allocate connection for dmz/ca:generic-pc-2 to outside:209.165.202.128/6058
103: NAT::found inside xlate from dmz/ca:generic-pc-2/0 to outside:209.165.201.15/0
104: NAT::outside NAT not needed
105: NAT::created UDP conn dmz/ca:generic-pc-2/0 <-> outside:209.165.202.128/6058
106: NAT::created RTCP conn dmz/ca:generic-pc-2/0 <-> outside:209.165.202.128/6059
107: MGCP: New session
Gateway IP generic-pc-2
Call ID 9876543210abcdef
Connection ID 6789af54c9
Endpoint name aaln/1
Media lcl port 6166
Media rmt IP 209.165.202.128
Media rmt port 6058
108: MGCP: Expired session, active 0:06:05
Gateway IP generic-pc-2
Call ID 9876543210abcdef
Connection ID 6789af54c9
Endpoint name aaln/1
Media lcl port 6166
Media rmt IP 209.165.202.128
Media rmt port 6058
You can debug the contents of packets with the debug packet command:
--------- END OF PACKET ---------
dhcpd Configures the DHCP server.
Syntax Description
address ip1 [ip2] The IP pool address range. The size of the pool is limited to 32 addresses with a 10-user license and 128 addresses with a 50-user license on the PIX 501. The unlimited user license on the PIX 501 and all other PIX Firewall platforms support 256 addresses. If the address pool range is larger than 253 addresses, the netmask of the PIX Firewall interface cannot be a Class C address (for example, 255.255.255.0) and hence needs to be something larger, for example, 255.255.254.0.
auto_config Enables the PIX Firewall to automatically configure DNS, WINS, and domain name values from the DHCP client to the DHCP server. If the user also specifies dns, wins, and domain parameters, then the CLI parameters overwrite the auto_config parameters.
binding The binding information for a given server IP address and its associated client hardware address and lease length.
code Specifies the DHCP option code, either 66 or 150.
dns dns1 [dns2] The IP addresses of the DNS servers for the DHCP client.
A second server address is optional.
domain domain_name The DNS domain name. For example, example.com.
if_name Specifies the interface on which to enable the DHCP server.
lease lease_length The length of the lease, in seconds, granted to the DHCP client by the DHCP server. The lease indicates how long the client can use the assigned IP address. The default is 3600 seconds. The minimum lease length is 300 seconds, and the maximum lease length is 2,147,483,647 seconds.
option 150 Specifies the TFTP server IP address(es) designated for Cisco IP Phones in dotted-decimal format. DHCP option 150 is site-specific; it gives the IP addresses of a list of TFTP servers.
option 66 Specifies the TFTP server IP address designated for Cisco IP Phones and gives the IP address or the host name of a single TFTP server.
outside The outside interface of the firewall.
ping_timeout Allows the configuration of the timeout value of a ping, in milliseconds, before assigning an IP address to a DHCP client.
server_ip(1,2) Specifies the IP address(es) of a TFTP server.
server_ip_str Specifies the TFTP server in dotted-decimal format, such as 1.1.1.1, but is treated as a character string by the PIX Firewall DHCP server.
server_name Specifies an ASCII character string representing the TFTP server.
statistics Statistical information, such as address pool, number of bindings, malformed messages, sent messages, and received messages.
wins wins1 [wins2] The IP addresses of the Microsoft NetBIOS name servers (WINS servers). The second server address is optional.
Usage Guidelines A DHCP server provides network configuration parameters to a DHCP client. Support for the DHCP server within the PIX Firewall means the PIX Firewall can use DHCP to configure connected clients. This DHCP feature is designed for the remote home or branch office that will establish a connection to an enterprise or corporate network.
See the Cisco PIX Firewall and VPN Configuration Guide for information on how to implement the DHCP server feature into the PIX Firewall.
You must specify an interface name, if_name, for all DHCP server commands when using PIX Firewall software Version 6.3. In earlier software versions, only the inside interface could be configured as the DHCP server, so there was no need to specify if_name.
Note The PIX Firewall DHCP server does not support BOOTP requests and failover configurations.
The dhcpd address ip1[-ip2] if_name command specifies the DHCP server address pool. The address pool of a PIX Firewall DHCP server must be within the same subnet as the enabled PIX Firewall interface, and you must specify the associated PIX Firewall interface with if_name. In other words, the client must be physically connected to the subnet of a PIX Firewall interface. The size of the pool is limited to 32 addresses with a 10-user license and 128 addresses with a 50-user license on the PIX 501. The unlimited user license on the PIX 501 and all other PIX Firewall platforms support 256 addresses.
Note When the PIX Firewall responds to a DHCP client request, it uses the IP address of the interface where the request was received as the default gateway in the response. It uses the subnet mask on that interface for the subnet mask in its response.
Use caution with names that contain a “-” (dash) character because the dhcpd address command interprets the last (or only) “-” character in the name as a range specifier instead of as part of the name. For example, the dhcpd address command treats the name “host-net2” as a range from “host” to “net2”. If the name is “host-net2-section3”, then it is interpreted as a range from “host-net2” to “section3”.
The no dhcpd address command removes the DHCP server address pool you configured.
The dhcpd dns command specifies the IP address(es) of the DNS server(s) for the DHCP client. You have the option to specify two DNS servers.
The no dhcpd dns command removes the DNS IP address(es) from your configuration.
The dhcpd wins command specifies the addresses of the WINS servers for the DHCP client. The no dhcpd wins command removes the WINS server IP address(es) from your configuration.
The dhcpd lease command specifies the length of the lease, in seconds, granted to the DHCP client. This lease indicates how long the DHCP client can use the IP address that the DHCP server assigned. The no dhcpd lease command removes the lease length that you specified from your configuration and replaces this value with the default value of 3600 seconds.
The dhcpd domain command specifies the DNS domain name for the DHCP client. For example, example.com. The no dhcpd domain command removes the DNS domain server from your configuration.
The dhcpd enable if_name command enables the DHCP daemon to begin to listen for DHCP client requests on the DHCP-enabled interface. The no dhcpd enable command disables the DHCP server feature on the specified interface. DHCP must be enabled to use this command. Use the dhcpd enable if_name command to turn on DHCP.
Note The PIX Firewall DHCP server daemon does not support clients that are not directly connected to a firewall interface, and the interface must be configured to retrieve DHCP client information (with the dhcprelay enable client_ifc command).
The dhcpd option 66 | 150 command provides TFTP server address information for Cisco IP Phone connections. When a DHCP client request arrives at the PIX Firewall DHCP server, the PIX Firewall places the value(s) specified by the dhcpd option 66 | 150 command in the response. Use the dhcpd option code command as follows:
• If the TFTP server for Cisco IP Phone connections is located on the inside interface, use the local IP address of the TFTP server in the dhcpd option command.
• If the TFTP server is located on a less secure interface, create a group of nat, global, and access-list command statements for the inside IP phones, and use the actual IP address of the TFTP server in the dhcpd option command.
• If the TFTP server is located on a more secure interface, create a group of static and access-list command statements for the TFTP server and use the global IP address of the TFTP server in the dhcpd option command.
The show dhcpd command displays the dhcpd command statements, binding information, and statistics information associated with all of the dhcpd commands. The clear dhcpd command clears all of the dhcpd commands, binding, and statistics information.
The debug dhcpd event command displays event information about the DHCP server. The debug dhcpd packet command displays packet information about the DHCP server. Use the no form of the debug dhcpd commands to disable debugging.
Examples The following partial configuration example shows how to use the dhcpd address, dhcpd dns, and dhcpd enable if_name commands to configure an address pool for the DHCP clients and a DNS server address for the DHCP client, and how to enable the dmz interface of the PIX Firewall for the DHCP server function.
dhcpd address 10.0.1.100-10.0.1.108 dmz
dhcpd dns 209.165.200.226
dhcpd enable dmz
The following partial configuration example shows how to define a DHCP pool of 253 addresses and use the auto_config command to configure the DNS, WINS, and DOMAIN parameters.
Note that the dmz interface of the firewall is configured as the DHCP server, and the netmask of the dmz interface is 255.255.254.0:
ip address dmz 10.0.1.1 255.255.254.0
dhcpd address 10.0.1.2-10.0.1.254 dmz
dhcpd auto_config outside
dhcpd enable dmz
The following partial configuration example shows how to use three new features that are associated with each other: DHCP server, DHCP client, and PAT using the interface IP address, to configure a PIX Firewall in a small office, home office (SOHO) environment with the inside interface as the DHCP server:
! use dhcp to configure the outside interface and default route
ip address outside dhcp setroute
! enable dhcp server daemon on the inside interface
ip address inside 10.0.1.2 255.255.255.0
dhcpd address 10.0.1.100-10.0.1.108 inside
dhcpd dns 209.165.201.2 209.165.202.129
dhcpd wins 209.165.201.5
dhcpd lease 3600
dhcpd domain example.com
dhcpd enable inside
! use outside interface IP as PAT global address
nat (inside) 1 0 0
global (outside) 1 interface
The following is sample output from the show dhcpd binding command:
pixfirewall(config)# show dhcpd binding
IP Address Hardware Address Lease Expiration Type
10.0.1.100 0100.a0c9.868e.43 84985 seconds automatic
The following is sample output from the show dhcpd statistics command:
show dhcpd statistics
Address Pools 1
Automatic Bindings 1
Expired Bindings 1
Malformed messages 0
Message Received
BOOTREQUEST 0
DHCPDISCOVER 1
DHCPREQUEST 2
DHCPDECLINE 0
DHCPRELEASE 0
DHCPINFORM 0
Message Sent
BOOTREPLY 0
DHCPOFFER 1
DHCPACK 1
DHCPNAK 1
Related Commands
ip address Configures the IP address and mask for an interface, or defines a local address pool.
dhcprelay Configures the DHCP relay agent, which relays requests between the firewall interface of the DHCP server and DHCP clients on a different firewall interface.
Syntax Description
client_ifc The name of the interface on which the DHCP relay agent accepts client requests.
dhcp_server_ip The IP address of the DHCP server to which the DHCP relay agent forwards client requests.
enable Enables the DHCP relay agent to accept DHCP requests from clients on the specified interface.
seconds The number of seconds allowed for DHCP relay address negotiation.
server_ifc The name of the firewall interface on which the DHCP server resides.
statistics The DHCP relay statistics, incremented until a clear dhcprelay statistics command is issued.
Command Modes Configuration mode. The show dhcprelay commands are also available in privileged mode.
Usage Guidelines Use the dhcprelay enable, dhcprelay server, and dhcprelay timeout commands to configure the DHCP relay agent to relay requests between the firewall interface of the DHCP server and DHCP clients on a different firewall interface.
Note Use network extension mode for DHCP clients whose DHCP server is on the other side of an Easy VPN tunnel. Otherwise, if the DHCP client is behind a PIX Firewall VPN Easy Remote device connected to an Easy VPN Server using client mode, then the DHCP client will not be able to get a DHCP IP address from the DHCP server on the other side of the Easy VPN Server.
dhcprelay enable
For the firewall to start the DHCP relay agent with the dhcprelay enable client_ifc command, you must have a dhcprelay server command already in your configuration. Otherwise, the firewall displays an error message similar to the following:
DHCPRA:Warning - There are no DHCP servers configured!
No relaying can be done without a server!
Use the 'dhcprelay server <server_ip> <server_ifc>' command
The dhcprelay enable client_ifc command starts a DHCP server task on the specified interface. If this dhcprelay enable command is the first dhcprelay enable command to be issued, and there are dhcprelay server commands in the configuration, then the ports for the DHCP servers referenced are opened and the DHCP relay task starts.
When a dhcprelay enable client_ifc command is removed with a no dhcprelay enable client_ifc command, the DHCP server task for that interface stops. When the dhcprelay enable command being removed is the last dhcprelay enable command in the configuration, all of the ports for the servers specified in the dhcprelay server commands are closed and the DHCP relay task stops.
dhcprelay server
Add at least one dhcprelay server command to your firewall configuration before you enter a dhcprelay enable command or the firewall will issue an error message. The dhcprelay server command opens UDP port 67 on the specified interface for the specified server and starts the DHCP relay task as soon as a dhcprelay enable command is added to the configuration. If there is no dhcprelay enable command in the configuration, then the sockets are not opened and the DHCP relay task does not start. When a dhcprelay server dhcp_server_ip [server_ifc] command is removed, the port for that server is closed. If the dhcprelay server command being removed is the last dhcprelay server command in the configuration, then the DHCP relay task stops.
dhcprelay setroute
The dhcprelay setroute client_ifc command enables you to configure the DHCP relay agent to change the first default router address (in the packet sent from the DHCP server) to the address of client_ifc. That is, the DHCP relay agent substitutes the address of the default router with the address of client_ifc. If there is no default router option in the packet, the firewall adds one containing the address of client_ifc. This allows the client to set its default route to point to the firewall. When the dhcprelay setroute client_ifc command is not configured (and there is a default router option in the packet), the packet passes through the firewall with the router address unaltered.
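Taken together, the relay-agent commands can be combined as in the following sketch; the interface names and server address here are illustrative, assuming the DHCP server sits on the outside interface and clients on the inside:

```
dhcprelay server 209.165.201.5 outside
dhcprelay enable inside
dhcprelay setroute inside
dhcprelay timeout 90
```

With dhcprelay setroute inside, clients receive the firewall's inside interface address as their default router rather than whatever router address the DHCP server placed in the packet.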
dhcprelay timeout
The dhcprelay timeout command sets the amount of time, in seconds, allowed for responses from the DHCP server to pass to the DHCP client through the relay binding structure.
no dhcprelay commands
The no dhcprelay enable client_ifc command removes the DHCP relay agent configuration for the interface specified by client_ifc only. The no dhcprelay server dhcp_server_ip [server_ifc] command removes the DHCP relay agent configuration for the DHCP server specified by dhcp_server_ip [server_ifc] only.
show dhcprelay
The show dhcprelay command displays the DHCP relay agent configuration, and the show dhcprelay statistics command displays counters for the packets relayed by the DHCP relay agent. The clear dhcprelay command clears all DHCP relay configurations. The clear dhcprelay statistics command clears the show dhcprelay statistics counters.
Examples The following example configures the DHCP relay agent for a DHCP server with the IP address of 10.1.1.1 on the outside interface of the firewall and client requests on the inside interface of the firewall, and sets the timeout value to 90 seconds:
pixfirewall(config)# dhcprelay server 10.1.1.1 outside
pixfirewall(config)# show dhcprelay
dhcprelay server 10.1.1.1 outside
dhcprelay timeout 50
The following example shows how to disable the DHCP relay agent if there is only one dhcprelay enable command in the configuration:
pixfirewall(config)# no dhcprelay enable
pixfirewall(config)# show dhcprelay
dhcprelay server 10.1.1.1 outside
dhcprelay timeout 60
The following is sample output from the show dhcprelay statistics command:
pixfirewall(config)# show dhcprelay statistics
Packets Relayed
BOOTREQUEST 0
DHCPDISCOVER 7
DHCPREQUEST 3
DHCPDECLINE 0
DHCPRELEASE 0
DHCPINFORM 0
BOOTREPLY 0
DHCPOFFER 7
DHCPACK 3
DHCPNAK 0
disable Exit privileged mode and return to unprivileged mode.
enable
disable
Syntax Description
enable Enter this at the PIX Firewall command-line interface prompt to enter privileged mode.
disable Enter this at the PIX Firewall command-line interface prompt to exit privileged mode.
Usage Guidelines Use the enable command to enter privileged mode. The disable command exits privileged mode and returns you to unprivileged mode.
domain-name Change the IPSec domain name.
domain-name name
Usage Guidelines The domain-name command lets you change the IPSec domain name.
Note A change to the domain name changes the fully qualified domain name. Once the fully qualified domain name is changed, delete the RSA key pairs using the ca zeroize rsa command, and delete related certificates using the no ca identity ca_nickname command.
dynamic-map View or delete a dynamic crypto map entry. To configure crypto dynamic map entries, see the crypto dynamic-map command.
clear dynamic-map
show dynamic-map
Usage Guidelines The clear dynamic-map command removes dynamic-map commands from the configuration. The show dynamic-map command lists the dynamic-map commands in the configuration.
Note The dynamic-map command is the same as the crypto dynamic-map command. Refer to the crypto dynamic-map command page for more information such as examples and other command options.
eeprom This command applies only to PIX 525 models with serial numbers 44480380055 through 44480480044. Displays and updates the contents of the EEPROM non-volatile storage devices used for low-level Ethernet interface configuration information.
eeprom update
show eeprom
Syntax Description
eeprom update Checks the EEPROM register settings and, if necessary, resets them to the correct values.
Usage Guidelines The eeprom commands added in Version 5.2(4) and higher fix a caveat (CSCds76768) involving corruption of the EEPROM on the onboard Ethernet interfaces.
For additional information, see the December 20, 2000 Field Notice, “Cisco Secure PIX Firewall: PIX-525 Ethernet EEPROM Programming Issue.” This field notice is available at the following website:
The problem is summarized as follows: If you configure the onboard Ethernet interfaces (ethernet0 and ethernet1) on a PIX 525 with a serial number of 44480380055 through 44480480044 to full duplex, interface errors and throughput reductions may occur. If you configure the interfaces to half duplex or to auto-sense, the speed and duplex function normally without error.
The eeprom command is designed to fix this problem and performs the same function as the "eedisk" utility without requiring access to the ROM monitor mode. The command has two variants: the show eeprom command indicates whether the Ethernet EEPROM programming is correct, and the eeprom update command modifies the settings if necessary.
The eeprom update command verifies the EEPROM register settings and, if they contain the incorrect values causing CSCds76768, resets them to the correct values. It does not update the settings if they are correct, and it does not recommend a reboot unless the settings are changed. If the eeprom update command does update the EEPROM settings, a reboot of the PIX Firewall is recommended. Each register is 16 bits.
The correct register values are as follows:
Examples The show eeprom command displays the current EEPROM register settings:
pix525# show eeprom
eeprom settings for ifc0:
reg0: 0x5000 reg1: 0xfe54 reg2: 0x65f6 reg3: 0x3
reg5: 0x201 reg6: 0x4702 reg10: 0x40c0 reg12: 0x8086
eeprom settings for ifc1:
reg0: 0x5000 reg1: 0xfe54 reg2: 0x66f6 reg3: 0x3
reg5: 0x201 reg6: 0x4702 reg10: 0x40c0 reg12: 0x8086
If the command is run on a unit that is not a PIX 525, the following will be seen:
pix515# show eeprom
This unit is not a PIX-525.
Type help or '?' for a list of available commands.
If the update needs to be run on the PIX 525, the eeprom update command returns the following:
pix525# eeprom update
eeprom settings on ifc0 are being reset to defaults:
reg0: 0x5000 reg1: 0xfe54 reg2: 0x65f6 reg3: 0x3
reg5: 0x201 reg6: 0x4701 reg10: 0x40c0 reg12: 0x8086
eeprom settings on ifc1 are being reset to defaults:
reg0: 0x5000 reg1: 0xfe54 reg2: 0x66f6 reg3: 0x3
reg5: 0x201 reg6: 0x4701 reg10: 0x40c0 reg12: 0x8086
*** WARNING! *** WARNING! *** WARNING! *** WARNING! ***
The system should be restarted as soon as possible.
*** WARNING! *** WARNING! *** WARNING! *** WARNING! ***
If the update has been run successfully, the eeprom command output will appear as follows:
pix525# eeprom update
eeprom settings on ifc0 are already up to date:
reg0: 0x5000 reg1: 0xfe54 reg2: 0x65f6 reg3: 0x3
reg5: 0x201 reg6: 0x4701 reg10: 0x40c0 reg12: 0x8086
eeprom settings on ifc1 are already up to date:
reg0: 0x5000 reg1: 0xfe54 reg2: 0x66f6 reg3: 0x3
reg5: 0x201 reg6: 0x4701 reg10: 0x40c0 reg12: 0x8086
enable Start privileged mode or access privilege levels.
enable [priv_level]
disable [priv_level]
show enable
Command Modes Unprivileged mode for enable, and configuration mode for enable password.
Usage Guidelines The enable command starts privileged mode. The PIX Firewall prompts you for your privileged mode password.
By default, a password is not required; press the Enter key at the Password prompt to start privileged mode. Use the disable command to exit privileged mode. Use the enable password command to change the password.
The enable password command changes the privileged mode password, for which you are prompted after you enter the enable command. When the PIX Firewall starts and you enter privileged mode, the password prompt appears. There is no default password (press the Enter key at the Password prompt). You can return the enable password to its original value (press the Enter key at the prompt) by entering the following command:
pixfirewall# enable password
pixfirewall#
Note If you change the password, write it down and store it in a manner consistent with your site’s security policy. Once you change this password, you cannot view it again. Also, ensure that all who access the PIX Firewall console are given this password.
Use the passwd command to set the password for Telnet access to the PIX Firewall console. The default passwd value is cisco. See the passwd command page for more information.
If no privilege level name is specified, then the highest privilege level is assumed. The show enable command displays the password configuration for privilege levels.
Examples The following example shows how to start privileged mode with the enable command and then configuration mode with the configure terminal command.
pixfirewall> enable
Password:
pixfirewall# configure terminal
pixfirewall(config)#
The following examples show how to start privileged mode with the enable command, change the enable password with the enable password command, enter configuration mode with the configure terminal command, and display the contents of the current configuration with the write terminal command:
pixfirewall> enable
Password:
pixfirewall# enable password w0ttal1fe
pixfirewall# configure terminal
pixfirewall(config)# write terminal
Building configuration...
...
enable password 2oifudsaoid.9ff encrypted
...
The following example shows how to configure enable passwords for levels other than the default level of 15:
pixfirewall(config)# enable password cisco level 10
However, notice that defining passwords for other privilege levels does not change or remove the level 15 password.
established Permit return connections on ports other than those used for the originating connection based on an established connection.
[no] established protocol dest_port [src_port] [permitto protocol port[-port]] [permitfrom protocol port[-port]]
clear established
show established
Syntax Description
dest_port Specifies the destination port to use for the established connection lookup. This is the originating traffic's destination port and may be specified as 0 if the protocol does not specify which destination port(s) will be used. Use wildcard ports (0) only when necessary.
permitfrom Used to specify the return traffic's protocol and from which source port(s) the traffic will be permitted.
permitto Used to specify the return traffic's protocol and to which destination port(s) the traffic will be permitted.
src_port Specifies the source port to use for the established connection lookup. This is the originating traffic's source port and may be specified as 0 if the protocol does not specify which source port(s) will be used. Use wildcard ports (0) only when necessary.
Usage Guidelines The established command allows return connections for outbound connections to pass back through the PIX Firewall. This command works with two connections: an original connection outbound from a network protected by the PIX Firewall and a return connection inbound from the external host between the same two devices. The first protocol, destination port, and optional source port specified are for the initial outbound connection. The permitto and permitfrom options refine the return inbound connection.
Note We recommend that you always specify the established command with the permitto and permitfrom options.
Without these options, the use of the established command opens a security hole that can be exploited for attack of your internal systems. See the “Security Problem” section that follows for more information.
The permitto option lets you specify a new protocol or port for the return connection at the PIX Firewall. The permitfrom option lets you specify a new protocol or port at the remote server.
The no established command disables the established feature. The clear established command removes all established command statements from your configuration.
Note For the established command to work properly, the client must listen on the port specified with the permitto option.
You can use the established command with the nat 0 command statement (where there are no global command statements).
Note The established command cannot be used with Port Address Translation (PAT).
This command works as though it were written: “If there exists a connection between two hosts using protocol A from source port B destined for port C, permit return connections through the PIX Firewall via protocol D (D can be different from A), if the source port(s) correspond to F and the destination port(s) correspond to E.” For example:
established tcp 6060 0 permitto tcp 6061 permitfrom tcp 6059
In this case, if a connection is started by an internal host to an external host using TCP source port 6060 and any destination port, the PIX Firewall permits return traffic between the hosts via TCP destination port 6061 and TCP source port 6059. For example:
established udp 0 6060 permitto tcp 6061 permitfrom tcp 1024-65535
In this case, if a connection is started by an internal host to an external host using UDP destination port 6060 and any source port, the PIX Firewall permits return traffic between the hosts via TCP destination port 6061 and TCP source port 1024-65535.
Security Problem
The established command has been enhanced to optionally specify the destination port used for connection lookups.
Only the source port could be specified previously, with the destination port being 0 (a wildcard). This addition allows more control over the command and provides support for protocols where the destination port is known, but the source port is not. Without the permitto and permitfrom options, external systems to which connections are made could make unrestricted connections to the internal host involved in the connection. The following are examples of potentially serious security violations that could be allowed when using the established command. For example:
established tcp 0 4000
In this example, if an internal system makes a TCP connection to an external host on port 4000, then the external host could come back in on any port using any protocol:
established tcp 0 0
(Same as the established tcp 0 command in previous releases.)
Examples The following example occurs when a local host 10.1.1.1 starts a TCP connection on port 9999 to a foreign host 209.165.201.1. The example allows packets from the foreign host 209.165.201.1 on port 4242 back to local host 10.1.1.1 on port 5454.
established tcp 9999 permitto tcp 5454 permitfrom tcp 4242
The next example allows packets from foreign host 209.165.201.1 on any port back to local host 10.1.1.1 on port 5454:
established tcp 9999 permitto tcp 5454
XDMCP Support
PIX Firewall now provides support for XDMCP (X Display Manager Control Protocol) with assistance from the established command. XDMCP is on by default, but will not complete the session unless the established command is used. For example:
established tcp 0 6000 permitto tcp 6000 permitfrom tcp 1024-65535
This enables the internal XDMCP-equipped (UNIX or ReflectionX) hosts to access external XDMCP-equipped XWindows servers. UDP/177-based XDMCP negotiates a TCP-based XWindows session, and subsequent TCP back connections will be permitted. Because the source port(s) of the return traffic is unknown, the src_port field should be specified as 0 (wildcard).
The destination port, dest_port, will typically be 6000, the well-known XServer port. More generally, the dest_port should be 6000 + n, where n represents the local display number. Use the following UNIX command to change this value:

setenv DISPLAY hostname:displaynumber.screennumber

The established command is needed because many TCP connections are generated (based on user interaction) and the source port for these connections is unknown. Only the destination port will be static. The PIX Firewall does XDMCP fixups transparently. No configuration is required, but the established command is necessary to accommodate the TCP session. Be advised that using applications like this through the PIX Firewall may open up security holes. The XWindows system has been exploited in the past, and newly introduced exploits are likely to be discovered.

exit

Exit an access mode.

exit enable

Usage Guidelines

Use the exit command to exit from an access mode. This command is the same as the quit command.

Examples

The following example shows how to exit configuration mode and then privileged mode:

pixfirewall(config)# exit
pixfirewall# exit
pixfirewall>

failover

Enable or disable the PIX Firewall failover feature on a standby PIX Firewall.

failover reset

Syntax Description

act_mac The interface MAC address for the active PIX Firewall.
active Make a PIX Firewall the active unit. Use this command when you need to force control of the connection back to the unit you are accessing, such as when you want to switch control back from a unit after you have fixed a problem and want to restore service to the primary unit. Either enter the no failover active command on the secondary unit to switch service to the primary or the failover active command on the primary unit.
detail Displays LAN-based failover configuration information.
enable Enables LAN-based failover; otherwise, serial cable failover is used.
if_name The interface name for the failover IP address.
ip_address The IP address used by the standby unit to communicate with the active unit. Use this IP address with the ping command to check the status of the standby unit. This address must be on the same network as the system IP address. For example, if the system IP address is 192.159.1.3, set the failover IP address to 192.159.1.4.
key Enables encryption and authentication of LAN-based failover messages between PIX Firewalls.
key_secret The shared secret key.
lan Specifies LAN-based failover.
lan interface lan_if_name The name of the firewall interface dedicated to LAN-based failover. The interface name of a VLAN logical interface cannot be used for lan_if_name.
link Specify the interface where a Fast Ethernet or Gigabit LAN link is available for Stateful Failover. A VLAN logical interface cannot be used.
mif_name The name of the interface to set the MAC address.
poll seconds Specifies the failover poll interval in seconds.
primary Specifies the primary PIX Firewall to use for LAN-based failover.
replicate http The [no] failover replicate http command allows the stateful replication of HTTP sessions in a Stateful Failover environment. The no form of this command disables HTTP replication in a Stateful Failover configuration. When HTTP replication is enabled, the show failover command displays the failover replicate http command configuration.
secondary Specifies the secondary PIX Firewall to use for LAN-based failover.
stateful_if_name In addition to the failover cable, a dedicated Fast Ethernet or Gigabit LAN link is required to support Stateful Failover. The interface name of a VLAN logical interface cannot be used for stateful_if_name.
stn_mac The interface MAC address for the standby PIX Firewall.

Usage Guidelines

The default failover setup uses serial cable failover. LAN-based failover requires explicit LAN-based failover configuration. Additionally, for LAN-based failover, you must install a dedicated 100 Mbps or Gigabit Ethernet, full-duplex VLAN switch connection for failover operations.
Failover is not supported using a crossover Ethernet cable between two PIX Firewall units.

Note The PIX 506/506E cannot be used for failover in any configuration. The primary unit in the PIX 515/515E, PIX 525, or PIX 535 failover pair must have an Unrestricted (UR) license. The secondary unit can have a Failover (FO) or UR license. However, the failover pair must be two otherwise identical units with the same PIX Firewall hardware and software.

For a Stateful Failover link, use the mtu command to set the interface maximum transmission unit (MTU) to 1500 bytes or greater. For serial cable failover, use the failover command without an argument after you connect the optional failover cable between your primary PIX Firewall and a secondary PIX Firewall. For LAN-based failover, use the failover lan commands. The show failover lan command displays LAN-based failover information (only), and show failover lan detail supplies debugging information for your LAN-based failover configuration.

Note Refer to the Cisco PIX Firewall and VPN Configuration Guide for configuration information.

For failover, the PIX Firewall requires that you configure any unused interfaces with one of the following methods:

• Shut down the interface and do not configure its IP or failover IP address. If these addresses are configured, use the no ip address and no failover ip address commands to remove the configuration.
• Configure the interface like other interfaces but use a crossover Ethernet cable to connect the interface to the standby unit. Do not connect the interface to an external switch or hub device.

Set the speed of the Stateful Failover dedicated interface to 100full for a Fast Ethernet interface or 1000fullsx for a Gigabit Ethernet interface.

Use the no failover active command on the active unit when you need to take that unit off line for maintenance. Because the standby unit does not keep state information on each connection, all active connections will be dropped and must be re-established by the clients. Use the failover link command to enable Stateful Failover.
Enter the no failover link command to disable the Stateful Failover feature. If a failover IP address has not been entered, the show failover command will display 0.0.0.0 for the IP address, and monitoring of the interfaces will remain in "waiting" state. A failover IP address must be set for failover to work.

The failover mac address command enables you to configure a virtual MAC address for a PIX Firewall failover pair. The failover mac address command sets the PIX Firewall to use the virtual MAC address stored in the PIX Firewall configuration after failover, instead of obtaining a MAC address by contacting its failover peer. This enables the PIX Firewall failover pair to maintain the correct MAC addresses after failover. If a virtual MAC address is not specified, the PIX Firewall failover pair uses the burned-in network interface card (NIC) address as the MAC address. However, the failover mac address command is unnecessary (and therefore cannot be used) on an interface configured for LAN-based failover because the failover lan interface lan_if_name command does not change the IP and MAC addresses when failover occurs.

When adding the failover mac address command to your configuration, it is best to configure the virtual MAC address, save the configuration to Flash memory, and then reload the PIX Firewall pair. If the virtual MAC address is added when there are active connections, then those connections will stop. Also, you must write the complete PIX Firewall configuration, including the failover mac address command, into the Flash memory of the secondary PIX Firewall for the virtual MAC addressing to take effect.

The failover poll seconds command lets you determine how long failover waits between the hello packets exchanged by the two units; this interval is reported as the Poll frequency in the show failover command output. When a failover cable connects two PIX Firewall units, the no failover command now disables failover until you enter the failover command to explicitly enable failover.
Previously, when the failover cable connected two PIX Firewall units and you entered the no failover command, failover would automatically re-enable after 15 seconds. You can also view the information from the show failover command using SNMP. Refer to the Cisco PIX Firewall and VPN Configuration Guide for more information on configuring failover.

Usage Notes

1. LAN-based failover requires a dedicated interface, but the same interface can also be used for Stateful Failover. However, the interface needs enough capacity to handle both the LAN-based failover and Stateful Failover traffic; otherwise, use two separate dedicated interfaces.
2. If you reboot the PIX Firewall without entering the write memory command and the failover cable is connected, failover mode automatically enables. The Current IP Addresses are the same as the System IP Addresses on the failover active unit. When the primary unit fails, the Current IP Addresses become those of the standby unit.

LAN-Based Failover

To make sure LAN-based failover starts properly, follow these configuration steps:

Step 1 Configure the primary PIX Firewall unit before connecting the failover LAN interface.
Step 2 Save the primary unit configuration to Flash memory.
Step 3 Configure the PIX Firewall secondary unit using the appropriate failover lan commands before connecting the LAN-based failover interface.
Step 4 Save the secondary unit configuration to Flash memory.
Step 5 Reboot both units and connect the LAN-based failover interfaces to the designated failover switch, hub, or VLAN.
Step 6 If any item in a failover lan command needs to be changed, then disconnect the LAN-based failover interface, and repeat the preceding steps.

Note When properly configured, the LAN-based failover configurations for your primary and secondary PIX Firewall units should be different, reflecting which is primary and which is secondary.

The following example outlines how to configure LAN-based failover between two PIX Firewall units.
Primary PIX Firewall configuration:

pix(config)# nameif ethernet0 outside security0
pix(config)# nameif ethernet1 inside security100
pix(config)# nameif ethernet2 stateful security20
pix(config)# nameif ethernet3 lanlink security30
pix(config)# interface ethernet0 100full
pix(config)# interface ethernet1 100full
pix(config)# interface ethernet2 100full
pix(config)# interface ethernet3 100full
pix(config)# interface ethernet4 100full
pix(config)# ip address outside 172.23.58.70 255.255.255.0
pix(config)# ip address inside 10.0.0.2 255.255.255.0
pix(config)# ip address stateful 10.0.1.2 255.255.255.0
pix(config)# ip address lanlink 10.0.2.2 255.255.255.0
pix(config)# failover ip address outside 172.23.58.51
pix(config)# failover ip address inside 10.0.0.4
pix(config)# failover ip address stateful 10.0.1.4
pix(config)# failover ip address lanlink 10.0.2.4
pix(config)# failover
pix(config)# failover poll 15
pix(config)# failover lan unit primary
pix(config)# failover lan interface lanlink
pix(config)# failover lan key 12345678
pix(config)# failover lan enable

Secondary PIX Firewall configuration:

pix2(config)# nameif ethernet3 lanlink security30
pix2(config)# interface ethernet3 100full

The following example illustrates how to use the failover mac address command:

ip address outside 172.23.58.50 255.255.255.224
ip address inside 192.168.2.11 255.255.255.0
ip address intf2 192.168.10.11 255.255.255.0
failover
failover ip address outside 172.23.58.51
failover ip address inside 192.168.2.12
failover ip address intf2 192.168.10.12
failover mac address outside 00a0.c989.e481 00a0.c969.c7f1
failover mac address inside 00a0.c976.cde5 00a0.c922.9176
failover mac address intf2 00a0.c969.87c8 00a0.c918.95d8
failover link intf2

The output of the show failover command includes a section for LAN-based failover if it is enabled, as follows:

pix(config)# show failover
Failover On
Cable status: Unknown
Reconnect timeout 0:00:00
Poll frequency 15 seconds
Last Failover at:
18:32:16 UTC Mon Apr 7 2003
This host: Primary - Standby
Active time: 255 (sec)
Interface outside (192.168.1.232): Normal
Interface inside (192.168.5.2): Normal
Other host: Secondary - Active
Active time: 256305 (sec)
Interface outside (192.168.1.231): Normal
Interface inside (192.168.5.1): Normal

The show failover lan command displays only the LAN-based failover section, as follows:

pix(config)# show failover lan
Lan Based Failover is Active
interface dmz (209.165.200.226): Normal, peer (209.165.201.1): Normal

The show failover lan detail command is used mainly for debugging purposes and displays information similar to the following:

pix(config)# show failover lan detail
Lan Failover is Active
This Pix is Primary
Command Interface is dmz
Peer Command Interface IP is 209.165.201.1
My interface status is 0x1
Peer interface status is 0x1
Peer interface downtime is 0x0
Total msg send: 103093, rcvd: 103031, droped: 0, retrans: 13, send_err: 0
Total/Cur/Max of 51486:0:5 msgs on retransQ
...
LAN FO cmd queue, count: 0, head: 0x0, tail: 0x0
Failover config state is 0x5c
Failover config poll cnt is 0
Failover pending tx msg cnt is 0
Failover Fmsg cnt is 0

filter

Enables, disables, or displays URL, Java, or ActiveX filtering.

[no] filter ftp dest-port local_ip local_mask foreign_ip foreign_mask [allow] [interact-block]
[no] filter url [http | port[-port]] local_ip local_mask foreign_ip foreign_mask [allow] [proxy-block] [longurl-truncate | longurl-deny] [cgi-truncate]
[no] filter url port | except local_ip mask foreign_ip mask [allow] [proxy-block] [longurl-truncate | longurl-deny] [cgi-truncate]
clear filter
show filter

Syntax Description

activex Block outbound ActiveX and other HTML <object> tags from outbound packets.
allow filter url only: When the server is unavailable, let outbound connections pass through the firewall without filtering.
If you omit this option, and if the N2H2 or Websense server goes off line, PIX Firewall stops outbound port 80 (Web) traffic until the N2H2 or Websense server is back on line.
cgi-truncate Sends a CGI script as an URL.
dest-port The destination port number.
except filter url only: creates an exception to a previous filter condition.
foreign_ip The IP address of the lowest security level interface to which access is sought. You can use 0.0.0.0 (or in shortened form, 0) to specify all hosts.
foreign_mask Network mask of foreign_ip. Always specify a specific mask value. You can use 0.0.0.0 (or in shortened form, 0) to specify all hosts.
ftp Enables File Transfer Protocol (FTP) filtering. Available with Websense URL filtering only.
http Specifies port 80. You can enter http or www instead of 80 to specify port 80.
https Enables HTTPS filtering. Available with Websense URL filtering only.
interact-block Prevents users from connecting to the FTP server through an interactive FTP program.
java Specifies to filter out Java applets returning from an outbound connection.
local_ip The IP address of the highest security level interface from which access is sought. You can set this address to 0.0.0.0 (or in shortened form, 0) to specify all hosts.
local_mask Network mask of local_ip. You can use 0.0.0.0 (or in shortened form, 0) to specify all hosts.
longurl-deny Denies the URL request if the URL is over the URL buffer size limit or the URL buffer is not available.
longurl-truncate Sends only the originating host name or IP address to the Websense server if the URL is over the URL buffer limit.
mask Any mask.
port The port that receives Internet traffic on the PIX Firewall. Typically, this is port 80, but other values are accepted. The http or url literal can be used for port 80.
proxy-block Prevents users from connecting to an HTTP proxy server.
url Filter Universal Resource Locators (URLs) from data moving through the PIX Firewall.
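Several of the options above treat 0.0.0.0 (or 0) as an all-hosts wildcard for the local_ip/local_mask and foreign_ip/foreign_mask pairs. That address matching can be sketched with Python's standard ipaddress module; this is an illustrative model of the semantics described above, not PIX Firewall code.

```python
# Illustrative sketch of filter command address matching:
# an address/mask pair of 0.0.0.0/0.0.0.0 matches every host.
import ipaddress

def addr_matches(rule_ip, rule_mask, host):
    """True if host falls within rule_ip/rule_mask."""
    net = ipaddress.ip_network(f"{rule_ip}/{rule_mask}", strict=False)
    return ipaddress.ip_address(host) in net

# filter url except 10.0.2.54 255.255.255.255 0 0
# -> exact local host 10.0.2.54, any foreign host
print(addr_matches("10.0.2.54", "255.255.255.255", "10.0.2.54"))  # True
print(addr_matches("10.0.2.54", "255.255.255.255", "10.0.2.55"))  # False
print(addr_matches("0.0.0.0", "0.0.0.0", "192.168.1.7"))          # True (wildcard)
```

The `strict=False` argument lets a host address with host bits set be interpreted as the network it belongs to, which mirrors how an address/mask pair in a filter statement is used.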
Usage Guidelines

The clear filter command removes all filter commands from the configuration.

filter activex

The filter activex command blocks outbound ActiveX and other HTML <object> tags from outbound packets.

Note The <object> tag is also used for Java applets, image files, and multimedia objects, which will also be blocked by the filter activex command. If the <object> or </object> HTML tags split across network packets or if the code in the tags is longer than the number of bytes in the MTU, the PIX Firewall cannot block the tag. ActiveX blocking does not occur when users access an IP address referenced by the alias command.

To specify that all outbound connections have ActiveX blocking, use the following command:

filter activex 80 0 0 0 0

This command specifies that the ActiveX blocking applies to Web traffic on port 80 from any local host and for connections to any foreign host.

filter java

The filter java command filters out Java applets that return to the PIX Firewall from an outbound connection. The user still receives the HTML page, but the web page source for the applet is commented out so that the applet cannot execute. Use 0 for the local_ip or foreign_ip IP addresses to mean all hosts.

Note If Java applets are known to be in <object> tags, use the filter activex command to remove them.

To specify that all outbound connections have Java applet blocking, use the following command:

filter java 80 0 0 0 0

This command specifies that the Java applet blocking applies to Web traffic on port 80 from any local host and for connections to any foreign host.

filter url

The filter url command lets you prevent outbound users from accessing World Wide Web URLs that you designate using the N2H2 or Websense filtering application.

Note The url-server command must be configured before issuing the filter command for HTTPS and FTP, and if all URL servers are removed from the server list, then all filter commands related to URL filtering are also removed.
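The behavior of the allow option when a URL server goes off line (described under the allow entry in the Syntax Description above) can be modeled as a small decision function: filter through the first on-line server, otherwise pass or block depending on allow. This is an illustrative sketch with invented names, not PIX Firewall code, and it assumes the fail-over-to-the-next-server behavior described in this section.

```python
# Illustrative model of outbound Web traffic handling when URL servers
# go off line. servers_online lists the configured URL servers in order.

def web_traffic_action(servers_online, allow):
    """Return ('filter', index) for the first on-line server;
    with no server on line, return ('pass', None) if allow is set,
    else ('block', None) -- traffic stops until a server returns."""
    for i, up in enumerate(servers_online):
        if up:
            return ("filter", i)
    return ("pass", None) if allow else ("block", None)

print(web_traffic_action([False, True], allow=False))   # ('filter', 1)
print(web_traffic_action([False, False], allow=True))   # ('pass', None)
print(web_traffic_action([False, False], allow=False))  # ('block', None)
```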
The allow option to the filter command determines how the PIX Firewall behaves in the event that the N2H2 or Websense server goes off line. If you use the allow option with the filter command and the N2H2 or Websense server goes off line, port 80 traffic passes through the PIX Firewall without filtering. Used without the allow option and with the server off line, PIX Firewall stops outbound port 80 (Web) traffic until the server is back on line, or if another URL server is available, passes control to the next URL server.

Note With the allow option set, PIX Firewall now passes control to an alternate server if the N2H2 or Websense server goes off line.

The N2H2 or Websense server works with the PIX Firewall to deny users access to websites based on the company security policy. Websense protocol Version 4 enables group and username authentication between a host and a PIX Firewall. The PIX Firewall performs a username lookup, and then the Websense server handles URL filtering and username logging. The N2H2 server must be a Windows workstation (2000, NT, or XP), running an IFP Server, with a recommended minimum of 512 MB of RAM. Also, the long URL support for the N2H2 service is capped at 3 KB, less than the cap for Websense. Websense protocol Version 4 contains the following enhancements:

• URL filtering allows the PIX Firewall to check outgoing URL requests against the policy defined on the Websense server.
• Username logging tracks username, group, and domain name on the Websense server.
• Username lookup enables the PIX Firewall to use the user authentication table to map the host's IP address to the username.

Follow these steps to filter URLs:

Step 1 Designate an N2H2 or Websense server with the appropriate vendor-specific form of the url-server command.
Step 2 Enable filtering with the filter command.
Step 3 If needed, improve throughput with the url-cache command. However, this command does not update Websense logs, which may affect Websense accounting reports.
Accumulate Websense run logs before using the url-cache command.
Step 4 Use the show url-cache stats and the show perfmon commands to view run information.

Examples

The following example filters all outbound HTTP connections except those from the 10.0.2.54 host:

url-server (perimeter) host 10.0.1.1
filter url 80 0 0 0 0
filter url except 10.0.2.54 255.255.255.255 0 0

The following example blocks all outbound HTTP connections destined to a proxy server that listens on port 8080:

filter url 8080 0 0 0 0 proxy-block

fixup protocol

Modifies PIX Firewall protocol fixups to add, delete, or change services and feature defaults.

clear fixup
show ctiqbe
show fixup
show h225
show h245
show h323-ras
show mgcp
show sip
show skinny

Syntax Description

ctiqbe Enables the Computer Telephony Interface Quick Buffer Encoding (CTIQBE) fixup. Used with Cisco TAPI/JTAPI applications.
dns Enables the DNS fixup.
esp-ike Enables PAT for Encapsulating Security Payload (ESP), single tunnel.
fixup protocol ils Provides support for Microsoft NetMeeting, SiteServer, and Active Directory products that use LDAP to exchange directory information with an ILS server.
fixup protocol protocol [port[-port]] Modifies PIX Firewall protocol fixups to add, delete, or change services and feature defaults.
ftp Specifies to change the ftp port number.
h323 h225 Specifies to use H.225, the ITU standard that governs H.225.0 session establishment and packetization, with H.323. H.225.0 actually describes several different protocols: RAS, use of Q.931, and use of RTP.
h323 ras Specifies to use RAS with H.323 to enable dissimilar communication devices to communicate with each other. H.323 defines a common set of CODECs, call setup and negotiating procedures, and basic data transport methods.
http [port[-port]] The default port for HTTP is 80. Use the port option to change the HTTP port, or the port-port option to specify a range of HTTP ports.
ils Specifies the Internet Locator Service.
The default port is TCP LDAP server port 389.
dns maximum-length length Specifies the maximum DNS packet length allowed. The default is 512 bytes.
mgcp Enables the Media Gateway Control Protocol (MGCP) fixup. (Use the mgcp command to configure additional support for the MGCP fixup.)
no Disables the fixup of a protocol by removing all fixups of the protocol from the configuration using the no fixup command. After removing all fixups for a protocol, the no fixup form of the command or the default port is stored in the configuration.
port The port on which to enable the fixup (application inspection); the port over which the designated protocol travels. You can use port numbers or supported port literals. The default ports are: TCP 21 for ftp, TCP LDAP server port 389 for ils, TCP 80 for http, TCP 1720 for h323 h225, UDP 1718-1719 for h323 ras, TCP 514 for rsh, TCP 554 for rtsp, TCP 2000 for skinny, TCP 25 for smtp, TCP 1521 for sqlnet, TCP 5060 for sip, and UDP 69 for TFTP. The default port value for rsh cannot be changed, but additional port statements can be added. See the "Ports" section in Chapter 2, "Using PIX Firewall Commands" for a list of valid port literal names.
port-port Specifies a port range.
pptp Enables Point-to-Point Tunneling Protocol (PPTP) application inspection. The default port is 1723.
protocol Specifies the protocol to fix up.
protocol_name The protocol name.
ras Registration, admission, and status (RAS) is a signaling protocol that performs registration, admissions, bandwidth changes, status, and disengage procedures between the VoIP gateway and the gatekeeper.
sip Enables or changes the port assignment for the Session Initiation Protocol (SIP) for Voice over IP TCP connections. UDP SIP is on by default; it can be disabled, but its port assignment is nonconfigurable. PIX Firewall Version 6.2 introduced PAT support for SIP.
skinny Enables SCCP application inspection. The default port is 2000.
SCCP protocol supports IP telephony and can coexist in an H.323 environment. An application layer ensures that all SCCP signaling and media packets can traverse the PIX Firewall and interoperate with H.323 terminals. Skinny is the short name for the Skinny Client Control Protocol (SCCP).
strict Prevents web browsers from sending embedded commands in FTP requests. Each FTP command must be acknowledged before a new command is allowed. Connections sending embedded commands are dropped.
tftp Enables TFTP application inspection. The default port is 69.
udp Specifies the UDP port number.

Command Modes

All fixup protocol commands are available in configuration mode unless otherwise specified. The show fixup protocol mgcp command is available in privileged mode.

Defaults

The default ports for the PIX Firewall fixup protocols are as follows: (These are the defaults that are enabled on a PIX Firewall running software Version 6.3(2).) The fixup for MGCP is disabled by default.

Usage Guidelines

The fixup protocol commands let you view, change, enable, or disable the use of a service or protocol through the PIX Firewall. The ports you specify are those that the PIX Firewall listens at for each respective service. You can change the port value for every service except rsh. The fixup protocol commands are always present in the configuration and are enabled by default. The fixup protocol command performs the Adaptive Security Algorithm based on different port numbers other than the defaults. This command is global and changes things for both inbound and outbound connections, and cannot be restricted to any static command statements.

The clear fixup command resets the fixup configuration to its default. It does not remove the default fixup protocol commands. You can disable the fixup of a protocol by removing all fixups of the protocol from the configuration using the no fixup command.
After you remove all fixups for a protocol, the no fixup form of the command or the default port is stored in the configuration. The no fixup protocol ctiqbe 2748 command disables the CTIQBE fixup.

The show ctiqbe command displays information about CTIQBE sessions established across the PIX Firewall. Along with debug ctiqbe and show local-host, this command is used for troubleshooting CTIQBE fixup issues.

Note We recommend that you have the pager command configured before using the show ctiqbe command. If there are a lot of CTIQBE sessions and the pager command is not configured, it can take a while for the show ctiqbe command output to reach the end.

The following is sample output from the show ctiqbe command under the following conditions. There is only one active CTIQBE session set up across the PIX Firewall. It is established between an internal CTI device (for example, a Cisco IP SoftPhone) at local address 10.0.0.99 and an external Cisco CallManager at 172.29.1.77, where TCP port 2748 is the Cisco CallManager port. The heartbeat interval for the session is 120 seconds.

pixfirewall(config)# show ctiqbe
Total: 1
LOCAL FOREIGN STATE HEARTBEAT
---------------------------------------------------------------
1 10.0.0.99/1117 172.29.1.77/2748 1 120
----------------------------------------------
RTP/RTCP: PAT xlates: mapped to 172.29.1.99(1028 - 1029)
----------------------------------------------
MEDIA: Device ID 27 Call ID 0
Foreign 172.29.1.99 (1028 - 1029)
Local 172.29.1.88 (26822 - 26823)
----------------------------------------------

The CTI device has already registered with the CallManager. The device's internal address and RTP listening port are PATed to 172.29.1.99 UDP port 1028. Its RTCP listening port is PATed to UDP 1029. The line beginning with RTP/RTCP: PAT xlates: appears only if an internal CTI device has registered with an external CallManager and the CTI device's address and ports are PATed to that external interface.
This line does not appear if the CallManager is located on an internal interface, or if the internal CTI device's address and ports are NATed to the same external interface that is used by the CallManager. The output indicates a call has been established between this CTI device and another phone at 172.29.1.88. The RTP and RTCP listening ports of the other phone are UDP 26822 and 26823. The other phone is located on the same interface as the CallManager because the PIX Firewall does not maintain a CTIQBE session record associated with the second phone and CallManager. The active call leg on the CTI device side can be identified with Device ID 27 and Call ID 0. The following is the xlate information for these CTIQBE connections:

pixfirewall(config)# show xlate debug
3 in use, 3 most used
Flags: D - DNS, d - dump, I - identity, i - inside, n - no random, o - outside, r - portmap, s - static
TCP PAT from inside:10.0.0.99/1117 to outside:172.29.1.99/1025 flags ri idle 0:00:22 timeout 0:00:30
UDP PAT from inside:10.0.0.99/16908 to outside:172.29.1.99/1028 flags ri idle 0:00:00 timeout 0:04:10
UDP PAT from inside:10.0.0.99/16909 to outside:172.29.1.99/1029 flags ri idle 0:00:23 timeout 0:04:10

Set the maximum length for the DNS fixup as shown in the following example:

pixfirewall(config)# fixup protocol dns maximum-length 1500

Note The PIX Firewall drops DNS packets sent to UDP port 53 that are larger than the configured maximum length. The default value is 512 bytes. A syslog message will be generated when a DNS packet is dropped.

The no fixup protocol dns command disables the DNS fixup. The clear fixup protocol dns command resets the DNS fixup to its default settings (512-byte maximum packet length).

Note If the DNS fixup is disabled, the A-record is not NATed and the DNS ID is not matched in requests and responses. By disabling the DNS fixup, the maximum length check on UDP DNS packets can be bypassed, and packets greater than the configured maximum length will be permitted.
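The show xlate debug lines shown above follow a regular format that is easy to pick apart when troubleshooting. The following is a hypothetical log-parsing helper, assuming exactly the line format of the sample output above; the regex and field names are invented for illustration, not part of any PIX tool.

```python
# Hypothetical parser for 'show xlate debug' PAT lines of the form:
#   TCP PAT from inside:10.0.0.99/1117 to outside:172.29.1.99/1025 flags ri ...
import re

XLATE_RE = re.compile(
    r"(?P<proto>TCP|UDP) PAT from (?P<in_if>\w+):(?P<in_ip>[\d.]+)/(?P<in_port>\d+)"
    r" to (?P<out_if>\w+):(?P<out_ip>[\d.]+)/(?P<out_port>\d+) flags (?P<flags>\S+)")

def parse_xlate(line):
    """Return a dict of the translation fields, or None if the line
    does not look like a PAT xlate entry."""
    m = XLATE_RE.match(line)
    return m.groupdict() if m else None

entry = parse_xlate(
    "TCP PAT from inside:10.0.0.99/1117 to outside:172.29.1.99/1025 "
    "flags ri idle 0:00:22 timeout 0:00:30")
print(entry["proto"], entry["in_ip"], entry["out_ip"], entry["out_port"])
# TCP 10.0.0.99 172.29.1.99 1025
```

A helper like this makes it straightforward to, for example, list every outside port a given inside host has been PATed to.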
• Use caution when moving FTP to a higher port. For example, if you set the FTP port to 2021 by entering fixup protocol ftp 2021, all connections that initiate to port 2021 will have their data payload interpreted as FTP commands.

The following is an example of a fixup protocol ftp command configuration that uses multiple FTP fixups:

: For a PIX Firewall with two interfaces
ip address outside 192.168.1.1 255.255.255.0
ip address inside 10.1.1.1 255.255.255.0
: There is an inside host 10.1.1.15 that will be
: exported as 192.168.1.15. This host runs the FTP
: services at port 21 and 1021
static (inside, outside) 192.168.1.15 10.1.1.15
: Construct an access list to permit inbound FTP traffic to
: port 21 and 1021
access-list outside permit tcp any host 192.168.1.15 eq ftp
access-list outside permit tcp any host 192.168.1.15 eq 1021
access-group outside in interface outside
: Specify that traffic to port 21 and 1021 are FTP traffic
fixup protocol ftp 21
fixup protocol ftp 1021

If you disable FTP fixups with the no fixup protocol ftp command, outbound users can start connections only in passive mode, and all inbound FTP is disabled. The strict option to the fixup protocol ftp command prevents web browsers from sending embedded commands in FTP requests. Each FTP command must be acknowledged before a new command is allowed. Connections sending embedded commands are dropped. The strict option only lets an FTP server generate the 227 command and only lets an FTP client generate the PORT command. The 227 and PORT commands are checked to ensure they do not appear in an error string.

Additionally, fixup protocol h323 port becomes fixup protocol h323 h225 port. You can disable H.225 signaling or RAS fixup (or both) with the no fixup protocol h323 {h225 | ras} port [-port] command. PIX Firewall software Version 6.3 and higher supports H.323 v3 and v4 messages as well as the H.323 v3 feature Multiple Calls on One Call Signaling Channel.
The show h225, show h245, and show h323-ras commands display connection information for troubleshooting H.323 fixup issues.

Note Before using the show h225, show h245, or show h323-ras commands, we recommend that you configure the pager command. If there are a lot of session records and the pager command is not configured, it may take a while for the show output to reach its end. If there is an abnormally large number of connections, check that the sessions are timing out based on the default timeout values or the values set by you. If they are not, then there is a problem that needs to be investigated.

The show h225 command displays information for H.225 sessions established across the PIX Firewall. Along with the debug h323 h225 event, debug h323 h245 event, and show local-host commands, this command is used for troubleshooting H.323 fixup issues. The following is sample output from the show h225 command:

pixfirewall(config)# show h225
Total H.323 Calls: 1
1 Concurrent Call(s) for
Local: 10.130.56.3/1040 Foreign: 172.30.254.203/1720
1. CRV 9861
Local: 10.130.56.3/1040 Foreign: 172.30.254.203/1720
0 Concurrent Call(s) for
Local: 10.130.56.4/1050 Foreign: 172.30.254.205/1720

This output indicates that there is currently one active H.323 call going through the PIX Firewall between the local endpoint 10.130.56.3 and foreign host 172.30.254.203, and for these particular endpoints, there is one concurrent call between them, with a CRV (Call Reference Value) for that call of 9861. For the local endpoint 10.130.56.4 and foreign host 172.30.254.205, there are 0 concurrent calls. This means that there is no active call between the endpoints even though the H.225 session still exists. This could happen if, at the time of the show h225 command, the call has already ended but the H.225 session has not yet been deleted.
Alternately, it could mean that the two endpoints still have a TCP connection opened between them because they set “maintainConnection” to TRUE, so the session is kept open until they set it to FALSE again, or until the session times out based on the H.225 timeout value in your configuration.

The show h245 command displays information for H.245 sessions established across the PIX Firewall by endpoints using slow start. (Slow start is when the two endpoints of a call open another TCP control channel for H.245. Fast start is where the H.245 messages are exchanged as part of the H.225 messages on the H.225 control channel.) Along with the debug h323 h245 event, debug h323 h225 event, and show local-host commands, this command is used for troubleshooting H.323 fixup issues. The following is sample output from the show h245 command:

pixfirewall(config)# show h245
Total: 1
        LOCAL              TPKT   FOREIGN                TPKT
1       10.130.56.3/1041   0      172.30.254.203/1245    0
        MEDIA: LCN 258  Foreign 172.30.254.203 RTP 49608 RTCP 49609
                        Local   10.130.56.3    RTP 49608 RTCP 49609
        MEDIA: LCN 259  Foreign 172.30.254.203 RTP 49606 RTCP 49607
                        Local   10.130.56.3    RTP 49606 RTCP 49607

There is currently one H.245 control session active across the PIX Firewall. The local endpoint is 10.130.56.3, and we are expecting the next packet from this endpoint to have a TPKT header since the TPKT value is 0. (The TPKT header is a 4-byte header preceding each H.225/H.245 message. It gives the length of the message, including the 4-byte header.) The foreign host endpoint is 172.30.254.203, and we are expecting the next packet from this endpoint to have a TPKT header since the TPKT value is 0. The media negotiated between these endpoints have a LCN (logical channel number) of 258 with the foreign RTP IP address/port pair of 172.30.254.203/49608 and a RTCP IP address/port of 172.30.254.203/49609 with a local RTP IP address/port pair of 10.130.56.3/49608 and a RTCP port of 49609.
The second LCN of 259 has a foreign RTP IP address/port pair of 172.30.254.203/49606 and a RTCP IP address/port pair of 172.30.254.203/49607 with a local RTP IP address/port pair of 10.130.56.3/49606 and RTCP port of 49607.

The show h323-ras command displays information for H.323 RAS sessions established across the PIX Firewall between a gatekeeper and its H.323 endpoint. Along with the debug h323 ras event and show local-host commands, this command is used for troubleshooting H.323 RAS fixup issues. The following is sample output from the show h323-ras command:

pixfirewall(config)# show h323-ras
Total: 1
        GK                 Caller
        172.30.254.214     10.130.56.14

This output shows that there is one active registration between the gatekeeper 172.30.254.214 and its client 10.130.56.14.

Note The no fixup protocol http command statement also disables the filter url command.

The no fixup protocol mgcp command removes the MGCP fixup configuration. The show fixup protocol mgcp command displays the configured MGCP fixups. Please refer to the mgcp command for information on the show mgcp command.

The PPTP fixup must be enabled for PPTP traffic to be translated by PAT. Additionally, PAT is only performed for a modified version of GRE (RFC 2637) and only if it is negotiated over the PPTP TCP control channel. PAT is not performed for the unmodified version of GRE (RFC 1701 and RFC 1702).

If you are using Cisco IP/TV, use RTSP TCP port 554 and TCP 8554:

fixup protocol rtsp 554
fixup protocol rtsp 8554

PIX Firewall software Version 6.2 and higher supports PAT for SIP. In PIX Firewall software Version 6.3 and later, you can disable the SIP fixup for both UDP and TCP signaling with the commands no fixup protocol sip udp 5060 and no fixup protocol sip [port[-port]], respectively. For additional information about the SIP protocol, see RFC 2543. For additional information about the Session Description Protocol (SDP), see RFC 2327.
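As with FTP, the SIP fixup can also be applied to a nonstandard port. A minimal sketch (the port number 5061 is only illustrative):

```
fixup protocol sip 5061
```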
The show sip command displays information for SIP sessions established across the PIX Firewall. Along with the debug sip and show local-host commands, this command is used for troubleshooting SIP fixup issues. Note We recommend that you configure the pager command before using the show sip command. If there are a lot of SIP session records and the pager command is not configured, it will take a while for the show sip command output to reach its end. This sample shows two active SIP sessions on the PIX Firewall (as shown in the Total field). Each call-id represents a call. The first session, with the call-id c3943000-960ca-2e43-228f@10.130.56.44, is in the state Call Init, which means the session is still in call setup. Call setup is not complete until a final response to the call has been received. For instance, the caller has already sent the INVITE, and maybe received a 100 Response, but has not yet seen the 200 OK, so the call setup is not complete yet. Any non-1xx response message is considered a final response. This session has been idle for 1 second. The second session is in the state Active, in which call setup is complete and the endpoints are exchanging media. This session has been idle for 6 seconds. Note If the address of an internal Cisco CallManager is configured for NAT or PAT to a different IP address or port, registrations for external Cisco IP Phones will fail because the PIX Firewall currently does not support NAT or PAT for the file content transferred via TFTP. Although the PIX Firewall does support NAT of TFTP messages, and opens a pinhole for the TFTP file to traverse the firewall, the PIX Firewall cannot translate the Cisco CallManager IP address and port embedded in the Cisco IP Phone's configuration files that are being transferred using TFTP during phone registration. If skinny messages are fragmented, then the firewall does not recognize or inspect them. 
Skinny message fragmentation can occur when a call is established that includes a conference bridge. The firewall tracks the skinny protocol to open conduits for RTP traffic to flow through; however, with the skinny messages fragmented, the firewall cannot correctly set up this conduit.

The show skinny command displays information of Skinny (SCCP) sessions established across the PIX Firewall. Along with debug skinny and show local-host, this command is used for troubleshooting Skinny fixup issues.

Note We recommend that you have the pager command configured before using the show skinny command. If there are a lot of Skinny sessions and the pager command is not configured, it can take a while for the show skinny command output to reach the end.

The following is sample output from the show skinny command under the following conditions. There are two active Skinny sessions set up across the PIX Firewall. The first one is established between an internal Cisco IP Phone at local address 10.0.0.11 and an external Cisco CallManager at 172.18.1.33. TCP port 2000 is the CallManager. The second one is established between another internal Cisco IP Phone at local address 10.0.0.22 and the same Cisco CallManager.

pixfirewall(config)# show skinny
        LOCAL                   FOREIGN                 STATE
---------------------------------------------------------------
1       10.0.0.11/52238         172.18.1.33/2000        1
  MEDIA 10.0.0.11/22948         172.18.1.22/20798
2       10.0.0.22/52232         172.18.1.33/2000        1
  MEDIA 10.0.0.22/20798         172.18.1.11/22948

The output indicates a call has been established between both internal Cisco IP Phones. The RTP listening ports of the first and second phones are UDP 22948 and 20798 respectively.
The following is the xlate information for these Skinny connections:

pixfirewall(config)# show xlate debug
2 in use, 2 most used
Flags: D - DNS, d - dump, I - identity, i - inside, n - no random,
       o - outside, r - portmap, s - static
NAT from inside:10.0.0.11 to outside:172.18.1.11 flags si idle 0:00:16 timeout 0:05:00
NAT from inside:10.0.0.22 to outside:172.18.1.22 flags si idle 0:00:14 timeout 0:05:00

Note During an interactive SMTP session, various SMTP security rules may reject or deadlock your Telnet session. These rules include the following: SMTP commands must be at least four characters in length; must be terminated with carriage return and line feed; and must wait for a response before issuing the next reply.

As of PIX Firewall software Version 5.1 and higher, the fixup protocol smtp command changes the characters in the SMTP banner to asterisks except for the “2”, “0”, “0” characters. Carriage return (CR) and linefeed (LF) characters are ignored. In PIX Firewall software Version 4.4, all characters in the SMTP banner are converted to asterisks.

Examples
The following example enables access to an inside server running Mail Guard:

static (inside,outside) 209.165.201.1 192.168.42.1 netmask 255.255.255.255
access-list acl_out permit tcp any host 209.165.201.1 eq smtp
access-group acl_out in interface outside
fixup protocol smtp 25

In this example, the static command sets up a global address to permit outside hosts access to the 10.1.1.1 mail server host on the dmz1 interface. (The MX record for DNS needs to point to the 209.165.201.1 address so that mail is sent to this address.) The access-list command lets any outside users access the global address through the SMTP port (25). The no fixup protocol command disables the Mail Guard feature.
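Disabling Mail Guard afterward, for instance when a mail server performs its own ESMTP filtering, is a single command. A minimal sketch, assuming the default SMTP port:

```
no fixup protocol smtp 25
```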
The following example shows how to enable the MGCP fixup on your firewall:

pixfirewall(config)# fixup protocol mgcp 2427
pixfirewall(config)# fixup protocol mgcp 2727
pixfirewall(config)# show running-config
: Saved
:
PIX Version 6.3
interface ethernet0 auto
interface ethernet1 auto
interface ethernet2 auto shutdown
domain-name cisco.com

The following example shows how to remove the MGCP fixup from your configuration:

pixfirewall(config)# show fixup protocol mgcp
fixup protocol mgcp 2427
fixup protocol mgcp 2727
pixfirewall(config)# no fixup protocol mgcp
pixfirewall(config)#

Related Commands
debug  Displays debug information for Media Gateway Control Protocol (MGCP) traffic.

flashfs
Clear, display, or downgrade filesystem information.
clear flashfs
show flashfs

Syntax Description
downgrade 4.x  Clear the filesystem information from Flash memory before downgrading to PIX Firewall software Version 4.0, 4.1, 4.2, 4.3, or 4.4.
downgrade 5.0 | 5.1  Write the filesystem to Flash memory before downgrading to the appropriate PIX Firewall software Version 5.0 or higher.

Usage Guidelines
The clear flashfs and the flashfs downgrade 4.x commands clear the filesystem part of Flash memory in the PIX Firewall. Versions 4.n cannot use the information in the filesystem; it needs to be cleared to let the earlier version operate correctly. The flashfs downgrade 5.x command reorganizes the filesystem part of Flash memory so that information stored in the filesystem can be accessed by the earlier version. The PIX Firewall maintains a filesystem in Flash memory to store system information, IPSec private keys, certificates, and CRLs. It is crucial that you clear or reformat the filesystem before downgrading to a previous PIX Firewall version. Otherwise, your filesystem will get out of sync with the actual contents of the Flash memory and cause problems when the unit is later upgraded.
Note When downgrading to PIX Firewall Versions 5.0 or 5.1, which support a maximum 4 MB of Flash memory, configuration files larger than 4 MB will be truncated and some configuration information will be lost. You only need to use the flashfs downgrade 5.x command if your PIX Firewall has 16 MB of Flash memory, if you have IPSec private keys, certificates, or CRLs stored in Flash memory, and you used the ca save all command to save these items in Flash memory.

The flashfs downgrade 5.x command fails if the filesystem indicates that any part of the image, configuration, or private data in the Flash memory device is unusable. The clear flashfs and flashfs downgrade commands do not affect the configuration stored in Flash memory. The clear flashfs command is the same as the flashfs downgrade 4.x command.

The show flashfs command displays the size in bytes of each filesystem sector and the current state of the filesystem. The data in each sector is as follows:

• file 0—PIX Firewall binary image, where the .bin file is stored.
• file 1—PIX Firewall configuration data that you can view with the show config command.
• file 2—PIX Firewall datafile that stores IPSec key and certificate information.
• file 3—flashfs downgrade information for the show flashfs command.
• file 4—The compressed PIX Firewall image size in Flash memory.

Examples
The following is sample output from the show flashfs command:

pixfirewall(config)# show flashfs
flash file system: version:2 magic:0x12345679
  file 0: origin:       0 length:1511480
  file 1: origin: 2883584 length:3264
  file 2: origin:       0 length:0
  file 3: origin: 3014656 length:4444164
  file 4: origin: 8257536 length:280

Use the following command to write the filesystem to Flash memory before downgrading to a lower version of software:

pixfirewall(config)# flashfs downgrade 5.3

The origin values are integer multiples of the underlying filesystem sector size.

floodguard
Enable or disable Flood Guard to protect against flood attacks.
floodguard enable
floodguard disable
clear floodguard
show floodguard

Usage Guidelines
The floodguard command lets the PIX Firewall reclaim user authentication (uauth) resources when they are depleted. Resources are reclaimed aggressively in the following order:

1. Timewait
2. LastAck
3. FinWait
4. Embryonic
5. Idle

The floodguard command is enabled by default.

Examples
The following example enables the floodguard command and lists the floodguard command statement in the configuration:

floodguard enable
show floodguard
floodguard enable

fragment
The fragment command provides additional management of packet fragmentation and improves compatibility with NFS.
clear fragment

Syntax Description
chain  Specifies the maximum number of packets into which a full IP packet can be fragmented. The default is 24.
chain-limit  The default is 24. The maximum is 8200.
clear  Resets the fragment databases and defaults. All fragments currently waiting for reassembly are discarded and the size, chain, and timeout options are reset to their default values.
database-limit  The default is 200. The maximum is 1,000,000 or the total number of blocks.
interface  The PIX Firewall interface. If not specified, the command will apply to all interfaces.
seconds  The default is 5 seconds. The maximum is 30 seconds.
show  Displays the state of the fragment database:
• Size—Maximum packets set by the size option.
• Chain—Maximum fragments for a single packet set by the chain option.
• Timeout—Maximum seconds set by the timeout option.
• Queue—Number of packets currently awaiting reassembly.
• Assemble—Number of packets successfully reassembled.
• Fail—Number of packets which failed to be reassembled.
• Overflow—Number of packets which overflowed the fragment database.
size  Sets the maximum number of packets in the fragment database. The default is 200.
timeout  Specifies the maximum number of seconds that a packet fragment will wait to be reassembled after the first fragment is received before being discarded. The default is 5 seconds.

Usage Guidelines
By default the PIX Firewall accepts up to 24 fragments to reconstruct a full IP packet.
If a large percentage of the network traffic through the PIX Firewall is NFS, additional tuning may be necessary to avoid database overflow. See system log message 209003 for additional information. In an environment where the MTU between the NFS server and client is small, such as a WAN interface, the chain option may require additional tuning. In this case, NFS over TCP is highly recommended to improve efficiency.

Setting the database-limit of the size option to a large value can make the PIX Firewall more vulnerable to a DoS attack by fragment flooding. Do not set the database-limit equal to or greater than the total number of blocks in the 1550 or 16384 pool. See the show block command for more details. The default values will limit DoS due to fragment flooding to that interface only.

The show fragment [interface] command displays the states of the fragment databases. If the interface name is specified, only the database residing at the specified interface is displayed.

Examples
For example, to prevent fragmented packets on the outside and inside interfaces, enter the following commands:

pixfirewall(config)# fragment chain 1 outside
pixfirewall(config)# fragment chain 1 inside

Continue entering the fragment chain 1 interface command for each additional interface on which you want to prevent fragmented packets. The following example configures the outside fragment database to limit a maximum size of 2000, a maximum chain length of 45, and a wait time of 10 seconds:

pixfirewall(config)# fragment size 2000 outside
pixfirewall(config)# fragment chain 45 outside
pixfirewall(config)# fragment timeout 10 outside

The clear fragment command resets the fragment databases. Specifically, all fragments awaiting re-assembly are discarded. In addition, the size is reset to 200; the chain limit is reset to 24; and the timeout is reset to 5 seconds. The show fragment command displays the states of the fragment databases.
If the interface name is specified, only the database residing at the specified interface is displayed.

pixfirewall(config)# show fragment outside
Interface: outside
  Size: 2000, Chain: 45, Timeout: 10
  Queue: 1060, Assemble: 809, Fail: 0, Overflow: 0

The preceding example shows that the "outside" fragment database has the following:

• A database size limit of 2000 packets.
• A chain length limit of 45 fragments.
• A timeout of ten seconds.
• 1060 packets currently awaiting re-assembly.
• 809 packets fully reassembled.
• No failures.
• No overflows.

This fragment database is under heavy usage. The PIX Firewall also includes FragGuard for additional IP fragmentation protection. For more information refer to the Cisco PIX Firewall and VPN Configuration Guide.

global
Create or delete entries from a pool of global addresses.
clear global
show global

Syntax Description
clear  Removes global command statements from the configuration.
global_ip  One or more global IP addresses that the PIX Firewall shares among its connections. If the external network is connected to the Internet, each global IP address must be registered with the Network Information Center (NIC). You can specify a range of IP addresses by separating the addresses with a dash (-). You can create a Port Address Translation (PAT) global command statement by specifying a single IP address. You can have more than one PAT global command statement per interface. A PAT can support up to 65,535 xlate objects.
global_mask  The network mask for global_ip. If subnetting is in effect, use the subnet mask; for example, 255.255.255.128. If you specify an address range that overlaps subnets, global will not use the broadcast or network addresses in the pool of global addresses. For example, if you use 255.255.255.224 and an address range of 209.165.201.1-209.165.201.30, the 209.165.201.31 broadcast address and the 209.165.201.0 network address will not be included in the pool of global addresses.
if_name  The external network where you use these global addresses.
interface  Specifies PAT using the IP address at the interface.
nat_id  A positive number shared with the nat command that groups the nat and global command statements together. The valid ID numbers can be any positive number up to 2,147,483,647.
netmask  Reserved word that prefaces the network global_mask variable.

Usage Guidelines
The global command defines a pool of global addresses. The global addresses in the pool provide an IP address for each outbound connection, and for those inbound connections resulting from outbound connections. Ensure that associated nat and global command statements have the same nat_id.

Use caution with names that contain a “-” (dash) character because the global command interprets the last (or only) “-” character in the name as a range specifier instead of as part of the name. For example, the global command treats the name “host-net2” as a range from “host” to “net2”. If the name is “host-net2-section3” then it is interpreted as a range from “host-net2” to “section3”.

The following command form is used for Port Address Translation (PAT) only:

global [(if_name)] nat_id {{global_ip} [netmask global_mask] | interface}

After changing or removing a global command statement, use the clear xlate command. Use the no global command to remove access to a nat_id, or to a Port Address Translation (PAT) address, or address range within a nat_id. The show global command displays the global command statements in the configuration.

PAT
You can enable the Port Address Translation (PAT) feature by entering a single IP address with the global command. PAT lets multiple outbound sessions appear to originate from a single IP address. With PAT enabled, the PIX Firewall translates each of these sessions to the single global address by assigning it a unique source port. When a PAT augments a pool of global addresses, first the addresses from the global pool are used, then the next connection is taken from the PAT address.
If a global pool address is available, it is used before the PAT address, as in the following example:

global (outside) 1 209.165.201.1-209.165.201.10 netmask 255.255.255.224
global (outside) 1 209.165.201.22 netmask 255.255.255.224

PAT does not work with H.323 applications and caching nameservers. Do not use a PAT when multimedia applications need to be run through the PIX Firewall. Multimedia applications can conflict with port mappings provided by PAT.

The firewall does not PAT all ICMP message types; it only PATs ICMP echo and echo-reply packets (types 8 and 0). Specifically, only ICMP echo or echo-reply packets create a PAT xlate. So, when the other ICMP message types are dropped, syslog message 305006 (on the PIX Firewall) is generated.

PAT does not work with the established command. PAT works with DNS, FTP and passive FTP, HTTP, email, RPC, rshell, Telnet, URL filtering, and outbound traceroute. However, for use with passive FTP, use the fixup protocol ftp strict command statement with an access-list command statement to permit outbound FTP traffic, as shown in the following example:

fixup protocol ftp strict
access-list acl_in permit tcp any any eq ftp
access-group acl_in in interface inside
nat (inside) 1 0 0
global (outside) 1 209.165.201.5 netmask 255.255.255.224

To specify PAT using the IP address of an interface, specify the interface keyword in the global [(int_name)] nat_id address | interface command. The following example enables PAT using the IP address at the outside interface in global configuration mode:

ip address outside 192.150.49.1
nat (inside) 1 0 0
global (outside) 1 interface

The interface IP address used for PAT is the address associated with the interface when the xlate (translation slot) is created. This is important for configuring DHCP, allowing for the DHCP retrieved address to be used for PAT. When PAT is enabled on an interface, there should be no loss of TCP, UDP, and ICMP services. These services allow for termination at the PIX Firewall unit's outside interface.
To track usage among different subnets, you can specify multiple PATs using the following supported configurations:

The following example maps hosts on the internal network 10.1.0.0/24 to global address 192.168.1.1, and hosts on the internal network 10.1.1.0/24 to global address 209.165.200.225, in global configuration mode:

nat (inside) 1 10.1.0.0 255.255.255.0
nat (inside) 2 10.1.1.0 255.255.255.0
global (outside) 1 192.168.1.1 netmask 255.255.255.0
global (outside) 2 209.165.200.225 netmask 255.255.255.224

The following example configures two port addresses for setting up PAT on hosts from the internal network 10.1.0.0/16 in global configuration mode:

nat (inside) 1 10.1.0.0 255.255.0.0
global (outside) 1 209.165.200.225 netmask 255.255.255.224
global (outside) 1 192.168.1.1 netmask 255.255.255.0

With this configuration, address 192.168.1.1 will only be used when the port pool from address 209.165.200.225 is at maximum capacity.

A DNS server on a higher level security interface needing to get updates from a root name server on the outside interface cannot use PAT. Instead, a static command statement must be added to map the DNS server to a global address on the outside interface. For example, PAT is enabled with these commands:

nat (inside) 1 192.168.1.0 255.255.255.0
global (outside) 1 209.165.202.128 netmask 255.255.255.224

However, a DNS server on the inside at IP address 192.168.1.5 cannot correctly reach the root name server on the outside at IP address 209.165.202.130. To ensure that the inside DNS server can access the root name server, insert the following static command statement:

static (inside,outside) 209.165.202.129 192.168.1.5

The global address 209.165.202.129 provides a translated address for the inside server at IP address 192.168.1.5.

Examples
The following example declares two global pool ranges and a PAT address.
Then the nat command permits all inside users to start connections to the outside network:

global (outside) 1 209.165.201.1-209.165.201.10 netmask 255.255.255.224
global (outside) 1 209.165.201.12 netmask 255.255.255.224
Global 209.165.201.12 will be Port Address Translated
nat (inside) 1 0 0
clear xlate

The next example creates a global pool from two contiguous pieces of a Class C address and gives the perimeter hosts access to this pool of addresses to start connections on the outside interface:

global (outside) 1000 209.165.201.1-209.165.201.14 netmask 255.255.255.240
global (outside) 1000 209.165.201.17-209.165.201.30 netmask 255.255.255.240
nat (perimeter) 1000 0 0

help
Display help information.
help command

Syntax Description
?  Displays all commands available in the current privilege level and mode.
command  Specifies the PIX Firewall command for which to display the PIX Firewall command-line interface (CLI) help.
help  If no command name is specified, displays all commands available in the current privilege level and mode; otherwise, displays the PIX Firewall CLI help for the command specified.

Usage Guidelines
The help or ? command displays help information about all commands. You can view help for an individual command by entering the command name followed by a “?” (question mark). If the pager command is enabled and when 24 lines display, the listing pauses, and the following prompt appears:

<--- More --->

The More prompt uses syntax similar to the UNIX more command:
• To view another screenful, press the Space bar.
• To view the next line, press the Enter key.
• To return to the command line, press the q key.

Examples
The following example shows how you can display help information by following the command name with a question mark:

enable ?
usage: enable password <pw> [encrypted]

Help information is available on the core commands (not the show, no, or clear commands) by entering ? at the command prompt:

?
aaa  Enable, disable, or view TACACS+ or RADIUS user authentication, authorization and accounting
…

hostname
Change the host name in the PIX Firewall command-line prompt.
hostname newname

Syntax Description
newname  Specifies a new host name for the firewall and is displayed in the firewall prompt. This name can be up to 63 characters, including alphanumeric characters, spaces or any of the following special characters: ‘ ( ) + - , . / : = ?

Usage Guidelines
The hostname command changes the host name label on prompts. The default host name is pixfirewall.

Note The change of the host name causes the change of the fully qualified domain name. Once the fully qualified domain name is changed, delete the RSA key pairs with the ca zeroize rsa command and delete related certificates with the no ca identity ca_nickname command.

http
Enables the PIX Firewall HTTP server and specifies the clients that are permitted to access it. Additionally, for access, the Cisco PIX Device Manager (PDM) requires that the PIX Firewall have an enabled HTTP server.
clear http
show http

Syntax Description
clear http  Removes all HTTP hosts and disables the server.
http  Relating to the Hypertext Transfer Protocol.
http server enable  Enables the HTTP server required to run PDM.
if_name  PIX Firewall interface name on which the host or network initiating the HTTP connection resides.
ip_address  Specifies the host or network authorized to initiate an HTTP connection to the PIX Firewall.
netmask  Specifies the network mask for the http ip_address.

Defaults
If you do not specify a netmask, the default is 255.255.255.255 regardless of the class of IP address. The default if_name is inside.

Usage Guidelines
Access from any host will be allowed if 0.0.0.0 0.0.0.0 (or 0 0) is specified for ip_address and netmask. The show http command displays the allowed hosts and whether or not the HTTP server is enabled.
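Because PDM requires the HTTP server, a typical configuration enables the server before authorizing clients. A minimal sketch, with an illustrative inside network:

```
http server enable
http 10.1.1.0 255.255.255.0 inside
```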
Examples
The following http command example is used for one host:

http 16.152.1.11 255.255.255.255 outside

icmp
Configure access rules for Internet Control Message Protocol (ICMP) traffic that terminates at an interface.
clear icmp
show icmp

Usage Guidelines
By default, the PIX Firewall denies all inbound traffic through the outside interface. Based on your network security policy, you should consider configuring the PIX Firewall to deny all ICMP traffic at the outside interface, or any other interface you deem necessary, by using the icmp command. The icmp command controls ICMP traffic that is received by the firewall. If no ICMP control list is configured, then the PIX Firewall accepts all ICMP traffic that terminates at any interface (including the outside interface), except that the PIX Firewall does not respond to ICMP echo requests directed to a broadcast address.

The icmp deny command disables pinging to an interface, and the icmp permit command enables pinging to an interface. With pinging disabled, the PIX Firewall cannot be detected on the network. This is also referred to as configurable proxy pinging. For traffic that is routed through the PIX Firewall only, you can use the access-list or access-group commands to control the ICMP traffic routed through the PIX Firewall.

We recommend that you grant permission for the ICMP unreachable message type (type 3). Denying ICMP unreachable messages disables ICMP Path MTU discovery, which can halt IPSec and PPTP traffic. See RFC 1191 and RFC 1435 for details about Path MTU Discovery.

If an ICMP control list is configured, then the PIX Firewall uses a first match to the ICMP traffic followed by an implicit deny all. That is, if the first matched entry is a permit entry,
On Tuesday 01 Aug 2006 14:42, Dragan Noveski wrote:
> as i see, the vamp stuff is required, but can you perhaps tell me
> which sort of vamp i should get/how to deal with the vamp-sdk stuff?

You need the vamp-plugin-sdk package from the same SourceForge download page as Sonic Visualiser. Unpack it into the same directory as you put Sonic Visualiser, and build it in place (it just has a simple Makefile and no installer). You may want the oggz and fishsound libraries as well for Ogg file import (the qmake configuration reported it couldn't find them).

Chris

Received on Tue Aug 1 20:15:04 2006
As you might know, the Elastic APM Python agent currently has built-in support for two web frameworks: Django and Flask. But the ecosystem of Python web frameworks is teeming with dozens of frameworks. From the venerable elder statesmen like Zope, to the aforementioned big players like Django and Flask, and the new kids on the block like Falcon or Sanic, there's something for everybody. While this rich assortment of options is a great sign for the health of the Python web ecosystem, it comes with a drawback for products like Elastic APM: it is virtually impossible to offer built-in support for all of them. But I'm about to let you in on a little known secret: building your own custom integration isn't rocket science and can be done in less than 100 lines of code!

The anatomy of a framework integration

For an integration to be on par with our existing framework support like Django and Flask, there's a few things it should do:

- Call elasticapm.instrument() as early as possible during the startup of the framework.
- Instantiate an elasticapm.Client object.
- Begin and end transactions at the appropriate times during request handling.
- Figure out a name and “result” for the transaction (e.g. the route name for the request, and the HTTP status code as the result).
- Gather contextual information from the request and response objects.
- Hook into the exception handling of the framework, so we can report any application errors to the Elastic APM Server.

Building the integration

To make this a bit less abstract, let's randomly choose a web framework and build an integration for it. How about Pyramid? Cool? Cool. Pyramid, like many other Python web frameworks, is built on WSGI and supports WSGI middleware. A WSGI middleware is a wrapper around the app, and as such it sees any request going into the app, and the response coming out of the app. This is great for timing the request, but it has the drawback of being somewhat generic.
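The wrapping described above can be sketched in a few lines of generic WSGI middleware. The class and callback names here are illustrative, not part of Elastic APM's API; a real agent would record the timing instead of handing it to a callback:

```python
import time


class TimingMiddleware(object):
    """Sketch of a generic WSGI middleware that times each request.

    `on_request_done` is a hypothetical callback that receives the
    elapsed time in seconds once the wrapped app has produced its
    response.
    """

    def __init__(self, app, on_request_done):
        self.app = app
        self.on_request_done = on_request_done

    def __call__(self, environ, start_response):
        start = time.time()
        try:
            return self.app(environ, start_response)
        finally:
            # Runs after the wrapped app returned (or raised).
            self.on_request_done(time.time() - start)
```

Because the middleware only sees the WSGI `environ` dict, it knows nothing about the framework's routes, which is exactly the genericness the next step works around.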
For deep framework integration, we try to find something more “native” to the framework. As it happens, Pyramid has a concept called “tweens” that looks promising. To quote from the Pyramid docs:

Tweens behave a bit like WSGI middleware, but they have the benefit of running in a context in which they have access to the Pyramid request, response, and application registry, as well as the Pyramid rendering machinery.

That sounds exactly like what we need! As a starting point, we’ll take the class-based tween directly from the documentation:

class simple_tween_factory(object):
    def __init__(self, handler, registry):
        self.handler = handler
        self.registry = registry
        # one-time configuration code goes here

    def __call__(self, request):
        # code to be executed for each request before
        # the actual application code goes here

        response = self.handler(request)

        # code to be executed for each request after
        # the actual application code goes here

        return response

Checking our TODO list from above, it looks like step 2 can be done in __init__(), and steps 3 to 6 can be done in __call__(). Let’s start with step 1 though. The call to elasticapm.instrument() has to happen as early as possible during startup of the app. The instrument() call instruments a lot of different modules, so we can trace SQL queries, external HTTP requests, and other shenanigans your code might be up to. Many frameworks offer hooks which can be used to execute code during startup. For Pyramid, we can subscribe to the ApplicationCreated event:

from pyramid.events import ApplicationCreated, subscriber

import elasticapm

@subscriber(ApplicationCreated)
def elasticapm_instrument(event):
    elasticapm.instrument()

That looks about right. Back to our tween. We need to initialize an elasticapm.Client object and store it for later use on the tween. We simply instantiate the Client here, relying on any configuration being set via environment variables.
A more feature-complete integration could also try to pick up framework-specific configuration (in the case of Pyramid, this would probably be an .ini file).

import pkg_resources

import elasticapm

class elasticapm_tween_factory(object):
    def __init__(self, handler, registry):
        self.handler = handler
        self.registry = registry
        self.client = elasticapm.Client(
            framework_name="Pyramid",
            framework_version=pkg_resources.get_distribution("pyramid").version,
        )

We also set the framework name and version here. This is helpful information to have on errors and transactions. Step 3 is quick work as well: we wrap the call to the handler with the appropriate API calls to start and end the transaction:

    def __call__(self, request):
        self.client.begin_transaction('request')
        response = self.handler(request)
        self.client.end_transaction()
        return response

Now, if you used that code as is, all transactions would be grouped together in the APM UI. That’s because we didn’t provide a name and result for the transaction. Conveniently, that’s step 4, which comes next! In a web framework, the convention is to use the parametrized route (e.g. /users/{id}) as the transaction name, and the HTTP status class as the result (“status class” in this case means that we bunch all 2xx requests together, all 3xx, etc.). Avoid using the full URL as the transaction name, as this would create a new entry in the transaction list for every variation of the same route (e.g. /users/1, /users/2, etc.). Consulting the relevant documentation, it looks like Pyramid provides the route pattern on the request object. Unsurprisingly, the HTTP status code is provided on the response object. Looks like we’re all set for step 4!
    def __call__(self, request):
        self.client.begin_transaction('request')
        response = self.handler(request)
        transaction_name = request.matched_route.pattern \
            if request.matched_route else request.view_name
        # prepend request method
        transaction_name = " ".join((request.method, transaction_name)) \
            if transaction_name else ""
        transaction_result = response.status[0] + "xx"
        self.client.end_transaction(transaction_name, transaction_result)
        return response

Sweet! Readers who haven’t nodded off quite yet will have noticed that we’re falling back to request.view_name if request.matched_route is not set. Unfortunately, the real world is messy and we have to cope with that. We are also prepending the request method to the transaction name. This ensures that, for example, GET /users/{id} and DELETE /users/{id} get their own entries in the transaction list, as the code paths used by different HTTP methods can vary greatly. This brings us to step 5: contextual information. This can include information from the request (full URL, headers, cookies), the response (full status code, content type), the logged-in user (e.g. the user ID), and more. In this blog post, we’ll handle request and response as an example. We will put the code to gather the request/response data into reusable functions, because, SPOILER ALERT, we will be using them again in step 6.
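The naming convention just described can be distilled into two small pure helpers. The function names here (build_transaction_name, status_class) are my own for illustration and not part of the agent's API:

```python
def build_transaction_name(method, route_pattern, view_name=""):
    """Prefer the parametrized route pattern; fall back to the view name.
    Prepend the HTTP method so GET /users/{id} and DELETE /users/{id}
    get separate entries in the transaction list."""
    name = route_pattern or view_name
    return " ".join((method, name)) if name else ""

def status_class(status):
    """Bunch HTTP statuses into classes: '200 OK' becomes '2xx'."""
    return str(status)[0] + "xx"
```

Keeping these as pure functions also makes the naming rules trivial to unit test, independently of Pyramid's request and response objects.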
from elasticapm.utils import compat, get_url_dict

def get_data_from_request(request):
    data = {
        "headers": dict(**request.headers),
        "method": request.method,
        "socket": {
            "remote_address": request.remote_addr,
            "encrypted": request.scheme == 'https'
        },
        "cookies": dict(**request.cookies),
        "url": get_url_dict(request.url)
    }
    # remove Cookie header since the same data is in request["cookies"] as well
    data["headers"].pop("Cookie", None)
    return data

def get_data_from_response(response):
    data = {"status_code": response.status_int}
    if response.headers:
        data["headers"] = {
            key: ";".join(response.headers.getall(key))
            for key in compat.iterkeys(response.headers)
        }
    return data

You might have noticed that we need to do some special handling for cookies and headers. This is because request.cookies, request.headers and response.headers might look like dictionaries, but in fact are special-purpose dict-like objects. If we just passed those objects along, our JSON serializer would throw a tantrum when trying to send the data to the APM Server. Having done that, we need to call these functions before ending the transaction, and use elasticapm.set_context() to attach the data to the current transaction. To ensure that we only do all this work if the transaction is actually sampled, set_context() accepts a callable, which it will only call if the transaction is sampled. Let’s modify __call__ a bit by adding the necessary calls before self.client.end_transaction():

        elasticapm.set_context(lambda: get_data_from_request(request), "request")
        elasticapm.set_context(lambda: get_data_from_response(response), "response")
        self.client.end_transaction(transaction_name, transaction_result)
        return response

And there we have it! A Pyramid tween to instrument and measure HTTP requests, all in a couple dozen lines of code. If we lived in a perfect world and always wrote code without bugs, we would be done now. Alas…

Capturing exceptions

Capturing exceptions is an important part of APM.
Your app might be super quick because half the requests raise exceptions, and you’ll be none the wiser if you don’t monitor it. The good news is that most Python web frameworks have some kind of hook or callback you can register for getting notified of exceptions. In the case of Pyramid, we can wrap the call to the handler in a try/except block, and capture the exception (if any). A bit simplified, it looks like this:

import sys

from pyramid.compat import reraise

# ...
# in __call__():
        try:
            response = self.handler(request)
            return response
        except Exception:
            self.client.capture_exception(
                context={
                    "request": get_data_from_request(request)
                },
                # indicate that this exception bubbled all the way up to the user
                handled=False,
            )
            reraise(*sys.exc_info())
        finally:
            self.client.end_transaction(transaction_name, transaction_result)

And that’s it! We’re done! High fives for everybody, and let’s call it a day.

Step 7

Oh. What now? Right… we need to hook all of this up with our app! This part is very framework-specific, and there is often more than one way to do it. Looking at Pyramid, one option seems particularly nice: config.include. Using include, we can define an includeme function in our “integration” module, and then simply call config.include("elasticapm_integration") in the app config. Our includeme does two things: register the tween, and advise Pyramid to scan our module, which in turn should pick up our ApplicationCreated subscriber from way back in step 1.

def includeme(config):
    config.add_tween('elasticapm_integration.elasticapm_tween_factory')
    config.scan('elasticapm_integration')

Now that’s really it! I hope this example illustrates that creating a framework integration is no black magic. If you want to give it a try with your favorite framework and run into any trouble, come talk to us in our forum. We also run a survey for the Python agent, in which you can tell us which framework we should add next to our list of officially supported frameworks.
You can find the complete code of this example integration on GitHub. That repository also includes a sample TODO-list application, originating from the Pyramid Community Cookbook.
https://www.elastic.co/blog/creating-custom-framework-integrations-with-the-elastic-apm-python-agent
CC-MAIN-2018-47
refinedweb
1,744
50.43
I had to spend some time to allow a project of ours to use Elixir inside TG2. Maybe someone with more experience than me might have a better answer, but I have been able to make Elixir work this way. First of all, I had to make Elixir use my TG2 metadata and session by adding this line to each model file that has a class inheriting from elixir.Entity: from project_name.model import metadata as __metadata__, DBSession as __session__ Then I had to switch to the model __init__.py and add elixir.setup_all() to the init_model function just after DBSession.configure. This is really important, as it makes Elixir create all the SQLAlchemy tables; without this you won’t see anything happen for your Elixir-based models. Also, we can now import inside the model scope every elixir.Entity-inherited class like we usually do for DeclarativeBase children.
http://blog.axant.it/archives/115
CC-MAIN-2017-13
refinedweb
148
64
Wii Nunchuck Arduino Spirit Level

Introduction

I have thought about how to put it to good use and have tested it with two servos, but I had no time to continue that project, which has sat on my desk for a long time. Recently, when I hung a picture frame on the wall at home, I got an idea to make the Wii Nunchuck more useful: an electronic level, like a bubble level (or spirit level), using the accelerometer of the Wii Nunchuck and an Arduino compatible board, the JeonLab mini v1.3. Why JeonLab mini? Because 1) it is small enough to integrate into a small case; 2) once the program is loaded, the FTDI interface is not needed; and 3) since I'm going to use a small power supply, I don't need the 5V regulator circuit. It's a minimal Arduino, so you can easily make one on a small piece of prototype board. What this Arduino spirit level does is show whether a surface or edge (a picture frame or a table, for example) is leveled or not by lighting LEDs. If it is leveled, the central red LED is lit; otherwise the left or the right LED will turn on depending on which side it is tilted. It works similarly to a bubble level, so if it tilts left, the right green LED will turn on. I like it this way, but if you don't, you can modify the program and load it through the FTDI breakout board, which can be plugged in at the top right corner of the JeonLab mini board as shown in the picture. In the later steps, you will find the entire circuit diagram and Arduino code for this project.

Step 1: Circuit Diagram

Click on the "i" button at the top left corner of the diagram for a full size view. The circuit is quite simple. There is a minimal Arduino compatible board, the JeonLab mini v1.3, on the left of the diagram. As you can see, an ATmega328P chip with the Arduino bootloader, a 16MHz ceramic resonator, and a few resistors and capacitors can replace it.
There is no built-in FTDI interface, so you need an external FTDI breakout board or FTDI-USB cable to load the program. But that's not a big deal, and it helps reduce the overall size. The accelerometer board is from a broken Wii Nunchuck and communicates via the I2C interface: 3.3V, GND, the data pin (SDA) to Arduino analog input pin 4, and the clock pin (SCL) to Arduino analog input pin 5. The digital pins 5 to 9 are used to illuminate the LEDs to show which direction it is tilted. Digital pin 10 is normally pulled down through a 10k resistor and goes HIGH when the calibration switch is pressed, connecting the pin to V+. After some trials, I decided to use a 12V A23 size battery and a 3.3V regulator to provide 3.3V to both the accelerometer and the Arduino.

IMPORTANT NOTE ON THE POWER SUPPLY

The power supply I initially planned was a 3.0V battery, so I thought sharing the power should be fine. BUT I forgot about program upload through the FTDI. The accelerometer chip and the I2C interface need 3.3V (3.0-3.6V), while the ATmega328 on the JeonLab mini v1.3 (and other Arduino compatible boards as well) can work at 3-5V. The Nunchuck data reading header, nunchuck_funcs.h (from WiiChuckDemo by Tod E. Kurt), provides settings for using analog pins 3 and 2 as the power source for the Nunchuck board, but this provides 5V, not 3.3V. The problem is that a 5V supply to the Nunchuck board could damage the chip(s): either the accelerometer or the I2C chip, or both. Actually, the first one I used had gotten unstable and noisy after several tests, so it had to be replaced with a new one. That's when I decided to change the power source from the 3V battery to a 12V battery with a 3.3V regulator, and added a Schottky diode (1N5819) to protect the Nunchuck board from the FTDI 5V supply. This way, when the FTDI is connected, the 5V from a USB port ONLY powers the ATmega328P and not the accelerometer board.

Step 2: Parts

As you can see, many parts are from broken electronics.
"Broken" means it is just not working as it is supposed to. There are still many parts you can use for other projects.

Step 3: Assemble the Main Part (JeonLab Mini + Wii Nunchuck Accelerometer)

Step 4: Assembly of the Power Supply

Step 5: Assembly in the Case

Step 6: Wii_Nunchuck_Level Arduino Sketch Load

There are two Arduino sketches: 1) Wiichuck_range_test; and 2) Wii_Nunchuck_Level. First of all, you need to know the data range of each of the Nunchuck accelerometer's axes. I have tested a few of them and all of them show slight differences, but mostly the ranges are 60-70 at minimum and 180-200 at maximum. To get the best sensitivity from the Level, load Wiichuck_range_test first, find those values (minimum, center (at level), and maximum) for each axis, and adjust the values in the main sketch if necessary. The range test sketch is simple and straightforward, so I won't explain the details here. The main program, Wii_Nunchuck_Level, is shown below and also attached. The function getData() gets the values for each axis, stores them in the byte array accl[], finds the orientation of the Level, and returns 'orient'. The function orientation() takes the current axis values and finds the orientation of the Level. Each orientation gives two axes as middle (leveled, around 120-130) and one axis at either minimum or maximum, depending on the gravity direction. Using these characteristics, it determines the orientation. The main loop() is quite simple. It monitors whether the calibration button is pressed by checking if pin 10 is HIGH; otherwise it reads the current data and turns on the LEDs. The program compares the current data to the calibration data stored in the EEPROM of the ATmega328P.
There are three stages of display: 1) if the value is within a certain range, it turns on the central red LED connected to digital pin 7; 2) if the value is greater than the range (sens in the program) and less than 2 times the range, it turns on the red LED and one green LED on the side opposite the tilt (to simulate the bubble direction); 3) if the value is greater than 2 times the range, only the green LED is turned on. The calibration values are the neutral value of each axis reading when it is leveled on that axis. For example, the Nunchuck data reading is between 60-70 (these values are different from sensor to sensor) at -g (upside down) on that axis and is over 170 at g (upright). So the neutral (leveled) value of each axis is about 120-130. The calibration begins when pin 10 goes HIGH, by being connected to V+ with a small push button switch pressed. Once the calibration process begins, in order to give the user some time to put the device down on a flat surface, it waits while the central red LED blinks a few times. The actual calibration is done really quickly and is followed by a few quicker blinks.

/*
 * Wii_Nunchuck_Level
 * Feb-Mar 2012, Jinseok Jeon
 *
 * Wii Nunchuck data read:
 * nunchuck_funcs.h from WiiChuckDemo by Tod E. Kurt
 */

#include <Wire.h>
#include <EEPROM.h>
#include "nunchuck_funcs.h"

byte accl[3];    // accelerometer readings for x, y, z axis
int calPin = 10; // calibration pin
int sens = 1;    // sensitivity
int orient;

void setup() {
  nunchuck_init();
  for (int i = 5; i < 10; i++) {
    pinMode(i, OUTPUT); // 9 left, 8 up, 7 center, 6 down, 5 right
  }
  pinMode(calPin, INPUT);
  delay(100);
}

void loop() {
  // if the calibration pin is pressed, jump to function calibrate()
  if (digitalRead(calPin) == HIGH) calibrate();
  getData();
  if (orient == 1 || orient == 2) {
    if (abs(accl[0] - EEPROM.read(0 + orient * 10)) <= 2 * sens) digitalWrite(7, HIGH);
    if (accl[0] - EEPROM.read(0 + orient * 10) > sens) digitalWrite(5, HIGH);
    if (EEPROM.read(0 + orient * 10) - accl[0] > sens) digitalWrite(9, HIGH);
  }
  if (orient == 3 || orient == 4) {
    if (abs(accl[1] - EEPROM.read(1 + orient * 10)) <= 2 * sens) digitalWrite(7, HIGH);
    if (accl[1] - EEPROM.read(1 + orient * 10) > sens) digitalWrite(8, HIGH);
    if (EEPROM.read(1 + orient * 10) - accl[1] > sens) digitalWrite(6, HIGH);
  }
  if (orient == 5 || orient == 6) {
    if (abs(accl[0] - EEPROM.read(0 + orient * 10)) <= 2 * sens
        && abs(accl[1] - EEPROM.read(1 + orient * 10)) <= 2 * sens) digitalWrite(7, HIGH);
    if (accl[0] - EEPROM.read(0 + orient * 10) > sens) digitalWrite(5, HIGH);
    if (EEPROM.read(0 + orient * 10) - accl[0] > sens) digitalWrite(9, HIGH);
    if (accl[1] - EEPROM.read(1 + orient * 10) > sens) digitalWrite(8, HIGH);
    if (EEPROM.read(1 + orient * 10) - accl[1] > sens) digitalWrite(6, HIGH);
  }
  delay(50);
  for (int i = 5; i < 10; i++) { // turn off all LEDs
    digitalWrite(i, LOW);
  }
}

void getData() {
  nunchuck_get_data();
  accl[0] = nunchuck_accelx();
  accl[1] = nunchuck_accely();
  accl[2] = nunchuck_accelz();
  orient = orientation(); // get orientation
}

void calibrate() {
  for (int i = 0; i < 3; i++) {
    digitalWrite(7, HIGH);
    delay(500);
    digitalWrite(7, LOW);
    delay(500);
  }
  getData();
  for (int i = 0; i < 3; i++) {
    EEPROM.write(i + orient * 10, accl[i]);
  }
  for (int i = 0; i < 3; i++) {
    digitalWrite(7, HIGH);
    delay(200);
    digitalWrite(7, LOW);
    delay(200);
  }
}

int orientation() {
  if (accl[0] > 125 && accl[0] < 145 && accl[2] > 110 && accl[2] < 140) {
    if (accl[1] > 170) orient = 1;      // bottom on floor
    else if (accl[1] < 75) orient = 2;  // top on floor
  }
  else if (accl[1] > 110 && accl[1] < 140 && accl[2] > 110 && accl[2] < 140) {
    if (accl[0] > 180) orient = 3;      // left on floor
    else if (accl[0] < 90) orient = 4;  // right on floor
  }
  else if (accl[1] > 110 && accl[1] < 140 && accl[0] > 125 && accl[0] < 145) {
    if (accl[2] > 170) orient = 5;      // back on floor
    else if (accl[2] < 80) orient = 6;  // front on floor
  }
  return orient;
}

Step 7: Tests

I couldn't upload the video here; it kept giving me an HTTP error after the upload reached 100%. So here is the video I uploaded to Youtube instead.

Thanks, the link gives me a readable image!! Ralph

You're welcome. One tip on the sensitivity: you can adjust the variable "sens" to define the tilt angle sensitivity, and by changing the delay time at the end of loop(), just before all the LEDs are turned off, you can change the response time.

Here is what I am seeing!! Kinda fuzzy. Ralph

That's strange. Did you click on the "i" icon at the top left corner of the picture? Once you click on it, it will open the Instructables image library page, and there is a link for a full size picture just under the image. Here is the link: I'm using Firefox and have no problem.

Well, the Circuit Diagram is not readable even at full size in my browser (Chrome) or in the pdf. Ralph

Any way to get a readable Circuit Diagram? Also, any idea of the accuracy? How sensitive is it? Ralph

The pictures on Instructables are resized automatically, and there is an "i" button at the top-left corner of each picture. As I mentioned just under the circuit diagram, click on the "i" button for the full size diagram. As for the accuracy, I have no quantitative data, but as you can see in the video, a slight (I don't know how many degrees) tilt changes the LED state.

Neat.
This is the kind of project I envisioned when I did my instructable on making your own small Arduino compatible. It would fit right in this project. You could even merge the diagrams and print a circuit board for it.
http://www.instructables.com/id/Wii-Nunchuck-Arduino-Spirit-Level/
CC-MAIN-2017-30
refinedweb
2,000
61.67
Talking Point: Could Linux Abandon Directories In Favour Of Tagging?

Wont work..

Sounds negative, I know, but as with every other indexing system it will always be incomplete until every possible indexing option is exhausted. This is a problem that people with computers have been trying to solve for 50 years, and librarians for hundreds. The problem is not with tagging, files, inodes or URLs; it's the fact that the human part of the human-computer interface is a bit imprecise most of the time, and when it's precise, Heisenberg pops up and you're looking in the wrong place again...

Not Tags vs Files but Tags AND Files

I work with tagging professionally, and tagging rarely replaces a file hierarchy -- it rather complements it. Tagging is a tool to quickly find related information across space (i.e. file placement) and time. Another aspect that some people in this discussion seem to forget is the time searching a huge repository takes. If I try to search my 1TB drive with documents spanning several decades of work, it will take almost forever. And what if I didn't use the right search terms? What if the search term was misspelled in the crucial document? Indexing is obviously a way around the time problem, but that does not solve the problems of spelling, and it furthermore introduces another problem of when and how to index, oh, and the problem of space for the index, which can be rather substantial. For tagging to work, though, four requirements must be met:

1. Completeness
2. Consistency
3. Effectiveness
4. Ease

1. Completeness means that the tags must describe the contents completely (at least within the taxonomy chosen, which is one of the biggest problems in non-business tagging, as few people have any idea of what taxonomy to use -- even if they knew what a taxonomy is). If the tagging is not complete, it will be difficult to find the wanted information later and you may have to resort to searching, which means no savings, really.

2. Consistency means that two documents relating to the same subjects should use the same tags. This is extremely difficult with manual tagging, as practice has shown that no two humans would tag the same document in the same way if the document has the least bit of complexity to it. (And the same person would not tag the same document the same way at different times.) Consistency also means that there must be a tag repository containing the "approved" tags with a thorough explanation of their meaning and use, which obviously would be a problem in the average person's daily use, as nobody would actually read the explanation.

3. A tagging system must be efficient, i.e. changes in the documents should immediately be reflected in their tags, and changes in the underlying taxonomy (e.g. adding a new tag) should also be reflected immediately, without taking too many system resources. This means that tagging should be considered dynamic and not static.

4. Tagging must be easy to use! If it takes almost as long to tag a document as to create its contents, nobody would do it.

In my experience, tagging is difficult, error-prone, time consuming ... and necessary. I think it is important that the people working on tagging in different systems should get together and start defining a general taxonomy that would cover the most obvious everyday use. When that is done, the systems could begin to automatically scan the documents and tentatively assign tags that users could approve or possibly change. The users should obviously be able to add their own tags too, but these should probably be separate from the "official" tags. And yes, I know there would still be numerous problems with this approach.
Oh, and tagging should be language independent, meaning that the tags should be represented in some language-independent format and presented in the user's chosen language, which means that I, with documents in several languages, could find my documents by tags, independently of their language.

tagsistant

The following actually implements what was described as a fuse fs...

Dynamic Folder Trees

The basic idea is that you would have a virtual file system created from the tag names, and the file would show up everywhere that it's relevant. Application awareness would be built in because applications already know how to deal with files. The user just has to navigate the tags. The back end would have to do several things:

1. Manage real file names to prevent clashing names
2. Manage a database of files and their associated tags
3. Create a virtual file system based off the tags
4. Control after how many levels to start displaying files... Obviously most gui programs would lag out if you had all images/music within the root tag folder for images/music... assuming a very large collection.

The biggest pitfall is naming the file and storing it on the real file system. The most logical approach seems to be:

1. Store the files in a one-level-deep tree with the levels based off file type/main tag (e.g. image, music, document). Obviously browsing this real directory with gui programs may lag... but at least the file is not lost if the database becomes corrupt.
2. Name the real file with a key_value-descriptive_title, again so it's not lost if the database corrupts.

The great thing is that writes to the virtual file system using an application will automatically tag files. One odd thing to deal with in the back end is how to manage 'move' and 'copy' commands within the virtual file system... perhaps:

1. move would erase all tags and repopulate them based on where it's moved to.
2. copy's function would have to remain the same for the user to not get confused...
that being, a new real file is created with new tags and a new key value for where it was copied. 3. A new command would have to be created for CLI users to append tags without actually duplicating the file. To my understanding, a tag-based system similar to this, using a virtual file system, is already possible with the current kernel. The daemon just needs to be written. Special GUIs could come later.

Nepomuk

I am surprised you do not mention Nepomuk. It appears to be exactly what you are wishing for and it is already here. You can tag all of your files, if you have the time and inclination (a huge undertaking). Unfortunately, there presently does not appear to be any way to save all of this work, so when you reinstall the operating system (or move a file?), the information would likely be lost and you would have to retag your archives all over again.

Storedwares tabulation

Access to the data/code hohlraum, (multiply)encrypted/NOT, usually organised by a hierarchical file system with definite (meaningful?) names, is often supplemented by user tagging if one browses through a typical user space. As such, tagging is omnipresent. One can only imagine what chaos would ensue after a storage failure. Look in lost+found after such an event, and recovery of tens of files/fragments (disk blocks?) (if possible) would be very time consuming. (Backup data!) One can also see that (language) translation would probably lead to unsatisfactory tagging. Professional "jargon" may also cause misinterpretations: in short, a real Tower of Babel. Best to stick with the traditional approach until we are all identical robots speaking a common language, having a uniform education and assigned to a beehive society without social mobility... usw.! Is one really looking at a REGISTRY? I was hoping to see that vanish once and for all.

Use Both

I have a feeling that for years to come the current method and the tagging method within metadata will both be supported.
I would think that most readers of LJ would be from the group that sticks with a hybrid method: put it in its correct place (e.g. Documents, Music, Photos) and then tag the file itself. Users that aren't so eager to get their hands dirty and actually access system-type files could be converted to taggers more quickly. I like Renich's idea of having the manager do some of the tagging for you: if you insert the file in directory Documents/linux/magazines, the file gets tagged as the directory was spelled out. That would help with my laziness. I had a similar idea about 2 years ago with music. I would like to have a music player that pulled tag info from .ogg files and automatically inserted files with certain tags into a playlist. Then I wouldn't have to build playlists or add songs to a playlist file; the player would notice tags and build the playlist for you. This is doable with ogg because of the openness of the tags, and not mp3. God loves a working man, don't trust whitey, and see a doctor and get rid of it.

Tagging is not enough

Tagging is one way of finding things, but it has its drawbacks. It must be accompanied by a way of gathering objects into groups. For example, if I have a bunch of pictures from my last vacation, I wouldn't want to tag each and every one of them. Rather, I would expect to see them all in one location. I deliberately used the word "object" and not "file". The high level user, in my opinion, should be able to treat some kind of compound documents (think of a rich HTML e-mail that comes with attached images), and leave the file abstraction to programmers. Only then can tagging and grouping work.

Windows 7

MS is having another shot at it with Windows 7. The default file search/file explorer index unit is the "library" (the tag) rather than the folder. By default, this doesn't work for me, because the default tags are 'music', 'video', etc. I'd want 'projects', 'applications', 'virtual machines', 'code', etc.
It does completely hide the 'folder' structure from the end user. I guess Desktop Linux lags behind the competition in this area.

for a third party file manager perhaps

i don't see the value in adding tags to a filesystem in any way, but could see that some users might find value in a file manager that incorporates tagging. personally i would not want to label files, in the same way that i use device names or UUIDs rather than labels when referring to hardware. meta-data just adds a level of ambiguity that i don't need.

Directories vs. tags

I've had to use SharePoint at work for 2 years now, and if that's the way tags work, then I'm sticking with directories. There is no way to just drag-and-drop files around. We have inherited a pile of historical stuff that I could sort out quite quickly if I could just 'pull and push', but I just can't. We're in the process of dumping SharePoint, and spending a large sum of money and time to put all our files and folders back into the older format that we all know and can work with.

Smart reduction of the tag-cloud

How about the possibility of reducing the tag-search to the structure below a selectable folder? If I'd like to see the pictures for the tags "Birthday", "Marina" and "fire", I'd only select the ~/pictures folder, and if I want to hear/view the music/video for the tags "AC/DC" and "Hell", I'd select the folder ~/music or ~/video. If I want to see the slideshow of the last vacation, I'd select the real folder "~/pictures/own/20100815t0905 Scotland" via the file tools. So I can have the best of two worlds, the tag-cloud and a folder hierarchy.

yes...

I like the approach of moving etc. of files while only moving a camouflage symlink and not the file itself. This idea of an xml-like file system is very much compatible with tagging.

this is another approach:
Whenever I had to search for something I spent more time figuring out which tag I used than following a logical tree structure. I rarely have trouble finding files in directories.

I for one wouldn't like a filesystem based on tags

I've done that ...

I've implemented such a tag-based filesystem in my thesis. It replaces directories with tags, thus there are no directories anymore. So, as an example, if you save a file in /music/dance/mp3/, you will also find it in /dance/mp3/music/, /mp3/dance/music/, /mp3/music/, /music/, .... and so on (btw. it also has a new kind of metadata system). It's very nice to use, but it's not ready for serious usage because it's way too young... I think most people don't see the most important issue with tagging: the number of tags grows significantly over time. While new directories were hidden in other directories, new tags are added to a large number of existing tags, so maybe an 'ls' will list you a few hundred tags someday. Another major issue is when files are ambiguous. For example, one file called "file" is located in /a/b/c, and another with the same name is located in /a/b/d/; listing the contents of /a/b/ will result in two files with the same name. I solved these problems (as well as possible :)), but I think most people seem to forget about these issues when talking about a filesystem with tags. If someone wants to know more or wants to test my filesystem, feel free to ask me here or via mail: lucidfs (at) timmjati.de

tags are already obsolete

Three words: Beagle. Spotlight. Google Desktop Search. OK, so that's five words, so sue me. Why sit there laboriously adding tags to thousands of files? Isn't that the kind of manual-labour drudgery computers were supposed to liberate us from? Rather, let the computer read the actual content of the file and create a database of files and what they contain.
This development is still in its infancy, but for text-based files, at least, it works; I can find any text file on my Mac in seconds. If your text-based file does not contain the data you need to find it, then you may need extra tutoring in prose composition :-) Images are more of a challenge, but already there are applications that you can command to look for "the mostly red image that I created about two months ago": I'm not aware of any effort to apply this to music, but in principle it should be possible to analyze an mp3 and then ask for "a piece in 3/4 time at a slow tempo" or whatever musicians need to search for. Beagle does much the same for my PCLOS netbook. All of these work just fine with tags, but the tags are supplemental, not the main show. Let the machine do the work!

symbolic links

When I need to categorize items in more than one way, I use symbolic links. That method enables me to have a correlation between my file structure and the physical arrangement of my hard disk. If you adopt tagging, I hope that you offer it as an option that I can reject.

Lobotomy Project

The last idea about "everything is a specialized filemanager" is one of the pillars of the Lobotomy Project concept ( ). That was just an intellectual exercise, I never produced any effective running code for it (apart from some primitive proof of concept), but many of the components adhere to your proposal.

Tagging and those who can barely use a computer

As I read this article I keep thinking of a certain person I know, who is not very computer savvy, who is constantly having problems finding saved documents. The concept of a hierarchical file system is completely beyond their grasp. If everything could be poured into a single heap with tags that the computer would then use to pull out likely candidates for the item being looked for, this individual's time would be more efficiently used (as would mine, by not having to search for mis-filed documents for this person).
This would also allow for more compact data storage in one of my own projects, as I have a large group of PDF documents that need to be referenced by catalog number or any one of several other identifying characteristics. I can see where a well-developed tagging system would allow properly developed system software to begin to anticipate a user's next need based on the contextual relation of the previously searched items. ED

Not a new idea, but a good one

I remember a presentation from Microsoft about the future Windows Vista. They intended to implement a new file system, not hierarchical, based on a database. The whole disk would be a database for storing the data (no longer as files), with some kind of tagging system. Later they abandoned this idea, I don't know why. The idea is good, but it certainly is not easy to implement, or it is not simple enough to be used by normal users, or not efficient. Maybe it's time to have another look at it.

directories as namespace

Let's not forget that directories actually serve another important function of acting as a namespace for filenames. It allows us to have files with the same name but in different directories. Someone mentioned using UUIDs for filenames to avoid name collisions, but humans aren't good at identifying stuff using UUIDs. So perhaps we can use a tag (perhaps even a special tag) to identify files. However, doing so doesn't really solve the name collision issue (remember that we WANT them to be able to collide, as a feature) unless there are further rules and restrictions on such usage of the tags. We'll probably end up implementing a directory hierarchy using tags.

Why switch?

Currently, with absolute paths, the representation of files and directories on the filesystem is simple and concise. If you write /home/user/Music/lost-in-space.ogg, you mean exactly that. There are no ambiguities.
The vagueness of a tag-based filesystem worries me, and I feel that if this were to be implemented, it should be at the file-manager level and be completely optional. Back in my M$ Windows days, I could write metadata for files with Explorer, but I never did, because it was simply a waste of time as I never needed to search for files. And for the record, I dump all of my music in ~/Music, but I can see why that might be an issue for people with larger music collections. If I want a song, I just search for it using my audio player.

This could be big!

Imagine if Linux were to offer something Apple never even thought of, and M$ failed to deliver with WinFS, and even failed to execute elegantly with their briefcase, paperclip (wasn't that what it was called?), and 'Open an Office document' item on the Start menu. And it needn't require a database. We could just add a layer to an existing file-system, and expand on the use of the kinds of tags music and video files already use. That way we could eliminate the need to think of a unique name, or combination of path and name, for a file. I mean, clearly, people have trouble with this anyway, like thinking of a subject for an e-mail, so let's eliminate it altogether. Offer a description field/tag instead. Most people could cope with that without straining their creativity muscle too much. Locating an existing file could be reduced to establishing criteria via point-n-shoot instead of remembering filenames or navigating folders. We could even expand on the 'Recent Documents' menu idea, adding 'Documents by you', Spreadsheets, Images, Videos, etc.

It already is

"And it needn't require a database. We could just add a layer to an existing file-system, and expand on the use of the kinds of tags music and video files already use." We already did. They're called xattrs. Look down a bit in the comment thread...

Not quite the same thing. xattrs are in the FS, not in the files themselves. Nor can they obviate filenames.
M$ was trying to adapt the Pick file-system to Windows as WinFS for this very reason, but they couldn't pull it off, for undisclosed reasons. Were Linux to succeed where M$ failed, it could raise some eyebrows.

RE: Pick file system

The DOS-based DBMS called "Advanced Revelation" used the Pick AMV system, and I used it professionally for several years. It could easily combine other systems' "parent-child" table paradigms into a single table, where violations of the 3rd Normal Form were handled with multi-valued fields. Each table had a "dictionary" which stored the field names and symbolic definitions. Keeping the dictionary and its table synchronized, along with the multitude of indexing hashes needed to circumvent the speed issue, gave AREV problems when it was applied to larger or more complicated data sets. Even with the indexes, speed was always a problem. AREV evolved into a GUI product called "Open Insight", but it was considerably less stable, especially in a Windows environment. I immediately abandoned it for better tools. Open Insight is still around but occupies a niche market space. I can understand why Microsoft abandoned it. So did I. I can essentially do the same thing with PostgreSQL now and enjoy both speed and stability.

> xattrs are in the FS, not in the files themselves.

Right. Unix treats all files as a "bag of bytes", and this concept is so central to it that I very much doubt you could use anything else but xattrs.

I can't really see doing away

I can't really see doing away with directory structure. Sure, it's fine when all you need is a file browser, but it matters when you want to actually manage files: copy, delete, back up, compare, or find something when you are stuck at the command line, etc. Instead of asking if we can do away with directory structure, we should be asking why programs that handle files of a particular type (image, video, etc...)
don't provide an organize feature where you set up the hierarchy you want based on relevant tags you use, then choose the directory to use for them. Similar to the way you tell Amarok the directories where you store your music ('/some/directory/cds_i_own/', '/some/directory/legal_downloads/', etc.), which then show up as collections; at the time you choose to import/organize a selection of files, you then choose the collection where you want to place it. I use Amarok as an example because, while most music management programs seem to have figured this hierarchy thing out, Amarok is the only one I can think of off the top of my head that lets you specify multiple directories and choose which to move the files to when you choose import/organize. Somehow F-Spot and the other image programs in that same category all seem to be screwed up in some way or another: mangling the EXIF data without asking first, poor organization (or the lack of it), no option to leave files in their original location, screwy importing, being unable to view files without importing them first, etc. As Zeitgeist, Nepomuk, and related technologies start to become more widely used, maybe what to do about organization can be revisited, but sane organization needs to be there in the background even when tags, dates, file types, etc. are used in apps and file browsers to sort/filter the display of files, if for no other reason than so that when something happens you have a sane way to get to the stuff that's actually important to you, instead of having a big F'ed-up mess to wade through. Later, Seeker

Actually, if you think that a

Actually, if you think that a file belongs in multiple folders, you should make a hard link (ln without '-s', where '-s' stands for symbolic, the opposite of hard). In fact, the original placement of a file is hard link number one. You may now call these links "tags", if you so choose.

Old news...
Electronic Document Management (EDM) and/or Document Management Systems (DMS) have been around a long, long time (see:). A core feature of most of these systems is to abstract the underlying file system paradigm from users and allow for tagging (i.e. multiple logical locations for documents). What you describe is an integration of DMS into the desktop, and it can be handled in software. There is no need to change the underlying OS file system structure.

I completely agree with this,

I completely agree with this, and I think this is the way things will develop; people must learn to use these tools. Once I used to have to go through directories to find PDF documents, of which I have thousands of e-books and reference papers; now I use an e-book manager called Calibre and I can tag to my heart's content and find things in a flash, and still get to keep my folders. There is no need to replace them; in fact, Calibre can automatically organise the underlying folder structure according to the database information. Some media players can do some of what Calibre does, and this is the case for most photo managers as well.

usefulness, examples, implementation

The most widely used application today that showcases tagging is Gmail. Unfortunately, most Gmail users still mentally treat Gmail's labels as folders. Indeed, I still give most of my archived emails only a single tag/label, but when I do give a mail message multiple tags, it's quite useful later. Tagging, as a general facility, would be a great addition to a filesystem. Although there are several background apps for Linux that implement tags in user space, the problem is that these solutions are inherently brittle. Likewise, brute-force search has problems: its indexes quickly get stale, and its periodic re-indexing doesn't scale for large amounts of data. (I've got a 1TB NAS at home, which isn't that unusual nowadays.) The true solution is to implement tags/labels as an extension to POSIX filesystems in the kernel.
Someone has suggested using xattr, but that doesn't work on its own: you need to be able to query the filesystem to find not only (a) what tags does this file have, but also (b) what files have this given tag (or tags). And trying to solve (b) using the `find` command isn't viable because it doesn't scale either.

Gmail/GoogleDocs

Actually, I think a better example would be GoogleDocs. Speaking for myself, I rarely tag an email; if I did, it would be two tags at the most. I use GoogleDocs as a backup of all my important documents (I use Picasa for the images; easier to resize). I find that I use multiple tags for documents there. I agree with the author that tags are the way to go, but I think the browser shouldn't do away with hierarchy but complement it. The biggest problem that I see, though, is that every filetype would have to implement IPTC or something similar, as anything based on a database would be too fragile: you would have to back up both your files and the database. IMO files should be OS (and database) independent. JPEGs are a good example of this. You can write all your tags to the file and have it read by any program that supports IPTC. So I can tag in Picasa and have it read by DigiKam, Lightroom, Flickr etc. I would suggest that as software gets better at reading the contents of a file and at facial recognition, the software could suggest tags based on content. The software could read the document and suggest JohnSmith & Invoice as tags based on content. It could also suggest that you tag a photo BrianMurphy, EmmaCullen etc. based on previous tags and on recognising faces, e.g. facial tagging in Facebook. The Windows Vista/7 browser touches on this slightly... when you go from documents into pictures you are automatically in a photo organiser. Great article and even better idea.

folders vs. tagging -- a case for both

I'm no expert in "Library Science" or "information science" or similar, but I feel certain that those folks have good things to say about both approaches to information storage and retrieval. In an office (business) or similar setting, there is much to endorse a document --> folder --> drawer --> cabinet or similar paradigm. For example, my latest tax forms would likely be stored this way. In contrast, a tag saying "this year's (2010) taxes" would not only locate those tax forms, but would likely locate a huge number of receipts, transaction records, and other working documents that were used in preparation of those same tax forms ... as well as the forms themselves. When coupled with a "tag cloud", such an approach is powerful during analysis. THEREFORE -- This reader (writer) believes that there is a place for both systems of information categorization and storage and retrieval. Respectfully, ~~~ 0;-Dan

waiting for years

I've been waiting for half my life for it to dawn on someone besides myself that this hierarchical folder structure is not a very good paradigm for data storage. Yet each new file system that comes out repeats the same tired old formula, as if no one has any real imagination anymore. ButterFS and ZFS are still based on this paradigm as far as I know. The same old tired and hackneyed paradigms persist on the desktop too: all the desktops (KDE, Gnome, etc.) are based on the same old basic WIMP paradigm of yore. When will someone do something genuinely innovative in IT?

It *was* done. Years ago.

The concept of tagging files was thought of, and implemented, years ago, in the late nineties, in a filesystem called BFS. The OS was called BeOS. It worked. Very well. Alas, BeOS and Be, Inc., fell by the wayside. However, a group of enthusiasts has been resurrecting BeOS, implementing it as open source. It's called Haiku and can be found at.
It's currently at Alpha-2 and is, by all reports, surprisingly useful and keeps getting better. I used BeOS as my primary desktop system until it was just too far behind. Fortunately Debian had evolved enough to nearly replace it; I've been using Debian since for all my work. But as soon as Haiku is about ready for prime time, I will probably switch back. Indexing/attributing/tagging were part of BFS from the beginning; part of the idea was to incorporate a database in the FS. I could create a search for certain parameters and save it. Every time I opened the search, it would have the up-to-date info in it. It still used the traditional hierarchical directory structure (THDS). THDS makes sense, performance-wise. Imagine storing everything in a single directory (as the original MSDOS did). It would take a lot of disk I/O to get a file's info in order to open it and execute or edit it. The THDS is still a good way to store files. Finding files is a different matter. That's where indexing and tagging come into play. If you had an MP3 that fit five different genres, you would add all five tags. Search for a specific genre and all files fitting that genre would be found. BFS and its indexing worked very well. It was one of the neater features of BeOS. They did it back in the nineties. And it's coming back to life as Haiku.

Folders vs Tags

There is something in tagging that makes it interesting. I sometimes prefer tags instead of folders. If I have many items that fit multiple folders, it is difficult to decide where to put them. I can assign many tags to one item; for instance, the same file is 'private' and 'important' and 'project vacation'. The key is to have a predefined set of tags. The tags might even form a hierarchy. In my Thunderbird, I have totally abandoned the use of folders and use tags instead. With the use of a small add-on (tag-toolbar) and my predefined set of tags, it is easy and very quick.
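The "many tags per item" workflow described in the comment above boils down to a two-way index: given a file, list its tags; given a tag (or set of tags), list the matching files. This is exactly the pair of queries an earlier commenter said xattrs alone can't answer. A minimal in-memory sketch (all names here are invented for illustration, not from any real tagging tool):

```python
from collections import defaultdict

class TagIndex:
    """Toy two-way tag index: file -> tags and tag -> files."""

    def __init__(self):
        self.tags_of = defaultdict(set)   # file -> set of tags
        self.files_of = defaultdict(set)  # tag  -> set of files

    def tag(self, filename, *tags):
        # Record the association in both directions so either
        # query direction is a simple lookup.
        for t in tags:
            self.tags_of[filename].add(t)
            self.files_of[t].add(filename)

    def files_with(self, *tags):
        """Files carrying ALL the given tags (set intersection)."""
        sets = [self.files_of[t] for t in tags]
        return set.intersection(*sets) if sets else set()

idx = TagIndex()
idx.tag("report.odt", "private", "important", "project vacation")
idx.tag("beach.jpg", "project vacation")

print(sorted(idx.tags_of["report.odt"]))
print(sorted(idx.files_with("project vacation")))
print(sorted(idx.files_with("project vacation", "important")))
```

A real filesystem implementation would have to keep both directions consistent and persistent across renames and deletes, which is precisely the hard part the "extension to POSIX filesystems in the kernel" comment points at.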
trouble with tags

The trouble with tags is that you need to be quite careful to consistently label like files with like tags. This is a difficulty I encounter when blogging, using LibraryThing, or even tagging my MP3s. I don't really want this hassle on my desktop, nor do I want to maintain some kind of authority file telling me what tags I can use, which is the only way of overcoming this hassle that I see.

A nice way that a lot of

A nice way that a lot of websites handle that is through a drop-down list of previously used tags, with a fallback to creating a new one. I think it works really well because you keep the consistency (you'd always select a tag from the list before you'd bother creating a new one) and also the flexibility (you can create a new tag if you need it).

Why?

"Why is it that web based technology such as online bookmarking makes far greater use of tagging than the Linux desktop does?" Because it's easier to write an article about what should happen than it is to write the code to make it happen. (As previous commenters have pointed out, various groups have already been working on this for years, but it's not a simple problem to solve. The Semantic Desktop effort is probably the furthest along.)

Filenames are tags enough for

Filenames are tags enough for me. It takes some discipline to find appropriate names, but that is the same discipline needed for making tags. Why exactly invent a new system where the solution demands just as much systematic attention to detail as the current system?

tagging for files -- very cool

This is a very cool idea. I don't know how it would work in practice at a file system level. But with my limited experience with blogging, tag clouds make it easy for me to find what I am looking for. I have actually set up a WordPress blog for my internal network at the house. I store tidbits and hints I've downloaded from various places on the internet.
I find it much easier searching through the tag cloud than having to search all over my hard drive with various find/grep incantations. Obviously, I don't do my photos like this. I use Picasa for that. Picasa also lets me tag my photos so I can search on tags. Tags integrated into the filesystem would be useful, at least for user files.

Business is the place with

Business is the place with the most need for tagging... in a directory system, a document out of its proper folder is lost. Therefore all documents need to have a unique identifier of some type, and that usually includes a project or theme name, a date, some description of the particular document, etc. Therefore, most businesses have already tagged every document... but then they duplicate efforts and abandon the benefits of tagging by reverting to the directory hierarchy for storing the documents. Worst of all is to then use a pre-defined set of subdirectories for all projects with overlapping or ambiguous names. The problem with that is that you cannot tell by looking at a folder whether it contains files or not, and certainly cannot tell if it contains the files you want, so: click, click, click, click. What a time-waster!

The answer is BOTH!

Keep the file system. It is efficient and clean for everything except finding files. Add tags to find files, or even better, don't. Search. The answer is to keep using the file system, as it has been refined over the last few decades. If you only care about tags, and do not like the naming structure, give the option to generate a file name (UUID) and drop it in the big pile-of-data directory under your home. Then use a full /home/user search, with matches on tags coming before matches on full text. The best of both worlds, and without re-writing a single application or OS utility. Just improve the search and indexing within the file browser, and the job is done.
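The "UUID filename plus a big pile of data" scheme just described can be sketched in a few lines. In this toy version the tags live in a JSON sidecar index rather than in the filesystem itself; the directory layout and function names are invented for illustration:

```python
import json
import uuid
from pathlib import Path

PILE = Path("pile")          # the "big pile of data" directory (hypothetical)
INDEX = PILE / "index.json"  # sidecar tag index: uuid-name -> metadata

def save(data: bytes, description: str, tags: list) -> str:
    """Drop a blob into the pile under a UUID name and record its tags."""
    PILE.mkdir(exist_ok=True)
    name = uuid.uuid4().hex
    (PILE / name).write_bytes(data)
    index = json.loads(INDEX.read_text()) if INDEX.exists() else {}
    index[name] = {"description": description, "tags": tags}
    INDEX.write_text(json.dumps(index))
    return name

def search(tag: str) -> list:
    """Return UUID names of every pile entry carrying the tag."""
    if not INDEX.exists():
        return []
    index = json.loads(INDEX.read_text())
    return [n for n, meta in index.items() if tag in meta["tags"]]

n1 = save(b"taxes 2010", "tax forms", ["taxes", "2010"])
n2 = save(b"receipt", "hotel receipt", ["taxes", "vacation"])
print(search("taxes"))     # both entries
print(search("vacation"))  # just the receipt
```

As the Gmail/GoogleDocs comment already warns, a scheme like this is fragile in exactly the way databases are: you now have to back up both the files and the index, and keep them in sync.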
Erm, a lot of the comments

Erm, a lot of the comments seem to have missed the OP's point that the tags would only apply to visible files in the /home directory. Also, the directory structure wouldn't be gotten rid of. Just enhanced. So, I'd say it's a good idea. As long as it is implemented well, and not forced on us, then it could be extremely useful.

No, we didn't miss the TITLE

No, we didn't miss the TITLE that said "could Linux *ABANDON* directories in favor of tagging" (emphasis mine). He did not say augment, he did not say enhance, implement alongside of, or any such statement. That title implies a complete replacement of directories with tags. What would work for both is to implement something like OS/2's "extended attributes"; probably the one thing left from OS/2's Workplace Shell that would still be useful to open source.

too many trees

When I first ventured into the world wide web, I did not really have a clue what to do with all the information that is just there. Being a self-educated individual, I got most of my information from the web, books, online fora and articles. Wanting to keep track of the vast knowledge on the web, I started resorting to bookmarks, to the point that I now have several hundred of them in my Firefox profile. Like regular files, I decided to structure them in separate folders, and over time this became a huge forest where it was difficult to see the trees, until I considered tagging my bookmarks, which made it a lot easier to find them again later or even be aware of them after a long time. I definitely see a lot of merit in the tagging system for personal files, as it would make searching the forest a lot easier and less time consuming.

Delicious Bookmarks

I used to do the same, but now I store all my bookmarks in Delicious and use tagging there. I found it better, as I would have different bookmarks saved on my home and work computers; much easier to find.
Harken to 1932

Funny thing is that the issues described by this article with current directory/file structures existed before computers. Example: where would you look for a memo concerning 1930s construction techniques for dam building? In a project paper-folder for the TVA, or perhaps the "Internal Memoranda" folder filed by author? The difference then was that companies employed professionals to sort out that mess. Personally, I grew up with the Dewey Decimal system for libraries. It seems that is a better model overall. That system already has a "cross reference" component built in. Log onto your local library's online catalogue system. It usually is regional, and you can generally find your targeted media of interest in a few seconds or a couple of minutes. I, at the opposite end, can spend up to 40 minutes hunting and pecking on my PC for an Excel spreadsheet.
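The hard-link suggestion from earlier in the thread (`ln` without `-s`) can be tried directly. A sketch, with invented tag-directory names; `stat -c` assumes GNU coreutils:

```shell
set -e
base=$(mktemp -d)

# Each "tag" is just a directory; tagging a file = hard-linking it there.
mkdir -p "$base/tags/vacation" "$base/tags/family"
printf 'photo data' > "$base/beach.jpg"
ln "$base/beach.jpg" "$base/tags/vacation/beach.jpg"
ln "$base/beach.jpg" "$base/tags/family/beach.jpg"

# All three names point at one inode, so no data is duplicated and an
# edit through any one name is visible through the others.
links=$(stat -c %h "$base/beach.jpg")   # GNU stat: %h = hard link count
echo "link count: $links"
```

The obvious limits, which the directory-camp comments also imply: hard links cannot cross filesystems, cannot be made to directories, and answering "what tags does this file have?" still requires scanning every tag directory.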
http://www.linuxjournal.com/content/talking-point-could-linux-abandon-directories-favour-tagging?quicktabs_1=1
First, a correction:

> Tcl: The Convenient Language

was coined by Will Duquette, not Reinhard Max, as I originally thought, and (mis)reported last week.

QOTW - "[W]hy have a computer do all this automatically when you can waste a human's time[?]"

"Relatively few people have a deep understanding of the entire language--but that's true of any language." Will Duquette

"For parsing XML, the right answer is almost always, 'Use a real XML parser.'" Joe English

Though it has been going on since the 26th of Sept, "CANTCL open for business" has been discussing the on-going need for some sort of central web depository and now has quite a few interesting posts bearing on this important topic:

Donal provides a good refresher course in namespaces, variables, and how one relates to the other:

Bob Techentin brings up issues in using itcl with safe interpreters:

Kevin Kenny clarifies reentrancy issues:

The default font chooser leaves some cold:

Battling paradigms: C has "static", how does Tcl do it?

BWidgets is still around, and Mark Saye is looking for input on where to take it:

"Selling Tcl and scripting" contains good advice on advocacy issues, or "How to Convince a PHB to Let You Do the Right Thing":

Starkits (the packaging system formerly known as Tclkits) continue to develop:

Announcements: Cris Fugate announces his new FrameSets and FrameAgents packages. Like the recently announced Snit, FrameSets are a delegating paradigm, allowing one to build object-like "things" that are actually implemented with the help of previously-written code. FrameAgents allow one to use FrameSets to build mobile agents.
Peter Baum announces the latest Gnocl:

Early release of "Tk_Bugz", which Jeff Godfrey describes as an incomplete (but developing) version of the old favorite "Galaxian":

Would it be nice to eliminate all the middleware and just build the entire system in Tcl? Yes? How about we start with a processor just for Tcl:

And thanks to Arjen Markus for his help reporting on Wiki action:

Tcl User Groups:

Rotating a photo image on a canvas:

Scripting compilers?

Concurrency concepts:

Examples using TclHttpd:

Tcl and Other Languages:

What IS a "scripting language" really?

Games at the Wiki:
- It is still a bit rough, but now packed up into a starkit, easier than ever: Tk_Bugz is a rewrite in Tcl/Tk of the game "Galaxian"; consult <>. Lots of discussion on how to package games too, as a bonus.
- Simple and sophisticated strategies with this game, known under a legion of names: Dots and Boxes either requires two humans, a pen and paper, or a computer and a starkit. Just read at <>

Tutorials for the experienced Tcl/Tk programmer:
- Exposes on various Tk subjects:
- <> describes how to handle "keystate": the shift/alt/control buttons.
- Internationalisation can be fun, especially if you need more than a few basic tricks, as when dealing with Japanese: <>
- Exposes on various Tcl subjects:
- Safe interpreters are explained in detail at <>
- Not sure about the commands uplevel and upvar? Take a break with <>
- Oh, oh, yet another page on objects, this time a collection: <> (it links to two new pages)

Pieces to complete the puzzle:
- Interacting with databases: you just need to know with which DBMS and what you want to do, <>
- Regular expressions keep us amazed and amused, <>
- When on Windows, ever wanted to bridge the gap to COM? Now you can, <>
- Dealing with external programs and you can not use Expect for some reason? Do not worry, <> may help you create the right interface.
http://lwn.net/Articles/12076/
On Thu, Aug 16, 2001 at 02:48:50PM +0200, Neal H Walfield wrote:

> > > If so, then I fail to see the continued utility
> > > of ped_device_open/ped_device_close. Actually, as far as I can tell,
> > > right now, they really are not open and close methods, but rather, a
> > > reference counting system that you use to safely free nonvolatile
> > > resources (e.g. you can get another file descriptor since you save the
> > > path to the block device).
> >
> > Correct. It allows us to free resources, if they aren't being used
> > by libparted. However, we can't free these special resources (and
> > get them back), so they should only be destroyed on ped_device_destroy().
>
> Free resources, what does this mean? You are trying to do reference
> counting and yet, you are calling it opening.

It's not ref-counting, it's use-counting. This isn't really the same
thing as opening, agreed.

> Look at the current scheme
> in do_select, you do a ped_device_new (via command_line_get_device).

A ped_device_get() (minor detail...)

> Next, you explicitly add a reference (i.e. open_status ++) using
> ped_device_open. You then ped_device_close the old device (to drop the
> user reference). There is no complement to ped_device_new.

You mean to ped_device_get(). This is intentional. I wanted there to
be a list of all devices available.

> And this is
> the only time that you use ped_device_open (except in _choose_device,
> but that is just a modified do_select); this is really an internal
> function. In fact, you do not even need to call ped_device_open here,
> this is only a speed optimization and, therefore, a hack -- you are
> using the internal reference counting system.

Hmmm... maybe it should be moved into do_*(), and I should make all
parted commands atomic. (I.e. re-read partition table, flush cache,
etc.)

> In my opinion, what you
> should really be doing is something like the following:
>
> do_select ()
> {
>     ...
>
>     new = command_line_get_device ();  /* i.e. ped_device_new wrapper */
>     if (! new)
>         return 0;
>
>     ped_device_destroy (*dev);
>     *dev = new;
>
>     ...
> }
>
> ped_device_destroy would then drop the _user_ reference (i.e. not the
> libparted reference) to 0. Maybe names such as ped_device_use and
> ped_device_done would make the intent clearer.

I have no objections to that.

> Either way, I really think that you are trying to implement unix style
> vtables and are just missing this because there is no (or easy
> globalized) peropen state.

Nah... peropen state is bad IMHO. It is a vtable, but not unix style,
intentionally.

> Consider the following: you are using a private name space that is
> maintained via the _device_register and _device_unregister functions
> (which turn system filenames into local inodes). You access these
> filenames using ped_device_{get,new}. The PedDevice names the resource.
> And then, you stop there and allow the kernel space (libparted) and user
> space (parted) resources to mingle and confuse the issue.

I'm not trying to model Unix here. We don't need per-open metadata.

> Allow me to propose to you the following: I want a peropen hook for
> whatever reason (and I think that there are a variety of good reasons to
> do this). I would like to be able to do something like this:
>
>     PedDeviceName *dev;
>     PedDeviceOpen *po;
>
>     /* Map filename from global to local namespace. */
>     dev = ped_device_get ("/dev/hda");
>     if (! dev)
>         return NULL;
>
>     /* Open it so that we can call functions */
>     po = ped_device_open (dev);
>     if (! po)
>         return NULL;
>
>     /* Set local state. */
>     po->hook = ...
>
> Now, PedDeviceName is completely opaque (current public data should now
> be accessed via methods). Then, PedDeviceOpen might look like this:
>
>     struct PedDeviceOpen
>     {
>         PedDeviceName *dev;
>         void *hook;
>     };
>
> And PedDeviceName can do user reference counting, i.e. the number of
> user opens. Additionally, this scheme makes libparted reference
> counting (mostly) obsolete because you cannot pass a PedDeviceOpen that
> you have called ped_device_close on. But maybe we do not want to do
> that, and instead allow a user to open a resource, launch a thread, and
> then close the resource and let the thread continue. This would be like
> creating a file, then opening and deleting the file. The vnode still
> exists; however, the file is now gone.

What's the motivation for all this? What's wrong with ped_device_open()
as it stands? Do you actually plan to use PedDeviceOpen.hook?

> > Anyway, I like ped_device_close() being equivalent to ped_device_destroy()
> > is an even worse change (read: violation) of semantics.
>
> Huh?

s/^Anyway, I/Anyway, I don't/. Sorry.

Andrew
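For the curious, the use-counting contract Andrew describes (open and close adjust a count, and destroy is only legal once nothing is using the device) can be sketched in C-style code. The names mirror the thread; this is an illustration of the contract, not libparted's actual implementation:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

/* Illustration only -- not libparted's real code. A device carries a
 * use count: open/close adjust it, and destroy refuses to run while
 * the device is still in use. */
struct PedDevice {
    char *path;        /* saved so a new file descriptor can be obtained later */
    int   open_count;  /* the "use count" described in the thread */
};

PedDevice *ped_device_get(const char *path)
{
    PedDevice *dev = (PedDevice *) std::calloc(1, sizeof(PedDevice));
    dev->path = (char *) std::malloc(std::strlen(path) + 1);
    std::strcpy(dev->path, path);
    return dev;
}

void ped_device_open(PedDevice *dev)
{
    dev->open_count++;
}

void ped_device_close(PedDevice *dev)
{
    assert(dev->open_count > 0);
    dev->open_count--;
}

/* Returns 1 on success, 0 if the device is still in use. */
int ped_device_destroy(PedDevice *dev)
{
    if (dev->open_count != 0)
        return 0;
    std::free(dev->path);
    std::free(dev);
    return 1;
}
```

Under this sketch, a destroy attempted between an open and its matching close simply fails, which is the "free resources only on ped_device_destroy()" behavior being argued over.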
https://lists.gnu.org/archive/html/bug-parted/2001-08/msg00103.html
I previously wrote an introductory post about React Hooks called Playing Hooky with React that explored useState and useEffect, eschewing the need for class components. I also wrote a follow-up, Playing More Hooky with React, exploring why I prefer Hooks going forward for any React or React Native projects. As projects get more complex and stateful logic is shared among different components, custom Hooks can come to your rescue. As my blog title indicates, I want to take a deeper dive into the whys and hows of custom Hooks, so follow along with me!

Why Use A Custom Hook

Custom Hooks are useful when you want to share stateful logic between components. Keep in mind that state itself is not shared between these components, since the state in each call to a Hook is completely independent. That also means you can use the same custom Hook more than once in a given component.

In the past, the most common ways to share stateful logic between components were render props and higher-order components. Custom Hooks now solve this problem without adding more components to your tree. They can cover a wide range of use cases like form handling, animation, mouse and scroll events, timers, and lots more. Along with separating out related logic in your components, custom Hooks can help conceal complex logic behind a simple interface.

An Example of Using a Custom Hook

An example, albeit a contrived one, of when it's useful to extract stateful logic into a custom Hook is if you want to show your user an indicator of how far they've scrolled on a page or how much of an article they've read. That logic could live in its own custom Hook and be reused in the components where you want to show a progress meter or percentage (like a home page or article component).

Below is an example Article component that gets the window's scroll position in order to show the progress made via a progress meter.

import React, { useState, useEffect } from 'react';
import ProgressMeter from './ProgressMeter';

function Article() {
  const [scrollPosition, setScrollPosition] = useState(null);

  useEffect(() => {
    function handleWindowScrollPosition(e) {
      setScrollPosition(window.scrollY);
    }
    window.addEventListener('scroll', handleWindowScrollPosition);
    // Clean up by removing the same handler that was added.
    return () => window.removeEventListener('scroll', handleWindowScrollPosition);
  }, []);

  return (
    <div>
      <ProgressMeter scrollPosition={scrollPosition} />
      {/* ...code here for sweet article render */}
    </div>
  );
}

How to Build Your Own Hook

On the surface, a custom Hook is just like a typical JavaScript function. But there are some conventions that turn a normal function into your brand spanking new custom Hook, such as naming your function to start with use and the ability to call other Hooks. You can think of these conventions as governed by a set of rules. The React docs indicate that the rules of Hooks are enforced by an ESLint plugin that React provides. The rules are:

1. Only call Hooks from React functions
   - call Hooks from React function components
   - call Hooks from custom Hooks
2. Only call Hooks at the top level of your function
   - never call a Hook inside loops, nested functions, or conditions

Side Note on the ESLint Plugin

The ESLint plugin that enforces the Hook rules is eslint-plugin-react-hooks. If you create your project using create-react-app, the plugin is included by default. Otherwise, you can add it to your project with:

npm install eslint-plugin-react-hooks --save-dev

Name Starts with use

It's convention to name your Hook starting with use. And as you may tell where this is going, the ESLint plugin assumes that a function whose name starts with "use" followed immediately by a capital letter is a Hook. Repeat after me: always start your custom Hook's name with use!

function useWindowScrollPosition() {
  // ...
}

Calling Other Hooks

You may be wondering, "Couldn't I just have a regular JavaScript function with that functionality instead of building my own Hook?" Sure you can, but then you would not have access to Hooks within that function. Per the rules of React, there are only two places where you can call a Hook: a React function component and a custom Hook.

When calling other Hooks in your custom Hook, or even in a React function component, you want to keep them at the top level of the component. This ensures that the Hooks are called in the same order on every render.

Below, I've extracted the stateful logic from the above Article component into a custom Hook for reuse in other components.

// useWindowScrollPosition.js
import { useState, useEffect } from 'react';

export default function useWindowScrollPosition() {
  const [scrollPosition, setScrollPosition] = useState(null);

  useEffect(() => {
    function handleWindowScrollPosition(e) {
      setScrollPosition(window.scrollY);
    }
    window.addEventListener('scroll', handleWindowScrollPosition);
    // Clean up by removing the same handler that was added.
    return () => window.removeEventListener('scroll', handleWindowScrollPosition);
  }, []);

  return scrollPosition;
}

Using Your Custom Hook

Now that I've built my custom Hook, it's easy to use. You just call it and can save its return value in a variable in your components.

import React from 'react';
import useWindowScrollPosition from './useWindowScrollPosition';
import ProgressMeter from './ProgressMeter';

function Article() {
  const position = useWindowScrollPosition();

  return (
    <div>
      <ProgressMeter position={position} />
      {/* ...code here for sweet article render */}
    </div>
  );
}

React Hooks, whether built-in or custom, are a great addition for making your components more reusable and composable. Happy coding!

Resources

React - Building Your Own Hooks

Discussion (2)

Nice article Katherine. I was just about to write a post on custom hooks and I can see you beat me to it. Keep it up!

Thanks! Glad you found it helpful!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/katkelly/getting-acquainted-with-react-custom-hooks-2p1
So I want to approximate the area under the curve using the trapezoidal method. The limits of integration are 1 to 2 and the integrand is sqrt(x^3 - 1) dx. With 10 trapezoids, each has a width of 1/10. This is what I have so far in my code: I have evaluated the function from 1 to 2 in a loop, following this formula:

(width/2) * [f(x0) + 2*f(x1) + 2*f(x2) + 2*f(x3) + ... + 2*f(x(n-1)) + f(xn)]

Since the width is 1/10, x just increases by 0.1, going from 1, 1.1, 1.2, 1.3 all the way to 2. Now that I have those values calculated, all I have to do is add them up and multiply the sum by width/2, which in this case is 1/20. How would I do that?

Code:

#include "stdafx.h"
#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    double x;
    double number = 2.0;

    for (x = 1; x < number; x = x + 0.1)
    {
        cout << "\n\n\tThe value is " << 2 * sqrt(x * x * x - 1);
    }
    cout << "\n\n\n";
    return 0;
}
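To finish the calculation described above, accumulate the bracketed sum in a variable inside the loop instead of printing each term, then multiply once by width/2 at the end. Here is one way to sketch it; the function names `trapezoid` and `g` are mine, not from the original post:

```cpp
#include <cmath>

// f is the integrand; a and b are the limits; n is the number of trapezoids.
// Implements (h/2) * [ f(x0) + 2*f(x1) + ... + 2*f(x(n-1)) + f(xn) ],
// where h = (b - a) / n.
double trapezoid(double (*f)(double), double a, double b, int n)
{
    double h = (b - a) / n;
    double sum = f(a) + f(b);      // the two end points count once
    for (int i = 1; i < n; i++)    // interior points count twice
        sum += 2.0 * f(a + i * h);
    return sum * h / 2.0;
}

// The integrand from the question: sqrt(x^3 - 1).
double g(double x) { return std::sqrt(x * x * x - 1.0); }
```

Calling trapezoid(g, 1.0, 2.0, 10) gives roughly 1.506 for this integral. Stepping an index i and computing a + i*h also avoids the rounding drift you get from repeatedly adding 0.1 to a double.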
https://cboard.cprogramming.com/cplusplus-programming/154637-trapezoidal-rule-adding-values.html
Introduction: Deer Chaser

Protect your garden with the Deer Chaser Mk I. Guaranteed to protect your precious crops from voracious deer. All-weather operation. 100% effective.

What you will need:

Arduino Mega 2560
Motor shield
LEDs
Parallax Ultrasonic sensor
Servo
Various wires
Ribbon cable
Cheap Tupperware
USB wall charger
Arduino cable (3 ft)

Step 1: Wiring

We will begin by wiring the ultrasonic sensor and the LEDs. You will be connecting the wires to these pins: 20, 21, 26, 36, 38, 40, GND, GND.

Step 2: Wiring (contd)

Now we will wire the servo to the motor shield.

Step 3: Ribbon Cable

The great thing about ribbon cable is that you can use it as a substitute for a breadboard. Ribbon cables are far cheaper than breadboards, yet they can perform the same functions. We will be using a ribbon cable as a flexible breadboard for this project.

Step 4: Attachments - Servo

Now that we have our ribbon cable wired to our Arduino, it is time to make it useful. Gather your servo, ultrasonic sensor, and LEDs, because it's time to get them hooked up. As the title suggests, we'll be starting with the servo.

Step 5: Attachments - LED

We'll be hooking up the LEDs now. Fairly simple operation. If you are wondering where I got the LEDs, I got them from here -> It is a kit that works very nicely with this project.

Step 6: Attachments - Ultrasonic

Now we will attach the most important part of this whole setup: the ultrasonic Ping sensor from Parallax. You can get a Ping sensor from here ->

Step 7: Tupperware

We must protect the Arduino from the elements. This is designed to be used outside, after all. Feel free to use anything you want, but I found that the best thing to use was cheap Tupperware. I was able to get 5 containers (with lids) for around $3.00. This is good because we'll be cutting the containers anyway. We will need to make two cuts: one for the ribbon cable, and the other for the cable that will power the Arduino.
Step 8: Coding Time

Now it is time to write the code that will allow everything to work together. Fortunately for you, I saved you the trouble of doing it yourself and went ahead and prepared it for you. Before you begin, make sure to choose the correct board in the compiler. We are using an Arduino Mega 2560, and this code will only work with said board; it will not work with an Uno or any of the many other Arduino boards. To change the board in the Arduino compiler, go to Tools -> Board -> Arduino Mega 2560 or Mega ADK. Feel free to modify the code as you see fit.

/*
  Deer Chaser

  This sketch reads a PING))) ultrasonic rangefinder and returns the
  distance to the closest object in range. When something comes within
  the protected distance, it lights up a series of LEDs and starts a
  servo moving.

  July 28, 2013
  by: Brian J. Mays
  modified by:
*/

#include <Servo.h>

Servo myservo;               // create servo object to control the servo

const int pingPin = 26;      // pin number of the sensor's output

// adds in the lights
int RedLED = 36;             // LED connected to digital pin 36 (pwm pin)
int GrnLED = 38;             // LED connected to digital pin 38 (pwm pin)
int BluLED = 40;             // LED connected to digital pin 40 (pwm pin)
int LED[3] = {RedLED, GrnLED, BluLED}; // an array to make it easier to cycle through the LED colors

int deer_counter = 0;        // set the deer counter to zero

void setup()
{
  // initialize serial communication:
  Serial.begin(9600);

  // set the LED pins as output pins (lights)
  pinMode(RedLED, OUTPUT);
  pinMode(GrnLED, OUTPUT);
  pinMode(BluLED, OUTPUT);
}

// Standard PING))) conversion: sound travels about 1 inch per 74
// microseconds, and the echo covers the distance out and back.
long microsecondsToInches(long microseconds)
{
  return microseconds / 74 / 2;
}

void loop()
{
  // establish variables for the duration of the ping
  // and the distance result in inches:
  long duration, inches, maxInches;

  // Trigger the PING))) with a short HIGH pulse, then read the echo.
  // (This block was garbled in the original listing; it is restored
  // here following the stock Arduino Ping example the sketch is based on.)
  pinMode(pingPin, OUTPUT);
  digitalWrite(pingPin, LOW);
  delayMicroseconds(2);
  digitalWrite(pingPin, HIGH);
  delayMicroseconds(5);
  digitalWrite(pingPin, LOW);
  pinMode(pingPin, INPUT);
  duration = pulseIn(pingPin, HIGH);
  inches = microsecondsToInches(duration);

  // change for the distance to protect
  maxInches = 72;

  //Serial.print(inches);
  //Serial.print("in, ");
  //Serial.println();

  if (inches < maxInches)
  {
    deer_counter = deer_counter + 1;
    Serial.print(" Deer Counter ");
    Serial.print(deer_counter);

    // turn on servo
    myservo.attach(9);       // attaches the servo on pin 9 to the servo object
    delay(15);               // waits 15ms for the servo to reach the position

    // flash the lights
    for (int fade = 255; fade >= 0; fade -= 5)
    {
      analogWrite(RedLED, fade);
      analogWrite(GrnLED, fade);
      analogWrite(BluLED, fade);
      delay(50);
    }
  }

  // detach the servo to turn it off after the if statement
  myservo.detach();
}

Step 9: Check Work

All of the major components have been installed, the device has been assembled, and the code has been uploaded. Before we go any further, it is a good idea to make sure we have wired everything correctly. If you modified the code, make sure it works. If it doesn't, then I suggest using the given code for the sake of time. There will always be another chance to change it at a later date, but your plants cannot wait any longer.

Step 10: Enclosure

I will leave this final step up to you. There are many different ways to create an enclosure for this project, and everyone's garden situation is different. For mine, I used a simple wooden box, stuffed it with plastic bags, and lashed it to a metal pole.

Step 11: Deployment

It is time for the deployment of your Deer Chaser. Once you have completed this step, your garden will be protected from pesky deer and other easily startled animals. Congratulations on completing the project, and good luck in your never-ending fight against pests.

5 Discussions

1 year ago

It would've been great if, in the intro, you'd actually described what it DOES. Not its purpose - what it does: i.e. "when it senses movement (presumably a deer), it lights up some LEDs and triggers a servo motor, to which you can mount whatever you want to scare a deer away". Even after reading through the entire project, you don't ever quite say even that much. So, nice project (I think) - but let's just say if you were a company, you'd be all engineering, no marketing.
>;-) (And "marketing" does not mean sales - I'm thinking particularly of the part called marcom - marketing communication....)

7 years ago on Step 3

The BAD thing is, you are using a 40 pin connector, 80 conductor cable here. The concept is cool, BUT not all the pins on the connectors are actually connected in an 80 conductor 40 pin cable. If you were using a 40 conductor 40 pin cable, you could wire to your heart's content without worry. I just thought that needed to be pointed out!

7 years ago on Introduction

You give a list of the pins you connected, but you don't say what you connected them to. Could you add a schematic/wiring diagram? Thanks!

Reply 7 years ago on Introduction

Also, from the pictures I assume you are using the servo to move/rattle the "pringles" can. Am I correct?

Reply 7 years ago on Introduction

It does rattle the can. I'll add a schematic soon. I didn't realize that it is a bit confusing just looking at a picture of the wires.
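One detail worth spelling out for anyone modifying the sketch: the PING))) reports the round-trip echo time in microseconds, which has to be converted to a distance before it can be compared against maxInches. The stock Arduino Ping example does the conversion like this (sound travels roughly 1 inch per 74 microseconds, and the pulse covers the distance twice):

```cpp
// Convert a PING))) echo time (round trip, in microseconds) to inches.
// Sound travels about 1 inch per 74 microseconds; divide by 2 because
// the pulse goes out to the object and back again.
long microsecondsToInches(long microseconds)
{
    return microseconds / 74 / 2;
}
```

So an echo of 1776 microseconds corresponds to an object 12 inches away, and the 72-inch protection radius in the sketch corresponds to echoes shorter than about 10.7 milliseconds.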
https://www.instructables.com/Deer-Chaser/