Update scripts (from donut onwards) are written in a new little scripting language ("edify") that is superficially somewhat similar to the old one ("amend"). This is a brief overview of the new language.

- The entire script is a single expression.
- All expressions are string-valued.
- String literals appear in double quotes. \n, \t, \", and \\ are understood, as are hexadecimal escapes like \x4a. String literals consisting of only letters, numbers, colons, underscores, slashes, and periods don't need to be in double quotes.
- The following words are reserved: if, then, else, endif. They have special meaning when unquoted. (In quotes, they are just string literals.)
- When used as a boolean, the empty string is "false" and all other strings are "true".
- All functions are actually macros (in the Lisp sense); the body of the function can control which (if any) of the arguments are evaluated. This means that functions can act as control structures.
- Operators (like "&&" and "||") are just syntactic sugar for builtin functions, so they can act as control structures as well.
- ";" is a binary operator; evaluating it just means to first evaluate the left side, then the right. It can also appear after any expression.
- Comments start with "#" and run to the end of the line.

Some examples:

There's no distinction between quoted and unquoted strings; the quotes are only needed if you want characters like whitespace to appear in the string. The following expressions all evaluate to the same string:

  "a b"
  a + " " + b
  "a" + " " + "b"
  "a\x20b"
  a + "\x20b"
  concat(a, " ", "b")
  "concat"(a, " ", "b")

As shown in the last example, function names are just strings, too. They must be string literals, however. This is not legal:

  ("con" + "cat")(a, " ", b)   # syntax error!

The ifelse() builtin takes three arguments: it evaluates exactly one of the second and third, depending on whether the first one is true.
There is also some syntactic sugar to make expressions that look like if/else statements. The following are equivalent:

  ifelse(something(), "yes", "no")
  if something() then yes else no endif
  if something() then "yes" else "no" endif

The else part is optional:

  if something() then "yes" endif     # if something() is false,
                                      # evaluates to false

  ifelse(condition(), "", abort())    # abort() only called if
                                      # condition() is false

The last example is equivalent to:

  assert(condition())

The && and || operators can be used similarly; they evaluate their second argument only if it's needed to determine the truth of the expression. Their value is the value of the last-evaluated argument:

  file_exists("/data/system/bad") && delete("/data/system/bad")

  file_exists("/data/system/missing") || create("/data/system/missing")

  get_it() || "xxx"    # returns value of get_it() if that value is
                       # true, otherwise returns "xxx"

The purpose of ";" is to simulate imperative statements, of course, but the operator can be used anywhere. Its value is the value of its right side:

  concat(a;b;c, d, e;f)    # evaluates to "cdf"

A more useful example might be something like:

  ifelse(condition(),
         (first_step(); second_step();),   # second ; is optional
         alternative_procedure())
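Putting several of these pieces together, a small fragment in the style above might read as follows. This is an illustrative sketch, not taken from any real update script; it uses only builtins named in this overview:

```edify
# Make sure the parent directory exists, then tidy up.
assert(file_exists("/data/system"));
file_exists("/data/system/bad") && delete("/data/system/bad");
ifelse(file_exists("/data/system/missing"),
       "already-present",
       (create("/data/system/missing"); "created"))
```

Since ";" returns its right side and the whole script is one expression, the script's value is the value of its final expression.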
https://android.googlesource.com/platform/bootable/recovery/+/refs/tags/android-cts-8.0_r15/edify/
XMonad.Layout.MultiColumns

Description

This layout tiles windows in a growing number of columns. The number of windows in each column can be controlled by messages.

Usage

You can use this module with the following in your ~/.xmonad/xmonad.hs:

import XMonad.Layout.MultiColumns

Then edit your layoutHook by adding the multiCol layout:

myLayouts = multiCol [1] 4 0.01 0.5 ||| etc..
main = xmonad defaultConfig { layoutHook = myLayouts }

Or alternatively:

myLayouts = Mirror (multiCol [1] 2 0.01 (-0.25)) ||| etc..
main = xmonad defaultConfig { layoutHook = myLayouts }

The maximum number of windows in a column can be controlled using the IncMasterN messages; the column containing the focused window will be modified. If the value is 0, all remaining windows will be placed in that column once all columns before it have been filled.

The size can be set to between 1 and -0.5. If the value is positive, the master column will be of that size and the rest of the screen is split among the other columns. If the size is negative, it instead indicates the size of all non-master columns, and the master column covers the rest of the screen. If the master column would become smaller than the other columns, the screen is instead split equally among all columns. Therefore, if equal size among all columns is desired, set the size to -0.5.

For more detailed instructions on editing the layoutHook see:
http://hackage.haskell.org/package/xmonad-contrib-0.10/docs/XMonad-Layout-MultiColumns.html
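For reference, a complete minimal configuration built around the first snippet might look like this. This is a sketch (assuming xmonad and xmonad-contrib are installed); the per-argument comments paraphrase the library's documentation and are not stated in the text above:

```haskell
-- ~/.xmonad/xmonad.hs (minimal sketch)
import XMonad
import XMonad.Layout.MultiColumns

-- multiCol [1] 4 0.01 0.5:
--   [1]  : window capacity of each listed column, starting with the master
--   4    : default capacity for any columns after the listed ones
--   0.01 : how much a resize message changes the size
--   0.5  : the master column takes half the screen (a negative value
--          would size the non-master columns instead)
myLayouts = multiCol [1] 4 0.01 0.5 ||| Full

main :: IO ()
main = xmonad defaultConfig { layoutHook = myLayouts }
```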
09 February 2010 00:12 [Source: ICIS news] By: Hellen Berger SAO PAULO (ICIS news)--Brazil’s petrochemical market could grow by over 12% “We project According to analysts, the petrochemical market historically grows by twice the GDP rate. Brendler said in Portuguese that In emerging markets, the petrochemicals sector can even reach growth levels of up to three times the GDP and be consistent, according to the analyst. “ Domestic markets such as Brendler said imports to Also, the down cycle of the petrochemical industry is expected to continue until around 2011-2012, Brendler said. For producers, that is the best time for investments and production capacity expansions. “Plants need to be prepared for when demand and price levels go back to normal,” he added. According to Brendler and other petrochemicals analysts, that is why we are seeing a strong consolidation process in The Brazilian petrochemicals sector is undergoing a period of mergers and acquisitions (M&A) in order to compete more effectively in the global markets. Major producer Braskem recently acquired rival Quattor and announced the purchase of Sunoco assets. “The idea is to strongly increase capacity and lower costs to compete globally,” said Brendler. However, the price factor could definitely be a challenge, according to analysts. “We believe that when the sustainable growth and volumes are resumed, following the down cycle phase, prices will tend to normalise due
http://www.icis.com/Articles/2010/02/09/9332771/brazils-petchem-market-sees-consistent-growth.html
On Wed, Dec 30, 2009 at 02:00:25PM +0100, Diego Biurrun wrote: > On Sat, Dec 26, 2009 at 06:22:42PM +0100, Diego Biurrun wrote: > > $subject, as attached > > > > --- libavcodec/h263.c (revision 20925) > > +++ libavcodec/h263.c (working copy) > > @@ -3019,13 +3008,6 @@ > > > > -static inline void memsetw(short *tab, int val, int n) > > -{ > > - int i; > > - for(i=0;i<n;i++) > > - tab[i] = val; > > -} > > - > > > > @@ -3248,13 +3228,6 @@ > > > > -#if 0 > > - /* clean DC */ > > - memsetw(s->dc_val[0] + l_xy, 1024, l_wrap*2+1); > > - memsetw(s->dc_val[1] + c_xy, 1024, c_wrap+1); > > - memsetw(s->dc_val[2] + c_xy, 1024, c_wrap+1); > > -#endif > > I just noticed that this is an exact duplicate of msmpeg4_memsetw. > This should be a good reason to remove it. the memsetw() yes but not the other hunk. Also please dont remove the memsetw() and then argue that the other should be removed because it doesnt compile: <>
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-December/063194.html
08 February 2007 16:23 [Source: ICIS news] By John Richardson SINGAPORE (ICIS news)--The debate about whether or not There are 88 coal-to-liquids (CTL) projects that have gained approval and a further 20-30 where approvals are pending. The projects involve either indirect or direct liquefaction of coal (indirect involves a gasification step first), followed in some cases by integration through to methanol to olefins and olefins to polymers production. This is reminiscent of the investment splurge in Investors, many of whom had no experience in the polyester sector, didn’t bother to properly assess the risks because of cheap or even free capital. The end result was a long period of low operating rates, shuttered plants and abysmal profitability. But one could argue that there are major differences between the polyester and CTL sectors. Firstly, Shell, Dow Chemical and Sasol are involved in this investment wave. Proper due diligence should therefore have been conducted on at least their projects. But secondly, and more importantly, the processes could enjoy a huge feedstock advantage because of Nevertheless, the three overseas majors are only involved in a small number of the projects. This means that most of the cash pouring into the sector is local, low cost and as a result highly speculative. And what kind of advantage will cheap coal deliver to final margins when logistics have been taken into account? Most of the projects are located in western Another key component of the sector’s competitiveness will be the oil price. CTL production costs of plants close to coal mines are estimated at equivalent to $27-35/bbl, rising to $45-50/bbl when distribution costs and taxes are included. Some consultants believe that freight costs will prohibit commodity grade production, forcing a focus on speciality grades and lower-volume plants. 
Other consultants, however, point to the successful economics of moving coal-to-acetylene based vinyl chloride monomer (VCM) to make polyvinyl chloride (PVC) on the eastern coast. Much of this PVC is then being exported at highly competitive prices. Acetylene-based capacity increases are so big that China is expected to become a net PVC exporter by 2011-12. But the This is certainly not the case with linear-low density polyethylene (PE) and high-density PE. Huge quantities of both the polymers are already produced in the The This leaves low-density PE (LDPE) as an opportunity because the In addition, and perhaps most fundamentally, doubts are still being expressed over the commercial viability of methanol to olefins technologies. And you cannot assume that the crackdown on investments in this emerging sector will prove effective. Implementing central government legislation is always a challenge in China. In the end, though, whoever is right, fuel markets may consume most of the methanol – the intermediate product in these processes. The main reason for the investments in coal technology is greater energy security through substituting oil-based fuels. But what
http://www.icis.com/Articles/2007/02/08/9005382/insight-china-coal-chems-challenge-mid-east.html
How do I map an enum with a field in it?

public enum MyEnum {
    HIGHSCHOOL("H"),
    COLLEGE("C");

    private String value;

    public static MyEnum getInstance(String value) { ... }
}

@Entity
public class MyEntity {
    @Enumerated(...)
    private MyEnum eduType;
}

Hello! I would like to use an enum as datatype for a field and I already found some basic instructions on how to do this on Google. I don't want to use annotations, so the XML way is the only one I can go. I have found a class GenericEnumUserType which probably works as I need it. My xml ...

Hi All, I've been looking around the net to no avail. I'm pretty sure I'm missing something simple here. I have a field in the db that is of type int. I'm trying to properly map, using annotations, the enum property. Code:

@Entity
@Table(name = "MyTable")
public class myClass implements Serializable {
    @Id
    ...
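All three questions revolve around the same pattern: an enum that carries the code stored in the database column, plus a lookup in the other direction. A self-contained, JPA-free sketch of that pattern (class and method names like `Education` and `fromCode` are ours, for illustration):

```java
public class EnumCodeDemo {
    // Enum carrying the single-character code stored in the database.
    enum Education {
        HIGHSCHOOL("H"),
        COLLEGE("C");

        private final String code;

        Education(String code) { this.code = code; }

        public String getCode() { return code; }

        // Reverse lookup used when reading the column back from the DB.
        public static Education fromCode(String code) {
            for (Education e : values()) {
                if (e.code.equals(code)) return e;
            }
            throw new IllegalArgumentException("Unknown code: " + code);
        }
    }

    public static void main(String[] args) {
        System.out.println(Education.fromCode("H"));      // HIGHSCHOOL
        System.out.println(Education.COLLEGE.getCode());  // C
    }
}
```

With plain @Enumerated you can only persist the enum's name (EnumType.STRING) or ordinal; to persist the custom code itself you would pair this pattern with a JPA 2.1 AttributeConverter, or, in classic Hibernate XML mappings, a custom UserType such as the GenericEnumUserType mentioned above.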
http://www.java2s.com/Questions_And_Answers/JPA/Field/enum.htm
[Solved] Item::mapToItem(): wrong returned values

UPDATE 2: Solved. I was using the function in a wrong way! Solution: UPDATE: Read next post for a minimal example code.

Hi all! I'd like to write my very first message in this forum since the acquisition by Digia and creation of the Qt Project. My problem is that Item::mapToItem(null, x, y) is giving wrong values. As I'm a beginner about the new Qt Quick 2, and QML in general, it is most probably due to a misuse but I cannot find it. A maybe-not-so-minimal working example to show my case follows. It creates a couple of rectangles with a Repeater, and fails at finding the central point for the second one (stored in posX, posY properties). Being squares of size 25, I'd expect the second one to stay at (0, 25), with its center at (0, 37.5), but mapToItem() says that it is placed at (0, 50), so the center calculations go for (0, 67.5) instead. I would bet the culprit is the reparenting done to the items, but that's the only way I found to have a clean list of Rectangles: if the Repeater was placed inside the Column, as most Qt examples show, then the Column's children would include that Repeater, which is not nice for code abstraction (other parts of the code shouldn't have to deal with that detail).

@
import QtQuick 2.0

Item {
    id: root
    width: buttonSize
    height: (2 * buttonCount + 1) * buttonSize

    property int buttonCount: 2
    property real buttonSize: 25
    property alias buttons: container.children

    Component.onCompleted: {
        console.log("root: onCompleted")
        for (var i = 0; i < buttons.length; ++i) {
            console.log("root:", buttons[i].buttonId, "x", buttons[i].x, "y", buttons[i].y)
            buttons[i].posUpdate()
        }
    }

    Column {
        id: container
        anchors {
            fill: parent
            // topMargin: buttonSize
        }
        // spacing: buttonSize
    }

    Repeater {
        model: buttonCount

        Item {
            // Use a dummy Item to wrap our Socket element.
            // The wrapper will become child of the Repeater's parent,
            // while the Socket will be reparented to our container.
            id: wrapper

            Rectangle {
                id: button
                width: buttonSize; height: buttonSize; radius: buttonSize
                color: "red"

                property string buttonId: "button_" + index
                property real posX
                property real posY

                function posUpdate() {
                    console.log("button: posUpdate", buttonId,
                                "button.x", x, "button.y", y,
                                "mapToItem().x", mapToItem(null, x, y).x,
                                "mapToItem().y", mapToItem(null, x, y).y)
                    posX = button.mapToItem(null, x, y).x + (buttonSize / 2.0)
                    posY = button.mapToItem(null, x, y).y + (buttonSize / 2.0)
                    console.log("button: posUpdate", buttonId, "posX", posX, "posY", posY)
                }

                Component.onCompleted: {
                    parent = container
                }
            }
        }
    }
}
@

Any help would be appreciated :-)

OK I have a new, minimal example to show this problem:

@
import QtQuick 2.0

Item {
    id: window
    width: 25; height: 50

    Rectangle {
        id: button_1
        x: 0; y: 0
        width: 25; height: 25
        color: "red"

        Component.onCompleted: {
            var obj = mapToItem(null, x, y)
            console.log("[button_1]", "x:", x, "y:", y, "mapToItem().x:", obj.x, "mapToItem().y", obj.y)
        }
    }

    Rectangle {
        id: button_2
        x: 0; y: 25
        width: 25; height: 25
        color: "pink"

        Component.onCompleted: {
            var obj = mapToItem(null, x, y)
            console.log("[button_2]", "x:", x, "y:", y, "mapToItem().x:", obj.x, "mapToItem().y", obj.y)
        }
    }
}
@

Expected output: @[button_2] x: 0 y: 25 mapToItem().x: 0 mapToItem().y 25@
Obtained output: @[button_2] x: 0 y: 25 mapToItem().x: 0 mapToItem().y 50@

Responding to a weird issue, that may even be the core problem;

@"if the Repeater was placed inside the Column [] then the Column's children would include that Repeater which is not nice for code abstraction (other parts of the code shouldn't have to deal with that detail)."@

This looks weird; the idea behind QML and scene graphs in general is that this kind of detail is irrelevant to the rest of the codebase. Just because there is a Repeater in between should have zero effect on other code. You use the name (id: foo) of an element. 
Maybe you are trying to fetch an item from C++ code using the QObject::child() method; I would strongly suggest you do not do that ;) Maybe you can explain exactly what the problem is that you ran into that made you reparent? It's likely that it's the one that should be solved first.

Note that in my second post I included a minimal example which shows the same wrong results on Item::mapToItem(), without any reparenting. Replying to your question: My project is pure QML + JavaScript so far; I just want to process a list of items all in the same way, which means all must be of the same type. I'm using a Canvas item which paints lines; these lines are parametrized by properties of some items, previously instantiated inside a Column. Ideally, the JS code in the Canvas::onPaint() method would use a "for" loop over all of the Column's children, but that approach doesn't work because one of the children is a Repeater, thus not being of the expected type and not having the expected properties. Maybe I'm not doing this the "QML way"? But I didn't find a better way...

- chrisadams

Passing in null as the first parameter of mapToItem is currently broken. There are some changes in codereview to address this issue, but currently the script engine returned is null. In the future, this function should map it from the top left of the top-level application window; currently the behaviour is undefined. Cheers, Chris.

I assumed that there was no need for a bug report because this issue is already a work in progress, but after searching the Qt bug reports site for "mapToItem" and "mapFromItem", I cannot find any proper report about this issue. So I'm adding one to be able to track this bug. Thank you all for the ideas, and chrisadams for the explanation.

Hi, parameters x,y are not required in maptoItem. 
Just put it as 0 for both x and y and then try. Regards ansif [/quote] Thank you very much!! Thanks to this comment I realized that the mapToItem() function is working properly, it's just that I was misusing it in my examples!! The cause of the problem is a conceptual error. Note that in both examples I was using @mapToItem(null, x, y)@ which would map to the Window the point (x, y) inside the rectangle. In my second example, the point (0, 25) inside of the second rectangle would effectively correspond to the point (0, 50) in the Window. Correct way of using the function in my sample would be: @mapToItem(null, 0, 0)@ And for my specific need of mapping the center point: @mapToItem(null, width / 2.0, height / 2.0)@ Thanks everybody for the help. Good luck!!!!!!!!!
https://forum.qt.io/topic/24097/solved-item-maptoitem-wrong-returned-values
Source JythonBook / ModulesPackages.rst

Chapter 8: Modules and Packages for Code Reuse

Imports for Reuse

breakfast.py is a very simple program that we can use to discuss imports:

import search.scanner as scanner
import sys

Importing it results in the breakfast module containing a __name__ of 'breakfast'. Importing breakfast again results in no output. Most of the time, we wouldn't want a module to execute print statements when imported. To avoid this, but allow the code to execute when it is called directly, we typically check the __name__ property. If the __name__ property is '__main__', the module is being run directly:

$ jython breakfast.py
spam spam spam and eggs!

The Import Statement

If 'blah' had existed, the definition of foo would have been taken from there and we would have seen 'imported normally' printed out. Because no such module existed, foo was defined in the except block, 'defining foo in except block' was printed, and when we called foo, the 'hello from backup foo' string was returned.

An Example Program

greetings.py:

print "in greetings.py"
import greet.hello
g = greet.hello.Greeter()
g.hello_all()

greet/__init__.py:

print "in greet/__init__.py"

greet/hello.py:

print "in greet/hello.py"
import greet.people as people

class Greeter(object):
    def hello_all(self):
        for name in people.names:
            print "hello %s" % name

greet/people.py:

print "in greet/people.py"
names = ["Josh", "Jim", "Victor", "Leo", "Frank"]

Trying Out the Example Code

Running greetings.py first prints 'in greetings.py.' Next it imports greet.hello:

import greet.hello

Because this is the first time that the greet package has been imported, the code in __init__.py is executed, printing 'in greet/__init__.py'. Then the greet.hello module is executed, printing out 'in greet/hello.py.' The greet.hello module then imports the greet.people module, printing out 'in greet/people.py.' Now all of the imports are done, and greetings.py can create a greet.hello.Greeter class and call its hello_all method. 
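The __name__ idiom discussed above fits in a single self-contained file; the function name `order` is ours, and the string matches the chapter's sample output:

```python
# breakfast.py -- illustrative module following the chapter's pattern.

def order():
    """Return the breakfast order as a string."""
    return "spam spam spam and eggs!"

# When run directly (e.g. `jython breakfast.py`), __name__ is '__main__',
# so the print below executes.  When this file is imported instead,
# __name__ is 'breakfast' and nothing is printed.
if __name__ == '__main__':
    print(order())
```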
Types of Import Statements

import greet.hello

With this basic form you must use the full name 'greet.hello' and not just 'hello' in your code.

import greet.hello as foo

The 'as foo' part of the import allows you to relabel the 'greet.hello' module as 'foo' to make it more convenient to call. The example program uses this method to relabel 'greet.hello' as 'hello.' Note that it is not important that 'hello'

from module import name

This form of import allows you to import modules, classes or functions nested in other modules. This allows you to import code like this:

from greet import hello

In this case, it is important that 'hello' is actually a submodule of greet. This is not a relabeling but actually gets the submodule named 'hello' from the package 'greet'.

Aliasing Import Statements

Any of the above imports can add an 'as' clause to relabel what is imported.

Hiding Module Names

'from module import *'.

Understanding Jython's process of locating, compiling, and loading packages and modules is very helpful in getting a deeper understanding of how things really work in Jython. execution section of the Java language specification: 'com.mysql' is a Java package that is found in mysql-connector-java-5.1.6.jar. the code they use in powerful ways. For example, users expect to be able to call dir() on Java packages to see what they contain. And the same can be done on Java classes:

>>> import java.util.zip

(see Appendix A). The two properties are:

python.packages.paths
python.packages

See Appendix A for more. If you only use full class imports, you can skip the package scanning altogether. Set the system property python.cachedir.skip to true or (again) pass in your own postProperties to turn it off.

Compilation

Despite the popular belief that Jython is 'interpreted', Jython code is in fact compiled to Java bytecode.

Python Modules and Packages versus Java Packages

In the case of acme and company xyz, you might start your

Advanced Import Manipulation

This section describes some advanced tools for dealing with the internal machinery of imports. It is pretty advanced stuff that is rarely needed, but when you need it, you really need it.

Import Hooks

To understand the way that Jython imports Java classes you have to understand a bit about the Python import protocol. When 'com.mysql' is imported, as evidenced by the line starting with sys-package-mgr, the JavaImporter scans the new jar and allows the import to succeed.

Summary
https://bitbucket.org/idalton/jythonbook/src/c559df498a7e/ModulesPackages.rst?at=tip
Only a few hours until Festivus. Oh, how I enjoy the airing of grievances. I hope that your Festivus pole is up already. Oh, the excitement is building.................. PS. Uncle Kirk called me last week and wished me a Merry Festivus. Looks like Intel is going to be shipping out new processors in January The Quad CPU, which is really two Core 2 Duos glued together: (January 7) New Core 2 Duos: (January 21) Cross posted some: Just wanted to throw out the one major change that I have seen in the ASP.NET 2.0 AJAX Release Candidate so far. The namespaces have changed. What was previously Microsoft.Web has become System.Web. I think that this was a good change, to go ahead and move the code into what will most likely be the final namespace in the Orcas timeframe. You probably won't even notice this change unless you are programming against the actual classes themselves instead of the controls. Note that you will also need to change the web.config of your app to match the new namespaces. Or you can just dump the new web.config in your app. If you haven't heard of it, CodeMash is coming up. It is a two day event in January in Sandusky, OH (right outside of Cleveland). There will be a number of big name folks there, including Scott Guthrie. Here is a small blurb from their. Who would have thought that you could be right off of Lake Erie in January and go to a waterpark. Very cool indeed. According to this story on eWeek, SP1 has been released to manufacturing and will be available sometime next week, maybe Monday. Its been an interesting last month or so. I've been watching traffic to the ASP.NET Podcast XML feed go way up. I've been trying to figure out why? Is it my sparkling good looks, cute smile, sterling voice, or something else? Well, I think I have figured out why. Here is what I have found: Wow, I did not even know that this existed. Microsoft has released a 1.0 of the Microsoft Robotics Studio. 
It looks like they can connect up and program against a variety of these small robotic toys. Hmm, wonder what I will be getting my kids for Christmas(or is that me).................. Robotics Studio Download: General Info: I noticed that Lego is one of their partners. This is a link to some Lego products on Amazon:
http://weblogs.asp.net/wallym/archive/2006/12.aspx?PageIndex=2
How to use "Qt Desktop Components" on Qt5

I am not sure how to build the Qt Desktop Components. It says on the "wiki": that you should "qmake" and then "make install". I opened the Qt5 command line and ran qmake; all fine. When I run "make install" it says it does not recognize the command. What do I need to do/set/install to make the desktop components?

You are using Windows, right? There is no "make" on Windows; you need to use nmake/jom (if your compiler is MSVC) or mingw32-make if you are using MinGW. I'm not sure the Desktop Components are ready for Qt5 and Windows anyway. Most Qt developers work on Linux.

Hello, has someone tried to use QtDesktopComponents on Windows? Thanks!

This project is now integrated into mainline Qt. You can get it by simply downloading Qt 5.1. Search the documentation for the QtQuick.Controls module to learn more.

Hi, I am unsure how to create a hybrid C++ + QtQuick 2 + QtQuick Controls project. In QtCreator 2.8 there are two project templates of interest: A) "QtQuick 2 UI with Controls" and B) "QtQuick 2 Application - (Built-in types)". A) This type of project can consist only of QML files, so no C++ here AFAIK. B) consists of a main.cpp, main.qml as well as a qtquick2applicationviewer[.h/.cpp] file. In B) doing an 'import QtQuick.Controls 1.0' inside main.qml is not enough, since according to the documentation found here: we ought to change the 'main.cpp' file to support them. 
Changing my main.cpp file, from:

@
#include <QApplication>
#include <QQmlApplicationEngine>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QtQuick2ApplicationViewer viewer;
    viewer.setMainQmlFile(QStringLiteral("qml/untitled/main.qml"));
    viewer.showExpanded();
    return app.exec();
}
@

to:

@
#include <QApplication>
#include <QQmlApplicationEngine>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QQmlApplicationEngine engine("qml/untitled/main.qml");
    return app.exec();
}
@

fails to compile with: 'QApplication: No such file or directory'. I also don't know what is supposed to happen with QtQuick2ApplicationViewer, which I have just removed from main.cpp and its corresponding files (header, source). It seems to contain a lot of bootstrap logic. Not to mention there's some specific configuration added to the end of the .pro file that is related to it. I'd be very glad if anyone could outline how to start a hybrid C++/QtQuick2 with Controls project.

You are wrong in your assumption that you need to change anything in main.cpp. Controls are just another QtQuick 2 module; they work out of the box, at least for me. Remember to bump import statements to version 2.1:

@
import QtQuick 2.1
@

Controls can be easily mixed with standard Quick components. See "this": if you are in doubt :)

As for QtQuick2ApplicationViewer: it's just a thin wrapper around QQuickView. You can safely use the base class, or the new appEngine if you prefer.

At least it's reassuring that not much needs to be changed :). Ok, so I create a project via the 'QtQuick 2 Application with Built-in types' button. 
I change main.qml content from:

@
Rectangle {
    width: 360
    height: 360
    Text {
        text: qsTr("Hello World")
        anchors.centerIn: parent
    }
    MouseArea {
        anchors.fill: parent
        onClicked: {
            Qt.quit();
        }
    }
}
@

to:

@
import QtQuick 2.1
import QtQuick.Controls 1.0

ApplicationWindow {
    id: window
    height: 500
    width: 500
    Button {
        text: "hello"
        anchors.centerIn: parent
        onClicked: {
            Qt.quit();
        }
    }
}
@

Unfortunately when I start the application I get an empty white window without any child controls. I am on Windows, using this Qt 5.1 SDK build (Qt 5.1.0 for Windows 32-bit (MinGW 4.8, OpenGL, 666 MB)). The window is empty on QtCreator 2.7.2 that comes with Qt 5.1 as well as the new QtCreator 2.8. A project with the exact same main.qml file created via QtQuick 2 UI with Controls displays everything fine :(

Change ApplicationWindow into a Rectangle.

Should not the root object be an ApplicationWindow? It's the whole point of having QtQuick Controls. I can't even for some reason run which is an example of C++/QML integration with QtQuick.Controls since it can't find the main.qml file. I am going to experiment with installing different Qt 5.1 SDKs as something is definitely wrong with the MinGW build I have.

QQuickView starts the window for you, so there is no need for it.

So I finally figured it out. There's no QtQuick 2 template with C++ integration that supports QtQuick.Controls in QtCreator out of the box as of now (version 2.8). 
We will have to create an empty Qt project from scratch:

- File -> New File or Project -> Other Project -> Empty Qt Project; let's name it 'mytestproject'

Modify mytestproject.pro by adding this:

@
TEMPLATE = app
TARGET = mytestproject
QT += qml quick widgets
@

- Add a main.qml file to the project

@
import QtQuick 2.1
import QtQuick.Controls 1.0

ApplicationWindow {
    id: rootWindow
    height: 500
    width: 500
    visible: true // <-- this is important because the window is hidden by default
    Button {
        text: "Quit app"
        anchors.centerIn: parent
        onClicked: {
            Qt.quit();
        }
    }
}
@

- Add a main.cpp file to the project

@
#include <QApplication>
#include <QQmlApplicationEngine>

int main(int argc, char** argv)
{
    QApplication app(argc, argv);
    QQmlApplicationEngine engine("main.qml");
    return app.exec();
}
@

- Disable Shadow build

QtCreator creates a shadow directory by default where it builds our application, i.e. if our app is located at /home/foo/mytestproject then it will create a debug and a release directory at /home/foo/mytestproject-Debug-Desktop and /home/foo/mytestproject-Release-Desktop. This means that inside 'main.cpp' the QQmlApplicationEngine won't be able to find the 'main.qml' file that we want to load. I don't know any other workaround, so let's just disable the shadow directory feature altogether and build the application inside its own folder. To do this click on the large Projects icon on the left-hand side of QtCreator and untick 'Shadow build'.

- Build, Run and Enjoy

Now the project should build and we should see a window with a button.
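As a side note on the shadow-build workaround in the final post: a common alternative (our suggestion, not mentioned in the thread) is to compile main.qml into the binary with Qt's resource system, so the build directory no longer matters:

```xml
<!-- qml.qrc: list main.qml so it is compiled into the executable -->
<RCC>
    <qresource prefix="/">
        <file>main.qml</file>
    </qresource>
</RCC>
```

Then add `RESOURCES += qml.qrc` to the .pro file and construct the engine with `QQmlApplicationEngine engine(QUrl("qrc:/main.qml"));` — shadow builds then work unchanged.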
https://forum.qt.io/topic/23448/how-to-use-qt-desktop-components-on-qt5
Type Inference from Assignment Context

Generally, a generic method infers its type from its arguments. For example, in the following generic method, showAndGetV( ) calls look like normal method calls. But what if the type variable isn't used in any of the arguments, or the method has no arguments? Suppose the method only has a parametric return type. The Java compiler is smart enough to look at the context in which the method is called. Specifically, if the result of the method is assigned to a variable, the compiler tries to make the type of that variable the parameter type. For example, in the following example, the generic method makeGen( ) has no arguments and acts like a factory for our Gen objects:

The compiler has, as if by magic, determined what kind of instantiation of Gen we want based on the assignment context. Before you get too excited about the possibilities, there's not much you can do with a plain type parameter in the body of that method. For example, we can't create instances of any particular concrete type T, so this limits the usefulness of factories. Furthermore, the inference only works on assignment to a variable. Java does not try to guess the parameter type based on the context if the method call is used in other ways, such as to produce an argument to a method or as the value of a return statement from a method. In those cases, the inferred type defaults to type Object. (Note that this describes inference before Java 8; since Java 8, target typing extends to method arguments and return statements as well.)

Program Source

class Gen<T> {
    private T ty;

    void setT(T t) {
        ty = t;
    }

    T GetT() {
        return ty;
    }
}

public class Javaapp {

    static <T> Gen<T> makeGen() {
        return new Gen<T>();
    }

    public static void main(String[] args) {
        Gen<Integer> gobj1 = makeGen();
        Gen<String> gobj2 = makeGen();
        gobj1.setT(50);
        gobj2.setT("Fifty");
        System.out.println("gobj1 T : " + gobj1.GetT());
        System.out.println("gobj2 T : " + gobj2.GetT());
    }
}
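The assignment-context behaviour above can also be seen with a factory for standard collections. A small self-contained sketch (the class and method names are ours, not from the original program); it also shows the explicit "type witness" syntax for when the context alone is not enough:

```java
import java.util.ArrayList;
import java.util.List;

public class InferenceDemo {
    // Parametric return type, no arguments -- just like makeGen().
    static <T> List<T> makeList() {
        return new ArrayList<T>();
    }

    public static void main(String[] args) {
        // Inference from the assignment context: T becomes String.
        List<String> names = makeList();
        names.add("fifty");

        // When the context is not enough, an explicit type witness
        // pins T down at the call site:
        List<Integer> nums = InferenceDemo.<Integer>makeList();
        nums.add(50);

        System.out.println(names.get(0)); // fifty
        System.out.println(nums.get(0));  // 50
    }
}
```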
https://hajsoftutorial.com/java-type-inference-assignment-context/
We have to include this directory into the build, so we edit the CMakeLists.txt in the effects directory. We just add the following line to the section marked as "Common effects":

include( resize/CMakeLists.txt )

If it were an OpenGL-only effect we would place this line in the section marked as "OpenGL-specific effects". So at this point we are finished with the preparation, and we can start looking at the files. First the desktop file:

[Desktop Entry]
Name=Resize Window
Icon=preferences-system-windows-effect-resize
Comment=Effect to outline geometry while resizing a window
Type=Service
X-KDE-ServiceTypes=KWin/Effect
X-KDE-PluginInfo-Author=Martin Gräßlin
X-KDE-PluginInfo-Email=kde@martin-graesslin.com
X-KDE-PluginInfo-Name=kwin4_effect_resize
X-KDE-PluginInfo-Version=0.1.0
X-KDE-PluginInfo-Category=Window Management
X-KDE-PluginInfo-Depends=
X-KDE-PluginInfo-License=GPL
X-KDE-PluginInfo-EnabledByDefault=false
X-KDE-Library=kwin4_effect_builtins
X-KDE-Ordering=60

Most of it is self-explanatory and just needed for the "All effects" tab in the compositing kcm. The most important value is "X-KDE-PluginInfo-Name". This is the name used to load the effect and has to start with "kwin4_effect_" followed by your custom effect name. This last part will be needed in the source code.

Each effect is a subclass of the class "Effect" defined in kwineffects.h and implements some of the virtual methods provided by Effect. There are methods for almost everything the window manager does, so by implementing those methods you can react to a change of desktop or to opened/closed windows. In this effect we are interested in resize events, so we have to implement the method "windowUserMovedResized( EffectWindow *w, bool first, bool last )". This method is called whenever a user moves or resizes the given window. The two boolean values indicate whether it is the first, the last or an intermediate resize event. But there are more methods we have to implement.
The effect should paint the changed geometry while resizing, so we have to implement the methods required for custom painting. KWin's painting pass consists of three stages. These stages are executed once for the complete screen and once for every window. All effects are chained, and each effect calls the stage for the next effect. How this works we will see when looking at the implementation. You can find good documentation in the comments of scene.cpp.

Now it's time to have a look at the header file:

namespace KWin
{

class ResizeEffect : public Effect
{
public:
    ResizeEffect();
    ~ResizeEffect();
    virtual void prePaintScreen( ScreenPrePaintData& data, int time );
    virtual void paintWindow( EffectWindow* w, int mask, QRegion region, WindowPaintData& data );
    virtual void windowUserMovedResized( EffectWindow *w, bool first, bool last );

private:
    bool m_active;
    EffectWindow* m_resizeWindow;
    QRegion m_originalWindowRect;
};

}

We see that there are three member variables. The boolean indicates whether a window is currently being resized, that is, whether we have to do some painting. The EffectWindow is a pointer to the window being resized, and the QRegion stores the window's geometry before the start of resizing.

So now we can have a look at the implementation. I will split the code into small parts and explain each of them. First let's look at the includes: as our effect should support both XRender and OpenGL, we have to include the headers for both. As it is possible that the effect is compiled on a system which does not support one of the two, we use #ifdef. We can be sure that at least one of the two is available, or the effects wouldn't be compiled at all. If you write an OpenGL-only effect you do not have to bother about such things. Also, if you only use KWin's high-level API you don't need to include those headers. But we want to paint on the screen using OpenGL or XRender directly.
So let's have a look at the next part:

namespace KWin
{

KWIN_EFFECT( resize, ResizeEffect )

ResizeEffect::ResizeEffect()
    : m_active( false )
    , m_resizeWindow( 0 )
{
    reconfigure( ReconfigureAll );
}

ResizeEffect::~ResizeEffect()
{
}

Here we see the use of a macro. This has to be included or your effect will not load (it took me ten minutes to notice I had forgotten to add this line). The first value is the second part of X-KDE-PluginInfo-Name - I told you we would need it again. The second value is the class name. Following are the constructor and destructor.

So let's look at the pre-paint screen stage:

void ResizeEffect::prePaintScreen( ScreenPrePaintData& data, int time )
{
    if( m_active )
    {
        data.mask |= PAINT_SCREEN_WITH_TRANSFORMED_WINDOWS;
    }
    effects->prePaintScreen( data, time );
}

Here we extend the mask to say that we paint the screen with transformed windows when the effect is active. That's not completely true - we don't transform a window. But this flag indicates that the complete screen will be repainted, so we eliminate the risk of artefacts. We could also track the parts which have to be repainted manually, but this would probably be more work for the CPU than the complete repaint is for the GPU. At this point we see the chaining for the first time: effects->prePaintScreen( data, time ); will call the next effect in the chain. effects is a pointer to the EffectsHandler and a very useful helper.
So now we start looking at the heart of the effect:

void ResizeEffect::paintWindow( EffectWindow* w, int mask, QRegion region, WindowPaintData& data )
{
    effects->paintWindow( w, mask, region, data );
    if( m_active && w == m_resizeWindow )
    {
        QRegion intersection = m_originalWindowRect.intersected( w->geometry() );
        QRegion paintRegion = m_originalWindowRect.united( w->geometry() ).subtracted( intersection );
        float alpha = 0.8f;
        QColor color = KColorScheme( QPalette::Normal, KColorScheme::Selection ).background().color();

We first continue the paint window effect chain - this will paint the window on the screen. Then we check whether we are in resizing mode (m_active) and whether the currently painted window is the window being resized. In that case we calculate the region which has to be painted: we just subtract the intersection of the current geometry with the saved geometry from the union of the two. The next two lines are for the color definition. We use the background color of a selection with 80 % opacity.

Now we have to do a little bit of OpenGL. In most effects, where you just transform windows, you don't have to write any OpenGL at all. There is a nice high-level API which allows you to translate, scale and rotate windows or the complete screen. Also, transforming single quads can be done completely without knowing anything about OpenGL.

        if( effects->compositingType() == OpenGLCompositing )
        {
            glPushAttrib( GL_CURRENT_BIT | GL_ENABLE_BIT );
            glEnable( GL_BLEND );
            glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
            glColor4f( color.red() / 255.0f, color.green() / 255.0f, color.blue() / 255.0f, alpha );
            glBegin( GL_QUADS );
            foreach( const QRect &r, paintRegion.rects() )
            {
                glVertex2i( r.x(), r.y() );
                glVertex2i( r.x() + r.width(), r.y() );
                glVertex2i( r.x() + r.width(), r.y() + r.height() );
                glVertex2i( r.x(), r.y() + r.height() );
            }
            glEnd();
            glPopAttrib();
        }

We check if KWin uses OpenGL as a backend.
We enable blending in the OpenGL state machine (needed to have translucent colors) and set the color for our rects. OpenGL clamps colors to the range [0,1]; that's why we can't use the values from QColor directly. Last but not least, we paint one quad for each rect of our region.

Now just the XRender part is missing. This part is taken from the show paint effect - I don't know anything about XRender ;-)

        if( effects->compositingType() == XRenderCompositing )
        {
            XRenderColor col;
            col.alpha = int( alpha * 0xffff );
            col.red = int( alpha * 0xffff * color.red() / 255 );
            col.green = int( alpha * 0xffff * color.green() / 255 );
            col.blue = int( alpha * 0xffff * color.blue() / 255 );
            foreach( const QRect &r, paintRegion.rects() )
                XRenderFillRectangle( display(), PictOpOver, effects->xrenderBufferPicture(),
                                      &col, r.x(), r.y(), r.width(), r.height() );
        }
    }
}

This does the same as the OpenGL part, just with XRender. Last but not least, we have to track the window resizing:

void ResizeEffect::windowUserMovedResized( EffectWindow* w, bool first, bool last )
{
    if( first && last )
    {
        // not interested in maximized
        return;
    }
    if( first && w->isUserResize() && !w->isUserMove() )
    {
        m_active = true;
        m_resizeWindow = w;
        m_originalWindowRect = w->geometry();
        w->addRepaintFull();
    }
    if( m_active && w == m_resizeWindow && last )
    {
        m_active = false;
        m_resizeWindow = NULL;
        effects->addRepaintFull();
    }
}

} // namespace

And that's all. When a resize event is started we activate the effect and trigger a repaint of the window (probably not needed, but it doesn't hurt). When the resizing is finished we deactivate the effect and trigger another repaint of the complete screen, just to make sure that there are no artefacts left. The CMakeLists.txt could just be taken from any other effect and adjusted.
So here's the example:

#######################################
# Effect

# Source files
set( kwin4_effect_builtins_sources ${kwin4_effect_builtins_sources}
    resize/resize.cpp )

# .desktop files
install( FILES
    resize/resize.desktop
    DESTINATION ${SERVICES_INSTALL_DIR}/kwin )

#######################################
# Config

Now you can compile and try your new effect.
https://techbase.kde.org/index.php?title=User:Mgraesslin/Effects&oldid=50787
What is the best way (or what would you suggest) to do the following thing in Python:

I run several simulations in Python and I want to store some results (each simulation in a new sheet, or a new txt file). Example:

for model in simmodels:
    result = simulate(model)
    speed = result["speed"]
    distance = result["distance"]

Please check this link - CSV Module. CSV files can be opened in Excel also. Please check the below code:

import csv

speed = [10, 20]
dist = [100, 200]

f = open("out.csv", 'wb')  # Python 2; in Python 3, use open("out.csv", 'w', newline='')
try:
    writer = csv.writer(f)
    writer.writerow(('Speed', 'Distance'))
    for i in range(2):
        writer.writerow((speed[i], dist[i]))
finally:
    f.close()

Content of the out.csv file:

Speed,Distance
10,100
20,200

Also, if you want the exact Excel format, you can set the dialect as below:

writer = csv.writer(f, dialect='excel')

Please check the dialect section in the shared link for more info. Finally, it can be used as:

import csv

f = open("out.csv", 'wb')
writer = csv.writer(f, dialect='excel')
writer.writerow(('Speed', 'Distance'))
for model in simmodels:
    result = simulate(model)
    speed = result["speed"]
    distance = result["distance"]
    writer.writerow((speed, distance))
f.close()
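Since the question asks for one output per simulation, here is a variation of the same approach (a Python 3 sketch; `simulate` and `simmodels` are stand-ins for the asker's own objects) that writes each model's results to its own CSV file using csv.DictWriter, so the column names come straight from the result dict:

```python
import csv

def simulate(model):
    # Stand-in for the asker's real simulation function
    return {"speed": 10 * model, "distance": 100 * model}

simmodels = [1, 2, 3]

for model in simmodels:
    result = simulate(model)
    # One output file per simulation run, named after the model
    with open("result_%d.csv" % model, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["speed", "distance"])
        writer.writeheader()
        writer.writerow(result)
```

The `with` block closes each file automatically, and `newline=""` avoids the blank-line problem the csv module documents on Windows.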
https://codedump.io/share/utwhnFEqPYrh/1/python-write-data-to-file-excel-or-txt-with-field-names
Hi all,

It's been several years since I've posted on here, after a break from any AVR/programming work. After much searching, I can't seem to find quite the same problem in the forums, despite many threads about string corruption.

I've been working on a large project for several weeks, and in the last few days have run into one problem in particular I can't seem to get around. I'm using strings in my application, written out onto a SPI TFT screen - where I'm displaying fixed text, variable text, changing colours etc., all of which is working just fine. These strings are accessed through pointers. My string is defined in a .c file as:

char *str;

and declared in a header file as:

extern char *str;

The string is then used as below; display_goto_xy() and display_write_string() are my functions which deal with all the low-level SPI routines that write data out to the display. Wherever I use the below, whatever I place in "xxxxxxxxxxx" gets written out onto my display with no problems at all. Note I haven't written in any PROGMEM functionality yet - before I get bashed for RAM usage of fixed strings, it's on the to-do list.

display_goto_xy(3, cursor_y);
str = "xxxxxxxxxxx";
display_write_string(str, BLACK);

However, problems occur when I need to write a variable out, in particular when I need to convert an integer to a string and then put it onto the display. So below, I'm clearing the previously displayed text by writing a string of spaces out onto the display (which works just fine in the above code without using itoa). Then I'm using itoa to convert a variable into a string, and push that out onto the display.

display_goto_setpoint(x, y);
str = "    ";
display_write_string(str, BLACK);

display_goto_setpoint(x, y);
itoa(variable, str, 10);
display_write_string(str, BLACK);

However, the problem I'm getting is that the displayed string seems to still contain old data.
For example, if I write a value of 12 to the display (which displays fine), then 1023 (which displays fine), then 0, what I see on the display is 0023, despite having already cleared the previously displayed data. The display is clearing fine; if I pause before writing the converted integer data, the cleared screen is indeed displayed - so the routines writing to the display are working just fine.

What am I possibly doing wrong? When I write directly into the string, I don't have to 'clear' the contents of the array first; when using itoa(), do I have to empty out the array first? Note I'm using dtostrf() in place of itoa() where I need to convert float data to strings, and I get exactly the same issue as above...

dtostrf(float_data, 5, 1, str); // convert degrees C float into a string

Many thanks for your help in advance; I've gotten nothing but good replies in the past here.

EDIT - I should add, this is on an ATmega1284P @ 16MHz (large RAM requirement application for ADC data buffers before writing to SD), custom board. The entire application is written in C, using the avr-gcc toolchain inside AS7.

Thanks

---

Life is less confusing if you declare your buffer as an array instead of an initialised pointer. You can predict the maximum width needed for an int16_t or int32_t, but f-p is not so easy. If you exceed the array bounds of your buffer, sh*t happens.

David.

---

The problem is that you're not providing any storage for itoa to write to.

char *str;

This doesn't declare a string. It declares (and defines) a pointer to a char. It doesn't define anywhere for the storage of a string. The "xxxxxxxxxxxx" string literal creates a null-terminated string, stored somewhere in memory, and evaluates to a pointer to the start of this string. So you now have the start of this string in str. So far so good. Although you could just as well write:

display_write_string("xxxxxxxxxxx", BLACK);

itoa(variable, str, 10);

This writes the result of itoa starting at wherever str is pointing.
But at this point, str is pointing to the last string literal that you assigned to str, and so itoa will be trying to write to that location. It is undefined behaviour to try and overwrite a string literal. Imagine if your string literal were located somewhere read-only (like in flash): what is supposed to happen then? It can't work. If the string literal is in RAM then it might actually work, assuming it doesn't overwrite something else important, but it is still wrong. You need to provide some other storage for itoa to write to:

char buf[10]; // make sure it is big enough for whatever will be put in it

itoa(variable, buf, 10);
display_write_string(buf, BLACK);

You don't need to initialise or clear buf before itoa.

EDIT: bear in mind that if you have the same string literal dotted throughout the code:

str = "    ";
/* stuff */
str = "    ";
/* stuff */

etc., then the compiler can be smart and create only one copy of this string literal, so each use of it will refer to the same address. So if your code modifies the contents, the next time that string literal is used it may no longer contain what you expect. I suspect that is what is happening in your case.
Add another space to the second assignment and you will probably just get a reuse of the previous string in addition to another char (prepended to original string, so address is -1 of previous string). #include <stdlib.h> char* str; int main(){ str = " "; //data address happens to be 0x100 in this example, and can see the initial data- 0x20 0x20 ... 0x00 //breakpoints on the nop's asm("nop"); itoa(1023, str, 10); asm("nop"); //str = 0x31 0x30 0x32 0x33 0x00 0x20 0x20 0x00 itoa(0, str, 10); asm("nop"); //str = 0x30 0x00 0x32 0x33 0x00 0x20 0x20 0x00 str = " "; //now we think we are getting a string of spaces //now notice *str is NOT a string of spaces, but is simply pointing to the previous string (address) //in data/ram- which has already been modified by itoa, you are no longer getting your string of spaces asm("nop"); //str = 0x30 0x00 0x32 0x33 0x00 0x20 0x20 0x00 } Top - Log in or register to post comments I'd point out, "you now use itoa on that string" is not actually fine. That's writing to a literal and is undefined behavior. On a lot of systems, that will fail. It might kill the program. It might create VERY strange behaviors later. Who knows? Top - Log in or register to post comments Well I just had to test that on Windows; with this result: I guess the_real_seebs is right; that would occur on a lot of systems. Top - Log in or register to post comments >I'd point out, "you now use itoa on that string" is not actually fine. It has a defined behavior in this case even though its undefined, and the results are fine in the limited sense that itoa is happy to use that chunk of ram in this case. Without a compiler moving a string to flash in an mcu (not likely unless you specify), it may work but there are too many variables in play to count on anything. The compiler may also combine strings when it can, so you can also end up with overlapping strings that you think are separate entities. 
In a pc, it will most likely get moved to the read-only section, and is the cause of problems when trying to write to its address. The bottom line, since there are much better ways to do these things there is no need to mess around with trying to write to string literal addresses. The first item to add on the lcd function list would be a function to just clear a line, clear display, clear whatever. No need to keep a string/array of spaces in any form to do this. The next step is to just create/use a local buffer (stack) for itoa/sprintf/whatever use. No need for a global char pointer, and no need to try writing to string literals. Top - Log in or register to post comments I must be getting old. Why wouldn't you do this the way we've been doing it for the last 3 or 4 decades?... or maybe just: The only reason I can think of for creating a separate pointer to the storage area is in the scenario: Top - Log in or register to post comments Yes, it might work with some compilers on some targets It seems foolish to rely on one particular AVR compiler putting anonymous strings into SRAM. Especially when it is so straightforward to just declare str[8] as a local (or a global) It makes no difference to how you actually use itoa() or dtostrf() But whatever you do, you must be sure that the buffer array is big enough to contain all the letters with a terminating NUL. David. Top - Log in or register to post comments Hi gents, Thanks very much for your help and apologies for the late reply, the day job and Christmas has kept me most busy. I've since purchased K&R, refreshed on strings, read and re-read your replies and re-written my above code. Removing strings as pointers, and just using regular char arrays has a) simplified the code significantly and b) entirely resolved the issue I was having. I had no idea that in effect I was potentially attempting to write to read-only memory. 
Ref the comments about writing out strings of spaces to clear previously lines, this is only because I haven't written the function to clear out a set number of chars on a given line yet - I won't be clearing with spaces forever, but thanks for the feedback. Thanks all Top - Log in or register to post comments Ah, Clawson - I know what I meant to ask on this one... Your example of string literal assignment won't compile for me, due to the empty index brackets. Am I missing something, or was that just a typo? Above results in 'expected expression before ']' token' compiler error. EDIT - Entirely just answered my own question reading back through K&R. The above line defines the array of chars, char str[], brackets being empty as str will just be sized appropriately for the string between quotes, only that you can't then write to str with another string literal. Any further write to str must be either zeroed, or by index. Top - Log in or register to post comments Yes, this is a specific use of a string literal where it is used to provide the initialiser list for the array char str[] = "abcd"; is same as char str[] = {'a', 'b', 'c', 'd', '\0'}; Not directly, no, str is just an array of char, and you can't assign another value to the array as a whole. You could use a function like strcpy strcpy(str, "1234"); Top - Log in or register to post comments Top - Log in or register to post comments The latter is what failed to compile. Initialization is not the same as assignment. Iluvatar is the better part of Valar. Top - Log in or register to post comments Indeed, initialization compiles just fine - only my poor and illegal attempt at assignment that wouldn't compile. Having reworked everything to work as I expected, I seem to however have re-introduced a similar problem somehow. 
I've written my own library to display large scaled integers (to avoid floating point math) as if they were really float values, which works very nicely with an enormous reduction in clock cycles to achieve the same result. However, all my other displayed values mirror the first sampled and converted value, but only 1 second later (which is the refresh period of the display), so I'm sure I've simply made a far more fundamental mistake somewhere... If I'm truly stuck I'll post something here. Thanks again all Top - Log in or register to post comments
https://www.avrfreaks.net/comment/2809686
30 July 2010 10:37 [Source: ICIS news] (adds comments from traders)

"It's not mandatory to shut the cracker in August," he said. At this juncture, "if the turnaround of cracker No 2 were to be postponed, this will be bullish news for the market," said a trader in

On signs of an improving market, the spread between first-half September and October contracts narrowed to -50 cents/tonne on Friday from -$6/tonne in the previous week, ICIS data showed.

Meanwhile, the refinery houses three 180,000 bbl/day crude units. "We have restarted a crude unit. There is no output yet," Lin said, adding that another 180,000 bbl/day crude unit in Mailiao would be restarted next week. The company would also restart its two 84,000 bbl/day residual fluid catalytic crackers and an unaffected 80,000 bbl/day desulphuriser at the site, Lin said.

An oil leakage led to a fire on 25 July
http://www.icis.com/Articles/2010/07/30/9380735/taiwans-formosa-to-restart-no-1-cracker-in-end-sepearly-oct.html
The question "who are you" sounds pretty simple, right? Well, possibly not where philosophy is concerned, and neither is it where databases are concerned. But user management is essential for anyone managing databases. In this tutorial, learn how SQL Server user management works - and how to configure it in the right way.

SQL Server user management: the authentication process

During the setup procedure, you have to select a password, which is used by the SQL Server authentication process. This database engine comes from Windows, where it is tightly connected with Active Directory and internal Windows authentication. At this phase of development, SQL Server on Linux only supports SQL authentication.

SQL Server has a very secure entry point. This means no access without the correct credentials. Every information system has some way of checking a user's identity, but SQL Server has three different ways of verifying identity, and the ability to select the most appropriate method, based on individual or business needs.

When using SQL Server authentication, logins are created on SQL Server. Both the user name and the password are created by using SQL Server and stored in SQL Server. Users connecting through SQL Server authentication must provide their credentials every time they connect (the user name and password are transmitted over the network).

Note: When using SQL Server authentication, it is highly recommended to set strong passwords for all SQL Server accounts.

As you'll have noticed, so far you have not had any problems accessing SQL Server resources. The reason for this is very simple: you are working under the sa login. This login has unlimited SQL Server access. In some real-life scenarios, sa is not something to play with, and it is good practice to create a login under a different name with the same level of access. Now let's see how to create a new SQL Server login. But first, we'll check the list of current SQL Server logins. To do this, access the sys.sql_logins system catalog view and three of its attributes: name, is_policy_checked, and is_expiration_checked. The name attribute is self-explanatory; the second one shows whether the password policy is enforced for the login; and the third one shows whether account expiration is enforced. The latter two attributes have a Boolean type of value: TRUE or FALSE (1 or 0).
To do this, access the sys.sql_logins system catalog view and three attributes: name, is_policy_checked, and is_expiration_checked. The attribute name is clear; the second one will show the login enforcement password policy; and the third one is for enforcing account expiration. Both attributes have a Boolean type of value: TRUE or FALSE (1 or 0). - Type the following command to list all SQL logins: 1> SELECT name, is_policy_checked, is_expiration_checked 2> FROM sys.sql_logins 3> WHERE name = 'sa' 4> GO name is_policy_checked is_expiration_checked -------------- ----------------- --------------------- sa 1 0 (1 rows affected) 2. If you want to see what your password for the sa login looks like, just type this version of the same statement. This is the result of the hash function: 1> SELECT password_hash 2> FROM sys.sql_logins 3> WHERE name = 'sa' 4> GO password_hash ------------------------------------------------------------- 0x0200110F90F4F4057F1DF84B2CCB42861AE469B2D43E27B3541628 B72F72588D36B8E0DDF879B5C0A87FD2CA6ABCB7284CDD0871 B07C58D0884DFAB11831AB896B9EEE8E7896 (1 rows affected) 3. Now let’s create the login dba, which will require a strong password and will not expire: 1> USE master 2> GO Changed database context to 'master'. 1> CREATE LOGIN dba 2> WITH PASSWORD ='S0m3c00lPa$$', 3> CHECK_EXPIRATION = OFF, 4> CHECK_POLICY = ON 5> GO 4. Re-check the dba on the login list: 1> SELECT name, is_policy_checked, is_expiration_checked 2> FROM sys.sql_logins 3> WHERE name = 'dba' 4> GO name is_policy_checked is_expiration_checked ----------------- ----------------- --------------------- dba 1 0 (1 rows affected) Notice that dba logins do not have any kind of privilege. Let’s check that part. First close your current sqlcmd session by typing exit. Now, connect again but, instead of using sa, you will connect with the dba login. After the connection has been successfully created, try to change the content of the active database to AdventureWorks. 
This process, based on the login name, should look like this:

[email protected]:~> sqlcmd -S suse -U dba
Password:
1> USE AdventureWorks
2> GO
Msg 916, Level 14, State 1, Server tumbleweed, Line 1
The server principal "dba" is not able to access the database "AdventureWorks" under the current security context

As you can see, the authentication process alone will not grant you anything. Simply put, you can enter the building, but you can't open any door. You will need to pass the process of authorization first.

Authorization process

After authenticating a user, SQL Server will then determine whether the user has permission to view and/or update data, view metadata, or perform administrative tasks (at the server-side level, the database-side level, or both). If the user, or a group of which the user is a member, has some type of permission within the instance and/or specific databases, SQL Server will let the user connect. In a nutshell, authorization is the process of checking user access rights to specific securables.

In this phase, SQL Server checks the login's permissions to determine whether there are any access rights at the server and/or database level. A login can pass authentication successfully, yet have no access to the securables. This means that authentication is just one step before a login can proceed with any action on SQL Server. SQL Server checks authorization on every T-SQL statement. In other words, if a user has SELECT permissions on some database, SQL Server will not check once and then forget until the next authentication/authorization process; every statement is verified against the current permissions to determine whether there have been any changes.

Permissions are the set of rules that govern the level of access that principals have to securables. Permissions in an SQL Server system can be granted, revoked, or denied. Each of the SQL Server securables has associated permissions that can be granted to each principal.
The only way a principal can access a resource in an SQL Server system is if it is granted permission to do so. At this point, it is important to note that authentication and authorization are two different processes, but they work in conjunction with one another. Furthermore, the terms login and user are to be used very carefully, as they are not the same:

- Login is the authentication part
- User is the authorization part

Prior to accessing any database on SQL Server, the login needs to be mapped to a user. Each login can have one or many user instances in different databases. For example, one login can have read permission in AdventureWorks and write permission in WideWorldImporters. This type of granular security is a great SQL Server feature. A login name can be the same as or different from a user name in different databases.

In the following lines, we will create a database user dba based on the login dba. The process will be based on the AdventureWorks database. After that, we will try to enter the database and execute a SELECT statement on the Person.Person table:

[email protected]:~> sqlcmd -S suse -U sa
Password:
1> USE AdventureWorks
2> GO
Changed database context to 'AdventureWorks'.
1> CREATE USER dba
2> FOR LOGIN dba
3> GO
1> exit
[email protected]:~> sqlcmd -S suse -U dba
Password:
1> USE AdventureWorks
2> GO
Changed database context to 'AdventureWorks'.
1> SELECT *
2> FROM Person.Person
3> GO
Msg 229, Level 14, State 5, Server tumbleweed, Line 1
The SELECT permission was denied on the object 'Person', database 'AdventureWorks', schema 'Person'

We are making progress. Now we can enter the database, but we still can't execute SELECT or any other SQL statement. The reason is very simple: our dba user is still not authorized to access any type of resource.

Schema separation

In Microsoft SQL Server, a schema is a collection of database objects that are owned by a single principal and form a single namespace.
All objects within a schema must be uniquely named, and a schema itself must be uniquely named in the database catalog. SQL Server (since version 2005) breaks the link between users and schemas. In other words, users do not own objects; schemas own objects, and principals own schemas. Users can have a default schema assigned using the DEFAULT_SCHEMA option of the CREATE USER and ALTER USER commands. If a default schema is not supplied for a user, then dbo will be used as the default schema.

If a user needs to access objects in a schema other than their default schema, then the user will need to type the full name. For example, Denis needs to query the Contact table in the Person schema, but his default schema is Sales. To resolve this, he would type:

SELECT * FROM Person.Contact

Keep in mind that the default schema is dbo. When database objects are created and not explicitly put in schemas, SQL Server assigns them to the dbo default database schema. Therefore, there is no need to type dbo, because it is the default schema.

You have just read a book excerpt from SQL Server on Linux, written by Jasmin Azemović. From this book, you will be able to recognize and utilize the full potential of setting up an efficient SQL Server database solution in the Linux environment.
https://hub.packtpub.com/sql-server-user-management/
Using Strongly Typed Views (6:46) with James Churchill

In this video, we'll see how "strongly typed" views offer an alternative to using the ViewBag object to pass data from our controller to our view.

- 0:00 Remember how I promised to show you an alternative to using ViewBag
- 0:05 to pass data from our controller to our view?
- 0:08 In this video,
- 0:08 I'll follow up on that promise by showing you how to use strongly typed views.
- 0:15 In the Controllers folder, open the ComicBooksController.cs file.
- 0:21 First let's start by instantiating our ComicBook model object.
- 0:30 Visual Studio doesn't like the class name ComicBook.
- 0:34 If we hover our mouse pointer over the offending code,
- 0:38 we'll see the error information.
- 0:40 The type or namespace name ComicBook cannot be found.
- 0:44 Are you missing a using directive or an assembly reference?
- 0:49 We've seen this error before.
- 0:52 Click on the lightbulb icon and
- 0:53 select the first action in the list to add the missing using directive.
- 0:58 Okay, now our project is compiling again.
- 1:02 There are two different approaches that we can use to set our model
- 1:05 instance property values.
- 1:07 We can use our comic book variable to set our properties,
- 1:11 by typing a dot after the variable name followed by the name of the property.
- 1:15 Or we can set the property values by adding a set of curly braces
- 1:20 after the call to the ComicBook constructor
- 1:22 followed by a list of the properties that we want to set.
- 1:26 This approach is called object initializer syntax, and
- 1:30 it's what I'll use to set our model instance property values.
- 1:34 Let's move each of the ViewBag property values to the corresponding properties
- 1:38 on the model instance.
- 1:40 SeriesTitle goes to SeriesTitle,
- 1:45 IssueNumber we can set to the value 700,
- 1:53 Description goes to, no not description,
- 1:58 we changed that property name to DescriptionHTML.
- 2:07 The artist property is going to take a little more work as we change
- 2:12 the array element data type from string to artist.
- 2:15 We can start by instantiating our array,
- 2:20 then we can add an artist model instance with empty name and
- 2:25 role properties for each element that we need to
- 2:30 add to the array for a total of five elements.
- 2:34 To finish, we can copy and paste the name and
- 2:37 role values from the ViewBag Artists array, up to our model array objects.
- 2:43 I'll collapse the solution explorer panel to give us more room.
- 2:46 Script.
- 2:54 Pencils.
- 2:59 Inks.
- 3:05 Colors.
- 3:10 And lastly Letters.
- 3:17 Now we need a way to pass our comic book model instance to our view.
- 3:22 We could use the ViewBag object again like this.
- 3:30 But now we know that's not the optimal approach.
- 3:34 Instead, let's pass the comic book model instance into our view method call.
- 3:39 By doing this, we can now update our view to be strongly typed.
- 3:44 A strongly typed view is an MVC view that is associated with a specific type.
- 3:50 A strongly typed view exposes the model instance through its model property.
- 3:55 Let's see how to do this.
- 3:57 Using the solution explorer, open the comic book detail view.
- 4:03 To make our views strongly typed,
- 4:05 we just need to add a model directive to the top of the view.
- 4:08 Type an @ symbol followed by the word model with a lowercase m, followed
- 4:15 by a space and the fully qualified namespace for our comic book model class.
- 4:21 Now we can replace all of the ViewBag property references
- 4:25 with the corresponding model properties.
- 4:28 Both SeriesTitle and IssueNumber are easy to update, just change ViewBag to Model.
- 4:37 Notice that when we replace ViewBag with Model for
- 4:40 the description property reference,
- 4:42 Visual Studio is able to tell us that it can't find that property on our model.
- 4:47 That's right, we renamed the description property to DescriptionHTML.
- 4:57 Lastly, update the artist property references.
- 5:06 Go ahead and save the view and run the website.
- 5:10 Let's zoom in by pressing control plus.
- 5:14 Something went wrong with our list of artists.
- 5:18 That's right.
- 5:19 I forgot that we changed our artists array from a list of string values
- 5:24 to a list of artist objects.
- 5:26 Let's update our loop to take advantage of the new artist model properties.
- 5:31 We can do that by rendering the artist's Role property, followed by a colon and
- 5:36 a space, then another at symbol followed by the artist.Name property.
- 5:43 See how easily Razor transitions from the Role property
- 5:46 to the colon separating the role and the name?
- 5:49 Razor is so cool.
- 5:51 Let's make one last change to our view.
- 5:53 Instead of setting the ViewBag Title property to the string literal comic
- 5:59 book detail, let's use the comic book display text property value.
- 6:04 Save the view and refresh the page.
- 6:09 Now our Artists list is displaying correctly, and
- 6:12 we're seeing the comic book display text in the browser tab.
- 6:16 Nice.
- 6:18 Go ahead and close Chrome and stop the website.
- 6:22 If you're using GitHub, let's commit our changes.
- 6:26 Enter a commit message of Updated Comic Book Detail
- 6:32 view to be strongly typed and click the Commit All button.
- 6:39 In the next video, we'll work on improving the layout of our Comic Book Detail View.
- 6:44 See you then.
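The steps narrated in the transcript can be sketched roughly as follows. The class and property names (ComicBook, SeriesTitle, IssueNumber, DescriptionHTML, Artists) come from the video, but the sample values, the namespace, and the exact markup are assumptions, not the course's actual code:

```csharp
// Controller (sketch): build the model and pass it to View()
// instead of stuffing values into ViewBag.
public ActionResult Detail()
{
    var comicBook = new ComicBook()
    {
        SeriesTitle = "The Amazing Spider-Man",  // sample value
        IssueNumber = 700,
        DescriptionHTML = "<p>Final issue!</p>", // sample value
        Artists = new Artist[]
        {
            new Artist() { Name = "Dan Slott", Role = "Script" },
            new Artist() { Name = "Humberto Ramos", Role = "Pencils" }
        }
    };
    return View(comicBook);
}
```

And the strongly typed view, with the model directive at the top and the Role/Name loop described at 5:31 (the namespace here is hypothetical):

```
@model ComicBooksGallery.Models.ComicBook
<h1>@Model.SeriesTitle #@Model.IssueNumber</h1>
@foreach (var artist in Model.Artists)
{
    <div>@artist.Role: @artist.Name</div>
}
```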
https://teamtreehouse.com/library/using-strongly-typed-views
I want to use regions for code folding in Eclipse; how can that be done in Java?

#region name
//code
#endregion

There's no such standard equivalent. Some IDEs – IntelliJ, for instance, or Eclipse – can fold depending on the code types involved (constructors, imports etc.), but there's nothing quite like #region.

JetBrains IDEA has this feature. You can use the "surround with" hotkey for that (Ctrl+Alt+T). It's just an IDEA feature. Regions there look like this:

//region Description
Some code
//endregion

With Android Studio, try this:

//region VARIABLES
private String _sMyVar1;
private String _sMyVar2;
//endregion

Careful: no blank line after //region.

There is no equivalent in the language; it is based on IDEs. For example, NetBeans/Creator supports this syntax:

// <editor-fold ...
// </editor-fold>

For the Eclipse IDE, the Coffee-Bytes plugin can do it; the download link is here. EDIT: The latest information about Coffee-Bytes is here.

A custom code folding feature can be added to Eclipse using the CoffeeScript code folding plugin. This is tested to work with Eclipse Luna and Juno. Here are the steps:

1. Download the plugin from here
2. Extract the contents of the archive
3. Copy the contents of the plugin and features folders to the same-named folders inside the Eclipse installation directory
4. Restart Eclipse
5. Navigate Window > Preferences > Java > Editor > Folding > Select folding to use: Coffee Bytes Java > General tab > tick the checkboxes in front of User Defined Fold
6. Create a new region as shown
7. Try out whether folding works with comments prefixed with the specified starting and ending identifiers

You can download the archive and find the steps at this blog also.

This is more of an IDE feature than a language feature. NetBeans allows you to define your own folding definitions using the following definition:

// <editor-fold ...any code...
// </editor-fold>

As noted in the article, this may be supported by other editors too, but there are no guarantees.
The fastest way in Android Studio (or IntelliJ IDEA): highlight the code you want to surround, press Ctrl+Alt+T, choose C, then enter the description. Enjoy.

Contrary to what most are posting, this is NOT an IDE thing. It is a language thing. #region is a C# statement.

The best way:

//region DESCRIPTION_REGION
int x = 22;
// Comments
String s = "SomeString";
//endregion;

Tip: Put ";" at the end of the "endregion".

If anyone is interested, in Eclipse you can collapse all your methods etc. in one go: just right-click where you'd normally insert a breakpoint and click Folding > Collapse All. I know it's not an answer to the question, but it provides an alternative for quick code folding.

#region
// code
#endregion

really only gets you any benefit in the IDE. With Java, there's no set standard IDE, so there's really no standard parallel to #region.

I was coming from C# to Java and had the same problem, and the best and exact alternative for regions is something like the below (working in Android Studio; don't know about IntelliJ):

//region [Description]
int a;
int b;
int c;
//endregion

The shortcut is as below:

1. Select the code
2. Press Ctrl+Alt+T
3. Press C and write your description

I usually need this for commented code, so I use curly brackets at the start and end of it:

{
// Code
// Code
// Code
// Code
}

It can be used for code snippets but can create problems in some code, because it changes the scope of variables.

Actually johann, the # indicates that it's a preprocessor directive, which basically means it tells the IDE what to do. In the case of using #region and #endregion in your code, it makes NO difference in the final code whether it's there or not. Can you really call it a language element if using it changes nothing?
Apart from that, Java doesn't have preprocessor directives, which means the option of code folding is defined on a per-IDE basis; in NetBeans, for example, with a //<code-fold> statement.

In Eclipse you can collapse the brackets wrapping a variable region block. The closest is to do something like this:

public class counter_class {
    {
        // Region
        int variable = 0;
    }
}

Just install and enable the Coffee-Bytes plugin (Eclipse).

There is some option to achieve the same; follow the below points:

1. Open Macro Explorer
2. Create a new macro
3. Name it "OutlineRegions" (or whatever you want)
4. Right-click the "OutlineRegions" macro (showing in Macro Explorer), select the "Edit" option and paste the following VB code into it:

Imports System
Imports EnvDTE
Imports EnvDTE80
Imports EnvDTE90
Imports EnvDTE90a
Imports EnvDTE100
Imports System.Diagnostics
Imports System.Collections

Public Module OutlineRegions

(Type "Macro" into the text box; it will suggest the macro names, choose yours.)

7. Now, in the text box under "Press shortcut keys", you can enter the desired shortcut. I use Ctrl+M+N.

Use:

return {
    // Properties
    //#region
    Name: null,
    Address: null
    //#endregion
}

8. Press the saved shortcut key and see the below result.
https://exceptionshub.com/java-equivalent-to-region-in-c.html
NAME

Bio::Phylo::Parsers::Nhx - Parser used by Bio::Phylo::IO, no serviceable parts inside

DESCRIPTION

This module parses "New Hampshire eXtended" (NHX) tree descriptions in parenthetical format. The node annotations are stored as meta annotations in the namespace whose reserved prefix, nhx, is associated with the NHX URI. This means that after this parser is done, you can fetch an annotation value thusly:

my $gene_name = $node->get_meta_object( 'nhx:GN' );

This parser is called by the Bio::Phylo::IO facade; don't call it directly. In turn, this parser delegates processing of Newick strings to Bio::Phylo::Parsers::Newick, so the behavior of that parser applies here as well. Note that the flag -ignore_comments, which is optional for the Newick parser, cannot be used. This is because NHX embeds its metadata in what are normally comments (i.e., square brackets), so these must be processed in a special way.

SEE ALSO

There is a mailing list for any user or developer questions and discussions.

Bio::Phylo::IO - The NHX parser is called by the Bio::Phylo::IO object. Look there to learn how to parse Newick strings.
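Tying the pieces together, a round trip through the facade might look like the sketch below. It assumes a working Bio::Phylo installation; the NHX string and the GN tag values are made up for illustration, and the exact accessor behavior may vary by version:

```perl
use strict;
use warnings;
use Bio::Phylo::IO 'parse';

# A small NHX string: the [&&NHX:...] blocks carry the annotations.
my $nhx = '((A[&&NHX:GN=geneA],B[&&NHX:GN=geneB]),C[&&NHX:GN=geneC]);';

# parse() is the facade entry point; the format name selects this parser.
my $forest = parse( -format => 'nhx', -string => $nhx );
for my $tree ( @{ $forest->get_entities } ) {
    for my $tip ( @{ $tree->get_terminals } ) {
        # The annotation surfaces under the reserved nhx prefix.
        printf "%s => %s\n", $tip->get_name, $tip->get_meta_object('nhx:GN');
    }
}
```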
https://metacpan.org/pod/Bio::Phylo::Parsers::Nhx
Hello all. I'm working with a Processing-Arduino interface, but this is more of a programming question. The Arduino is receiving data via SMS, and my goal is to parse and save the data using Processing and make a real-time data visualization at the same time. Right now I don't have the admin rights to change the program on the Arduino so that data parsing could be easier.

Things successfully done:

1. Data parsing with logged data. CoolTerm was used to log the data, then it was converted to a .csv file. The original data is space-delimited. Data parsing was done in Processing. Here is my code, based on programs by cool programmers :)

import java.io.*;

String addCo2Alas = "F:\\co2_alas.csv";
String addCo2Isle = "F:\\co2_isle.csv";
String[] lines;
String a = "mcAlas";
String m = "mcisle";

void setup() {
  size(600, 600);
  lines = loadStrings("co2.csv");
  for (int i = 0; i < lines.length; i++) {
    float co2Gas, airTemp, lakeTemp7, lakeTemp50;
    float laLevel;
    String date, time, station;
    if (lines[i] != null) {
      String[] val = splitTokens(lines[i], " ,");
      if (val[6].equals(m)) {
        co2Gas = float(val[0]);
        airTemp = float(val[1]);
        lakeTemp7 = float(val[2]);
        lakeTemp50 = float(val[3]);
        date = val[4];
        time = val[5];
        station = val[6];
        appendData(addCo2Isle, date+" "+time+","+co2Gas+","+airTemp+","+lakeTemp7+","+lakeTemp50);
      } else if (val[6].equals(a)) {
        laLevel = float(val[0]);
        airTemp = float(val[1]);
        lakeTemp7 = float(val[2]);
        lakeTemp50 = float(val[3]);
        date = val[4];
        time = val[5];
        station = val[6];
        appendData(addCo2Alas, date+" "+time+","+laLevel+","+airTemp+","+lakeTemp7+","+lakeTemp50);
      }
    }
  }
}

void draw() {
}

void appendData(String filename, String text) {
  File f = new File(dataPath(filename));
  // if (!f.exists()) {
  //   createFile(f);
  // }
  try {
    PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter(f, true)));
    out.println(text);
    out.close();
  } catch(IOException e) {
    e.printStackTrace();
  }
}

2. Data visualization based on the example from the book 'Distributed Network Data' (page 121), connected with the SMS receiver serially. Here is my code (just checking whether the data will come in):

import processing.serial.*;
import java.io.*;
//import gnu.io.*;

Serial port;
//String addCo2Alas = "F:\\CO2\\co2_alas.csv";
//String addCo2Isle = "F:\\CO2\\co2_isle.csv";
//String addCo2Opet = "F:\\CO2\\co2_opet.csv";
//String discard = "F:\\CO2\\co2_opet.csv";
//String[] lines;
//String a = "mcAlas";
//String m = "mcisle";
//String o = "mcopet";
String[] header = {"goodnyt", "CMGF=1", "AT+CNMI=3,0,0,0,0", "OK", "AT+CMGD=1,4", "OK"};

void setup() {
  size(820, 600);
  // assign to the field; a local "Serial port" here would shadow it
  port = new Serial(this, "COM6", 9600);
  //port.write(65);
  port.bufferUntil('\n');
}

void draw() {
  background(125, 45, 76);
}

void serialEvent(Serial port) {
  String input = port.readString();
  if (input != null) {
    println(input);
    // port.clear();
  }
  port.write(65);
}

Foreseen problems:

1. In CoolTerm the data streaming is slightly different compared to what is printed in the Processing console. Is it possible to make it look like the CoolTerm streaming? Please see the attached figures.
2. I would like to start the data logging after the AT commands. What method/function should I use? Basically, I'm having problems dealing with the live streaming data.

Answers

Oh, btw, I already tried to do data parsing and data visualization using a DHT11 sensor on an Arduino, and it turned out smoothly. I didn't expect that data coming from an SMS receiver would be very different.

Cheers, Saling :)
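One way to tackle the "start logging after the AT commands" question is to drop any incoming line that matches the modem's AT-command chatter and keep only the data lines. The sketch below is plain Java rather than a Processing sketch; the header strings are taken from the code above, while the sample data line is invented for illustration:

```java
import java.util.Arrays;
import java.util.List;

public class SmsLineFilter {
    // Header/AT chatter copied from the sketch above.
    static final List<String> HEADER = Arrays.asList(
        "goodnyt", "CMGF=1", "AT+CNMI=3,0,0,0,0", "OK", "AT+CMGD=1,4");

    // A line is loggable data only if it is non-empty and not modem chatter.
    public static boolean isData(String line) {
        String t = line.trim();
        return !t.isEmpty() && !HEADER.contains(t) && !t.startsWith("AT+");
    }

    public static void main(String[] args) {
        String[] incoming = {"AT+CNMI=3,0,0,0,0", "OK",
            "389.2 21.5 14.1 9.8 25Nov2013 10:15:00 mcisle"};
        for (String line : incoming) {
            if (isData(line)) {
                System.out.println(line); // only the data line survives
            }
        }
    }
}
```

Inside serialEvent(), the same test could gate the call that appends a line to the CSV file.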
https://forum.processing.org/two/discussion/2565/data-parsing-from-serial
I am running the below code to find an element containing Unicode Arabic characters. The below code works just fine if I replace XXX with English letters; however, if I replace them with Arabic letters it won't. I checked the HTML page and it has "< meta", so I set the character set in my Python script on the first line just to make sure the letters are interpreted as expected, but it is still not working. Any clue is much appreciated. Thanks.

# coding=UTF8
from selenium import webdriver

# create a new Firefox session
driver = webdriver.Firefox()
driver.implicitly_wait(10)
driver.get("")
print driver.find_element_by_xpath(u"//*[contains(text(), 'XXX')]").text

I think you are not using the correct unicode in the xpath; check the demo in IPython here. First I selected one node to get the corresponding unicode for that Arabic word, so after using that unicode I modified the xpath as follows, and this was the output:

In [1]: response.xpath('//li[@class="lensItem"]/a/text()').extract()
Out[1]: [u'\u0639\u062f\u0633\u06cc']

In [2]: response.xpath(u'//a[contains(text(), "\u0639\u062f\u0633\u06cc")]/text()').extract()
Out[2]: [u'\u0639\u062f\u0633\u06cc', u'\u0639\u062f\u0633\u06cc', u'\u0645\u0634\u062e\u0635\u0627\u062a \u0639\u062f\u0633\u06cc \u0622\u0641\u062a\u0627\u0628\u06cc']

In [3]: a = response.xpath(u'//a[contains(text(), "\u0639\u062f\u0633\u06cc")]/text()').extract()

In [4]: for i in a:
   ...:     print i
   ...:
عدسی
عدسی
مشخصات عدسی آفتابی

Edit: I tested the xpath above using Scrapy, but this will also work with Selenium:

In [6]: driver.find_element_by_xpath(u'//a[contains(text(), "\u0639\u062f\u0633\u06cc")]').text
Out[6]: u'\u0639\u062f\u0633\u06cc'

I hope this will help you to solve your issues.
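The core of the answer is that the \uXXXX escapes and the literal Arabic string are the same Python unicode string, so either form can be interpolated into the XPath. A minimal, Selenium-free sketch (written in Python 3 syntax, unlike the Python 2 code above; the word is the one from the answer):

```python
# -*- coding: utf-8 -*-

# The escaped form and the literal form are the same unicode string.
word_escaped = u"\u0639\u062f\u0633\u06cc"
word_literal = u"عدسی"
assert word_escaped == word_literal

# Build the XPath by interpolating the unicode word, as in the answer.
xpath = u'//a[contains(text(), "{}")]'.format(word_escaped)
print(xpath)  # //a[contains(text(), "عدسی")]
```

The resulting string can then be passed to driver.find_element_by_xpath() unchanged.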
https://codedump.io/share/AQIQaHP7VVaH/1/how-to-search-for-elements-containing-unicodearabic-letters
There are many reasons that you will want to upgrade your Windows NT domains to Active Directory, not least of which is to make use of Active Directory features such as the following:

- It's possible to have significantly fewer domains in Active Directory because each domain can now store millions of objects. While fewer domains means less administration, the added benefit of using organizational units to segregate objects makes delegation within a domain possible.
- Replication moves to multimaster replication with a significantly more efficient set of replication algorithms.
- You can reduce the number of PDCs/BDCs to a smaller number of DCs through a more efficient use of DCs by clients.
- You can eliminate the need for reliance on WINS servers and move to the Internet-standard DNS for name resolution.

There are three important steps in preparing for a domain upgrade:

- Test the upgrade on an isolated network segment set aside for testing.
- Do a full backup of the SAM and all core data prior to the actual upgrade.
- Set up a fallback position in case of problems.

We cannot stress strongly enough how enlightening doing initial testing on a separate network segment can be. It can show a wide variety of upgrade problems, show you areas that you never considered, and, in cases in which you have considered everything, give you the confidence that your trial run did exactly what you expected. In the world of today's complex systems, some organizations still try to roll out operating system upgrades and patches without full testing; this is just plain daft. The first part of your plan should be a test of your upgrade plans.

When you do the domain upgrade itself, it goes without saying that you should have full backups of the Windows NT SAM and the data on the servers. You would think this is obvious, but again we have seen a number of organizations attempt this without doing backups first.

The best fallback position is to have an ace up your sleeve, and in Windows NT upgrade terms, that means you need a copy of the SAM somewhere safe.
While backup tapes are good for this, there are better solutions for rapid recovery of a domain. These recipes for success require keeping a PDC or a BDC of your domain safely somewhere. In this context, by safely we mean off the main network. Your first option is to take the PDC off the network. This effectively stores it safely in case anything serious goes wrong. Next, as your domain now has no PDC, you need to promote a BDC to be the PDC for the domain. Once that has been done successfully, and you've manipulated any other services that statically pointed at the old PDC, you can upgrade that new PDC with the knowledge that your old PDC is safe in case of problems. The second option is to make sure that an existing BDC is fully replicated, then take it offline and store it. Both solutions give you a fallback PDC in case of problems. Remember that the first domain in a forest is a significant domain and cannot be deleted. That means you cannot create a test domain tree called testdom.mycorp.com, add a completely different noncontiguous tree called mycorp.com to the same forest, and subsequently remove testdom.mycorp.com. You have to make sure that the first domain that you ever upgrade is the major or root domain for the company. In Windows NT domain model terms, that means upgrading the master domains prior to the resource domains. The resource domains may end up being Organizational Units instead anyway now, unless political, cultural, or bandwidth reasons force you to want to keep them as domains. Single Windows NT domains and complete trust domains can be upgraded with few problems. With a single domain, you have just one to convert, and with complete trust domains, every domain that you convert will still maintain a complete trust with all the others. However, when you upgrade master domains or multimaster domains, there are account and resource domains that need to be considered. 
No matter how many master domains you have, the upgrade of these domains has to be done in a certain manner to preserve the trust relationships and functionality of the organization as a whole. We'll now explain the three broad ways to upgrade your master domain structure. Let's assume that you have one or more single-master or multimaster domains that you wish to convert. Your first task will be to create the forest root domain. This domain will act as your placeholder and allow you to join the rest of the domains to it. The forest root domain can be an entirely new domain that you set up, or you can make the first domain that you migrate the forest root domain. Take a look at Figure 15-1, which shows a Windows NT multimaster domain. Each domain that holds resources trusts the domains that hold user accounts, allowing the users to log on to any of the resource domains and use the respective resources. There are three main ways to upgrade this domain. None of them is necessarily any better than the other, as each design would be based on choices that you made in your namespace design notes from Chapter 8. First, the domains could all be joined as one tree under an entirely new root. Each master domain would represent a branch under the root with each resource domain joined to one of the masters. This is shown in Figure 15-2. The second option is to aim toward making one of the master domains the root of the new tree. All resource domains could then join to the root, one of the other master domains, or one of the resource domains. Figure 15-3 shows this in more detail. Two resource domains have been joined to one of the master domains, but the third resource domain can still go to one of three parents, as indicated by the dashed lines. Finally, you could make each domain a separate tree. While the first master domain that you migrate will be the forest root domain, the rest of the master domains will simply be tree roots in their own right. 
Let's now consider the process for migrating these domains. We must migrate the master account domains first, since they are the ones that the resource domains depend on. To start the process, convert any one of the master account domains over to Active Directory by upgrading the PDC of that master domain. If any of the trust relationships have been broken between this domain and the other master and resource domains during migration, reestablish them. Once the PDC is upgraded, proceed to upgrade the other BDCs of that domain (or you can leave the domain running with Windows NT BDCs; it doesn't really matter to the rest of the migration). The next step is to migrate the other master domains. You continue in the same manner as you did with the first domain until all master domains have been converted. Once each domain is converted, you need to reestablish only trust relationships with the existing Windows NT domains; the Active Directory domains in the forest will each have hierarchical and transitive trusts automatically anyway. So now you end up with a series of Active Directory master domains in a tree/forest and a series of Windows NT resource domains with manual trusts in place. Once all the master domains are converted, you can start consolidating them (as discussed in the next section), or you can immediately convert the resource domains. Either way, once all domains are converted, you are likely to start a consolidation process to reduce the number of domains that you have in existence. Part of that consolidation will be to convert existing resource domains to Organizational Units. This is because resource domains by their very nature tend to fit in well as Organizational Units.[1] For that to happen, these future Organizational Units will need to be children of one of the migrated master or resource domains. 
It doesn't matter which master or resource domain acts as the parent, since there are consolidation tools available that allow you to move entire branches of the tree between domains. The process is simple: you take each resource domain in turn and convert it to a child domain of one of the existing Active Directory master or resource domains. Once they are all converted, you can really begin consolidation. [1] Resource domains were created because of Windows NT's inability to allow delegation of authority within a domain. Now Organizational Units provide that functionality, so separate resource domains are no longer required. Thus, old resource domains can become Organizational Units under Windows 2000 and still maintain all their functionality. Upgrading your domains is not the end of the story. Many administrators implemented multiple Windows NT domains to cope with the size constraints inherent in Windows NT domains. With Active Directory, those constraints are lifted, and each domain in a forest can easily share resources with any other domain. This allows administrators to begin removing from the directory information that has become unnecessary in an Active Directory environment. When your final Windows NT 4.0 BDC for a domain has been taken out of service or upgraded, you are ready to convert the domain to Windows 2003 functional level. After the conversion, you have some decisions to make about the groups you have in this domain. You can leave all groups as they are or start converting some or all groups to universal groups. With multiple domains in a single forest, you can consolidate groups from more than one domain together into one universal group. This allows you to combine resources and accounts from many domains into single groups. 
There are two methods for bringing these groups online:

- Setting up parallel groups
- Moving existing groups

In a parallel group setup, the idea is that the administrator sets up groups that hold the same members as existing groups. In this way, users become members of both groups at the same time, and the old group and a user's membership can be removed in a calculated manner over time.

The arguably easier solution is to move existing groups, but to do that you need to follow a series of steps. Take the following example, which leads you through what's involved. Three global groups, part_time_staff in finance.mycorp.com, part_time_staff in mktg.mycorp.com, and part_time_staff in sales.mycorp.com, need merging into one universal group, to be called part_time_staff in mycorp.com. The following is the step-by-step procedure:

1. All part_time_staff global groups are converted to universal groups in their current domains.
2. To make the part_time_staff universal group names unique so that they can all exist in one domain, each group needs to be renamed with the domain element. That means finance\part_time_staff, mktg\part_time_staff, and sales\part_time_staff become finance\finance_part_time_staff, mktg\mktg_part_time_staff, and sales\sales_part_time_staff.
3. Make use of the Windows 2003 functional level ability to move groups, and move the three groups to the mycorp.com domain. This leaves you with mycorp\finance_part_time_staff, mycorp\mktg_part_time_staff, and mycorp\sales_part_time_staff.
4. Create a new universal group called part_time_staff in the mycorp.com domain.
5. Make the mycorp\finance_part_time_staff, mycorp\mktg_part_time_staff, and mycorp\sales_part_time_staff groups members of the new mycorp\part_time_staff universal group.
6. You can then remove the three old groups as soon as it is convenient.

Remember that, while this is an easy series of steps, there may be an entire infrastructure of scripts, servers, and applications relying on these groups.
If that is the case, you will need either to perform the steps completely, modifying the infrastructure to look at the new single universal group after Step 5, or modify the groups immediately after you complete Step 2 and then again after you complete Steps 3 to 5 in quick succession. We favor the former, since it requires that the work be done once, not twice.

When it comes to considering computer accounts, things are relatively straightforward. Under Windows NT, a computer could exist in only one domain at a time, since that computer and domain required a trust relationship to be established to allow domain users to log on to the domain at that client. You could set up bidirectional trust relationships manually between domains, allowing a client in Domain A to authenticate Domain B users to Domain B, but this was not common. With Active Directory, all domains in a forest implicitly trust one another automatically. As long as the computer has a trust relationship with one domain, users from any other domain can log on to their domain via the client by default.

The following is a rough list of items to consider:

- Moving computer accounts between domains to gain better control over delegation
- Joining computers to the domain
- Creating computer groups
- Defining system policies

In all of these, it is important to understand that the source domain does not have to be at the Windows 2003 functional level to move computers to a new domain. In addition, administrators can use the NETDOM utility in the Windows Support Tools to add and remove domain computer objects/accounts; join a client to a domain; move a client between domains; verify, reset, and manage the trust relationship between domains; and so on.

While you may have had computer accounts in a series of domains before, you now can move these accounts anywhere you wish in the forest to aid your delegation of control.
Group Policy Object processing also has a significant impact on where your computer accounts should reside. However, you now can work out what sort of Organizational Unit hierarchy you would ideally wish for your computer accounts and attempt to bring this about.

Moving computers between domains is as simple as the following NETDOM command. Here we want to move a workstation or member server, called mycomputerorserver, from the domain sales.mycorp.com to the location LDAP://ou=computers,ou=finance,dc=mycorp,dc=com. We specifically want to use the myDC domain controller and the MYCORP\JOINTODOMAIN account to do the move. Connection to the client will be done with the SALES\Administrator account, which uses an asterisk (*) in the password field to indicate to prompt for the password. We could just as easily have used an account on the client itself. We also include a 60-second grace period before the client is automatically rebooted:

NETDOM MOVE mycomputerorserver /DOMAIN:mycorp.com /OU:Finance/Computers
  /UserD:jointodomain /PasswordD:thepassword /Server:myDC
  /UserO:SALES\Administrator /PasswordO:* /REBOOT:60

This is actually the long-winded version, split up onto multiple lines for visibility; here's the short form:

NETDOM MOVE /D:mycorp.com /OU:Finance/Computers /UD:jointodomain
  /PD:thepassword /S:myDC /UO:SALES\Administrator /PO:* /REB:60

Note that moving a Windows NT computer doesn't delete the original account, and moving a Windows 2000 computer just disables it in the source domain.

You also need to consider who will be able to add workstations to the domain. You can set up an account with join-domain privileges only, i.e., an account with the ability to make and break trust relationships for clients. We've used this approach with a lot of success, and it means that an administrator-equivalent user is no longer required for joining clients to a domain.
Let's take the previous example, but this time we wish to both create an account and join a new computer to the domain with that account. This is the code to do that using NETDOM:

NETDOM JOIN mycomputerorserver /D:mycorp.com /OU:Finance/Computers
    /UD:jointodomain /PD:thepassword /S:myDC
    /UO:SALES\Administrator /PO:* /REB:60

In all these NETDOM examples, we're using a specially constructed account that only has privileges to add computer objects to this specific Organizational Unit. At Leicester we precreated all the computer accounts, and the jointodomain account was used only to establish trusts between existing accounts; it had no privilege to create accounts in any way. You also need to be aware that workstation accounts under Windows NT could not go into groups. Under Active Directory, that has all changed, and you can now add computers to groups. So when moving computers between domains for whatever purpose, you now can use hierarchical Organizational Unit structures to delegate administrative/join-domain control, as well as using groups to facilitate Group Policy Object (GPO) upgrades from system policies. System policies themselves are not upgradeable. However, as explained in Chapter 7 and Chapter 10, you can use system policies with Active Directory clients and bring GPOs online slowly. In other words, you can keep your system policies going and then incrementally introduce the same functionality into GPOs. Since each part of each system policy is included in the GPO, you can remove that functionality from the system policy while still maintaining the policy's application. Ultimately, you will end up replacing all the functionality incrementally, and the system policies will have no policies left and can be deleted. When consolidating domains, you'll need at some point to move users around to better represent the organization's structure, to gain better control over delegation of administration, or for group policy reasons.
Whichever of these it is, there are useful tools to help you move users between domains. To be able to transfer users between domains, you need to have gone to Windows 2000 functional level, and this will have ditched all your Windows NT BDCs. This allows a seamless transfer of the user object, including the password. A good method for transferring users and groups so that no problems occur is as follows:

1. Transfer all the required domain global groups to the destination domain. This maintains the links to all users within the source domain, even though the groups themselves have moved.
2. Transfer the users themselves to the destination domain. The domain global group memberships are now updated with the fact that the users have joined the same domain.
3. You then can consolidate the domain global groups or move the domain global groups back out to the original domain again. This latter option is similar to Step 1, where you move the groups and preserve the existing links during the move.
4. Clean up the user's Access Control Lists to resources on local computers and servers, since they will need to be modified after the change.

If you do it this way, you may have fewer problems with group memberships during the transition. As for moving users, while you can use the Active Directory Users and Computers MMC to move containers of objects from one domain to another, there are also two utilities, MOVETREE and SIDWALK, in the Resource Kit that can come in very handy. MOVETREE allows you to move containers from one part of a tree in one domain to a tree in a completely different domain. For example, suppose we wish to move the branch of the tree under an Organizational Unit called Managers from the sales.mycorp.com domain to the Organizational Unit called Sales-Managers on the mycorp.com domain.
The command we would use to start the move is something like the following, preceded by a full check:

MOVETREE /start /s sales.mycorp.com /d mycorp.com /sdn OU=Managers,DC=sales
    /ddn OU=Sales-Managers /u SALES\Administrator /p thepassword

The SIDWALK utility is designed to support a three-stage approach to modifying ACLs. Each stage of changing ACLs can take a while to complete and verify, sometimes a day or more. It thus requires some amount of system resources and administrator time. The stages are:

1. The administrator needs to determine which users have been granted access to resources (file shares, print shares, NTFS files, registry keys, and local group membership) on a particular computer.
2. Based on who has access to what resources on the system, the administrator can choose to delete old, unused security identities or replace them with corresponding new identities, such as new security groups.
3. Using the information from the planning and mapping phases, the third stage is the conversion of security identities found anywhere on a system to corresponding new identities.

After you've migrated, you may want to get rid of some old domains entirely, move member servers between domains, consolidate multiple servers together, or possibly even convert a member server to become a DC. Whatever you're considering, moving member servers and their data while maintaining group memberships and ACLs to resources can be done. Again, as with users and computers, taking the process in stages helps ensure that there is less chance of a problem. If you're considering moving member servers between domains or removing domains in general, these are the steps that you need to consider:

1. Make sure that the source domain and the destination domain are at the Windows 2000 or higher functional level.
2. Move all groups from the source domain to the target domain to preserve memberships.
3. Move the member servers to the destination domain.
4. Demote the DCs to member servers, removing the domain in the process.
5. Clean up the Access Control Lists to resources on local computers and servers, since they will need to be modified after the change.
http://etutorials.org/Server+Administration/Active+directory/Part+II+Designing+an+Active+Directory+Infrastructure/Chapter+15.+Migrating+from+Windows+NT/15.1+The+Principles+of+Upgrading+Windows+NT+Domains/
AWS Cloud Development Kit: Now I Get It

The AWS Cloud Development Kit (CDK) is an "open source software development framework to define your cloud application resources using familiar programming languages". When CDK launched in 2019, I remember reading the announcement and thinking, "Ok, AWS wants their own Terraform-esque tool. No surprise given how popular Terraform is." Months later, my friend and colleague Matt M. was telling me how he was using CDK in a project he was working on and how crazy cool it was. I finally decided to give CDK a go for one of my projects. Here is what I discovered.

Composing and sharing⌗

A key concept in CDK is that everything is a construct. A construct represents cloud components and can be as small as a single resource or much more complex, such as a multi-account distributed application. Constructs can be nested, allowing a construct to use other constructs. Constructs are composed into stacks that are deployed to AWS. This concept of constructs becomes really powerful when you think about reusable infrastructure-as-code artifacts. For example, consider a scenario where you have to deploy an AWS Virtual Private Cloud (VPC) multiple times (perhaps in different accounts, or into different dev/test/prod environments). And let's say that you always want a specific Security Group configured in that VPC which allows ingress traffic from a jump host. The VPC and Security Group can all be defined in a CDK construct; the construct is made of the CDK code that defines this infrastructure. The code below demonstrates such a construct:

from aws_cdk import core as cdk
from aws_cdk.aws_ec2 import (
    Peer,
    Port,
    Protocol,
    SecurityGroup,
    Vpc
)

class CdkNowIGetIt(cdk.Construct):
    def __init__(self, scope: cdk.Stack, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Create the VPC resource.
        self._vpc = Vpc(self, "MyVPC", cidr="10.10.0.0/16")

        # Create the Security Group and allow ingress from the jump host
        # 10.255.0.10 to TCP/22 and TCP/3389.
        self._sg = SecurityGroup(self, "MySG", vpc=self._vpc)
        self._sg.add_ingress_rule(peer=Peer.ipv4("10.255.0.10/32"),
                                  connection=Port(protocol=Protocol.TCP,
                                                  string_representation="host1",
                                                  from_port=22,
                                                  to_port=22))
        self._sg.add_ingress_rule(peer=Peer.ipv4("10.255.0.10/32"),
                                  connection=Port(protocol=Protocol.TCP,
                                                  string_representation="host1",
                                                  from_port=3389,
                                                  to_port=3389))

This construct can be consumed by a CDK app to instantiate copies of the infrastructure. Since each copy is being created from the same code blueprint, they all end up looking the same, just as desired. On the topic of composition, the code above shows an example of this. The Vpc and SecurityGroup classes are themselves CDK constructs. These constructs are authored by AWS as part of the aws-cdk.aws-ec2 Python module. These constructs are composed together to form a new construct called CdkNowIGetIt. This idea can be taken even further. CDK constructs can be packaged and shared. So the example of deploying multiple copies of the VPC can be expanded to actually sharing the construct with other builders or engineers to allow them to deploy their own copies of the infrastructure. And coming back to the composability of CDK constructs, those engineers could compose the VPC construct together with their own or other third-party constructs to build their entire infrastructure stack. Imagine having your own library of constructs that are vetted and approved for use in your environment that developers then consume in their code. This would really help ensure consistency, repeatability, and governance of the infrastructure. In the example below, I show one possible method for packaging the CDK construct from above, which has been written in Python:

~/git/cdk-now-i-get-it% python3 setup.py sdist
running sdist
[...]
Writing cdk_now_i_get_it-0.0.1/setup.cfg
Creating tar archive
~/git/cdk-now-i-get-it% ls -l dist
total 4
-rw-r--r-- 1 joel joel 2299 Apr 4 15:50 cdk_now_i_get_it-0.0.1.tar.gz

AWS CloudFormation and Terraform both have their own concept of modules, which have varying degrees of composability and reusability. In practice, I see composition happening much more with Terraform than CloudFormation, so I feel that Terraform and CDK are fairly evenly matched here. CDK excels on the next point, however.

Native language⌗

As the CDK website says, "AWS CDK uses the familiarity and expressive power of programming languages for modeling your applications". In other words, instead of using a language such as YAML, JSON, or something bespoke to model the infrastructure, CDK uses native TypeScript, Python, and other supported programming languages to define the infrastructure. This ability opens up an amazing amount of possibilities. Not only can the code describe cloud infrastructure, but it can do anything else the language is capable of as well.

- Conditionals (if some condition is true, build the infrastructure this way; else, build it that way)
- Loops (!) (for i in 1..10, build a cloud resource)
- Unit tests (mock the API calls; did my conditions, loops, and calls all execute as expected?)
- Integrations with other systems (look up parameters in a third-party data source)

Building on the example VPC construct from above, consider that instead of hardcoding the IP address of the VPC and jump host, and the ports the jump host is allowed to connect to, you want to dynamically acquire those values by looking them up in a database. For good measure, let's also throw in some conditions and loops.
Here is what the modified construct looks like:

from aws_cdk import core as cdk
from aws_cdk.aws_ec2 import (
    Peer,
    Port,
    Protocol,
    SecurityGroup,
    Vpc
)

class CdkNowIGetIt(cdk.Construct):
    def __init__(self, scope: cdk.Stack, construct_id: str,
                 vpc_cidr: str, jump_host: str, mgmt_ports: list,
                 **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # args:
        # - vpc_cidr (str): The CIDR range for the VPC.
        # - jump_host (str): An optional IP address for the jump host. If this
        #                    is not specified, the Security Group will not be
        #                    created.
        # - mgmt_ports (list): A list of TCP ports which the jump host is
        #                      allowed to connect to.

        # Create the VPC resource with the given CIDR range.
        self._vpc = Vpc(self, "MyVPC", cidr=vpc_cidr)

        # Security Group only created if the jump host parameter was
        # specified.
        if jump_host is not None and len(jump_host) > 0:
            self.create_sg(jump_host, mgmt_ports)

    def create_sg(self, jump_host, mgmt_ports):
        self._sg = SecurityGroup(self, "MySG", vpc=self._vpc)
        for port in mgmt_ports:
            self._sg.add_ingress_rule(peer=Peer.ipv4(jump_host),
                                      connection=Port(protocol=Protocol.TCP,
                                                      string_representation="jump",
                                                      from_port=int(port),
                                                      to_port=int(port)))

And here is what the CDK app code looks like which calls the above construct:

from aws_cdk import core as cdk

from cdk_now_i_get_it.cdk_now_i_get_it_v2 import (
    CdkNowIGetIt as CdkNowIGetIt_v2
)

class MyStackv2(cdk.Stack):
    def __init__(self, scope: cdk.App, id: str, vpc_cidr: str,
                 jump_host: str, ports: list, **kwargs):
        super().__init__(scope, id, **kwargs)

        self._network = CdkNowIGetIt_v2(self, "CdkNowIGetItv2", vpc_cidr,
                                        jump_host, ports)

def get_params_from_database():
    # Ok, this isn't exactly a database, but it makes the point. This could
    # also be a call to a relational DB, an API call to an IPAM system, or
    # anything else.
    import csv
    with open("network.csv", newline='') as csvfile:
        reader = csv.reader(csvfile, delimiter=",")
        for row in reader:
            if row[0] == "MyVPC":
                return {
                    "vpc_cidr": row[1],
                    "jump_host": row[2],
                    "ports": row[3].split(":")
                }

app = cdk.App()
params = get_params_from_database()
stack_v2 = MyStackv2(app, "MyStackv2",
                     params["vpc_cidr"],
                     params["jump_host"],
                     params["ports"])
app.synth()

The network.csv file represents the configuration "database" and holds the VPC CIDR address, the jump host IP address, and the TCP ports the jump host is allowed to connect to. (Note: the full code is available on Github)

Proven defaults⌗

The fact that CDK provides proven defaults in its constructs is vastly underrated. Consider again the creation of a VPC. Creating a VPC itself isn't much use; the VPC needs subnets, one or more gateways, one or more route tables, security groups, and at least one network access control list. There could be upwards of a dozen additional resources that all need to be defined in the code. By providing proven defaults, CDK creates these necessary resources as part of the Vpc construct. The way this works varies by construct, but the Vpc construct takes a CIDR range as a parameter which is assigned to the VPC. The construct then carves this CIDR up into 3 public and 3 private subnets, provides a NAT gateway per zone, and an internet gateway for the VPC. These defaults don't work in every case, so it is possible to customize this behavior. But the best part is how easy that customization is. Customization here does not mean building or defining additional constructs.
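To make that carving concrete before looking at customization, here is a back-of-the-envelope sketch using only Python's stdlib ipaddress module (plain Python, not CDK). The /20 subnet size and the zone labels are my own choices for illustration; CDK picks its own mask based on the configuration:

```python
import ipaddress

# Not CDK code: a rough stdlib illustration of splitting a VPC CIDR into
# 3 public + 3 private subnets, one pair per availability zone.
# The /20 prefix length is an arbitrary choice for the example.
vpc = ipaddress.ip_network("10.10.0.0/16")
subnets = list(vpc.subnets(new_prefix=20))[:6]

zones = ["a", "b", "c"]
for zone, public, private in zip(zones, subnets[:3], subnets[3:]):
    print(f"az-{zone}: public={public} private={private}")
```

Each pair of subnets is non-overlapping and sits inside the parent /16, which is exactly the invariant the Vpc construct maintains for you.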
Again, this will depend on the construct, but for the Vpc construct it's just a matter of passing some parameters to the construct to modify its default behavior:

from aws_cdk import core as cdk
from aws_cdk.aws_ec2 import (
    Peer,
    Port,
    Protocol,
    SecurityGroup,
    SubnetConfiguration,
    SubnetType,
    Vpc
)

class CdkNowIGetIt(cdk.Construct):
    def __init__(self, scope: cdk.Stack, construct_id: str,
                 vpc_cidr: str, jump_host: str, mgmt_ports: list,
                 subnet_len: int, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # args:
        # - vpc_cidr (str): The CIDR range for the VPC.
        # - jump_host (str): An optional IP address for the jump host. If this
        #                    is not specified, the Security Group will not be
        #                    created.
        # - mgmt_ports (list): A list of TCP ports which the jump host is
        #                      allowed to connect to.
        # - subnet_len (int): The prefix length for subnet CIDR addresses.

        # Create the VPC resource. The VPC does not have an internet gateway
        # or NAT gateway. Subnets are created in 2 zones.
        subnets = [SubnetConfiguration(name="MyVPC-Private",
                                       subnet_type=SubnetType.ISOLATED,
                                       cidr_mask=subnet_len)]
        self._vpc = Vpc(self, "MyVPC",
                        cidr=vpc_cidr,
                        max_azs=2,
                        nat_gateways=None,
                        subnet_configuration=subnets)
...

Summary⌗

CDK rocks. Its use of native programming languages makes it incredibly powerful. The ability to use the full power of the native language you're writing your CDK code in makes CDK unique among infrastructure-as-code tools. And the proven defaults reduce the time it takes to get your code into usable shape. Need a VPC? It could be as little as one line of code. Creating a Lambda function? CDK will create the IAM role for you.
Reference⌗ All of the code samples in this post are available on Github at github.com/knightjoel/cdk-now-i-get-it To learn more about AWS CDK, visit these links: - Infrastructure is Code with the AWS CDK - AWS Online Tech Talks - AWS CDK Frequently Asked Questions - AWS CDK Python Reference - Amazon EC2 Construct Library Disclaimer: The opinions and information expressed in this blog article are my own and not necessarily those of Amazon Web Services or Amazon, Inc.
https://packetmischief.ca/2021/04/11/aws-cdk-now-i-get-it/
4.1.1. LED Control¶

Your OpenMV Cam has an RGB LED and two IR LEDs on board. You can control the red, green, and blue segments of the RGB LED individually, and the two IR LEDs as one unit. To control the LEDs, first import the pyb module. Then create an LED class object for the particular LED you want to control:

import pyb

red_led = pyb.LED(1)
green_led = pyb.LED(2)
blue_led = pyb.LED(3)
ir_leds = pyb.LED(4)

The pyb.LED(number) call creates an LED object which you can use to control a particular LED. Pass pyb.LED "1" to control the red RGB LED segment, "2" to control the green RGB LED segment, "3" to control the blue RGB LED segment, and "4" to control the two IR LEDs. After creating the LED control objects like above, I heavily recommend that you call the off() method for a new LED to put it into a known state. Anyway, there are three methods you can call for each LED, off(), on(), and toggle(), which do exactly that. Unlike other MicroPython boards, the OpenMV Cam doesn't support the intensity() method to allow for PWM dimming of the LEDs. We re-purposed the timer that was used for LED dimming for generating a clock source to power the camera chip. Finally, use the RGB LED for indicator purposes in your script. As for the IR LEDs, those are for night vision. When you switch out your OpenMV Cam's regular lens for our IR lens (which is a lens without an IR filter) you can then turn on the IR LEDs so that your OpenMV Cam can see in the dark. The IR LEDs are strong enough to illuminate about 3 meters in front of your OpenMV Cam in pitch black.
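Since the pyb module only exists on the camera itself, here is a plain-Python stand-in (my own toy class, not part of MicroPython) that mimics the three-method interface described above. Something like this can be handy for exercising your script's LED logic on a desktop before copying it to the board:

```python
# A desktop stand-in for pyb.LED: same off()/on()/toggle() interface,
# tracking state in a boolean. Illustrative only -- the real pyb.LED
# drives hardware and does not expose a queryable state like this.
class FakeLED:
    def __init__(self, led_id):
        self.led_id = led_id
        self.lit = False

    def on(self):
        self.lit = True

    def off(self):
        self.lit = False

    def toggle(self):
        self.lit = not self.lit

red_led = FakeLED(1)
red_led.off()       # start from a known state, as recommended above
red_led.toggle()    # now on
print(red_led.lit)  # True
```

Swapping FakeLED back to pyb.LED is then a one-line change when the script runs on the camera.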
http://docs.openmv.io/openmvcam/tutorial/led_control.html
Lingua Programmatica

Typical non-programmer question: Why are there so many programming languages? Why doesn't everyone just pick the best one and use that? Fair enough. The definition of the term "computer language" can be really nebulous if you encounter someone who is in the mood to engage in social griefing through pedantry. Instructions for a machine to follow? Does that include 19th century player pianos? Machine code? Flipping switches and wiring on those first-gen electric computer-type contraptions? Considering over half the "language" consists of dragging and dropping icons, does Scratch count? Let's just sweep that all aside and assume languages began with the idea of stuff like COBOL and FORTRAN and we don't care to refine the definition further. Humor me. It's pretty much standard practice to do the "Hello World" program when you're learning a new language. The goal is to simply print the words "Hello World!", and that's it. It's basically the simplest possible program that can still do something observable and meaningful for someone new to the language. Here is the program in assembly language:

_start:
    mov edx,len
    mov ecx,msg
    mov ebx,1
    mov eax,4
    int 0x80
    mov eax,1
    int 0x80

section .data
msg db 'Hello, world!',0xa
len equ $ - msg

Here is a functionally identical program, written in standard C:

#include <stdio.h>

int main(void)
{
    printf("hello, world\n");
    return 0;
}

And in BASIC:

10 PRINT "Hello, world!"

The first is incomprehensible to anyone who doesn't understand assembler. The second is tricky, but you might be able to intuit what it does. The last one is simple and obvious. So why would you ever use anything other than BASIC? Here is how the trade-off works: On one end you have a powerful, flexible language that makes highly efficient code. It can be used to make anything and the code will always be extremely speedy and have the lowest possible memory overhead. (Setting aside the issue of individual programmer skill.)
On the other end you have a language that's easy to use and understand. Some languages are optimized for specific tasks. If you happen to be doing one of those tasks, then your work as a coder will be easier. Let's say you want to write a program to take a given number and perform two tasks:

1) Print the number normally in base ten. So, ten and a half would look like: 10.5
2) Print the number in a base-6 number system and use an @ symbol instead of a decimal point. So, ten and a half would look like: 14@3.

I don't know why you would want a program that does this, but I promise this isn't really any more arbitrary or goofy than a lot of crazy stuff a boss might assign the hapless coder. The first task is conventional and almost all languages will have a shortcut for making that happen. The second task is unconventional, and thus we're not likely to have a lot of built-in language tools for doing it. In assembler, these two tasks will be of a similar level of difficulty. You'll have to write your own number-printing code from scratch, but when you're done the two bits of code will be about the same level of complexity (very complex) and the same level of efficiency. (Highly optimized. (Again, this is assuming you know what you're doing.)) In C, the first task will be trivial, and the second will take some extra effort. Printing the base ten number will be much, much faster than printing in base 6 with @ symbols. (Although both will be so fast on modern computers you'd have trouble measuring them. Still, if you had to print out a LOT of numbers, the differences between base 10 and base 6 would become apparent.) In BASIC, the first task would be super-trivial. One line of code. The second task would require pages of code. 99% of your programming time would be spent on the second task, and it would be much, much slower than the first task. Assembly is referred to as a "low level" language. You're down there interfacing with the machine on a pure, fundamental level.
Every line of code is basically an instruction carried out by the processor. BASIC is a very high level language. You're writing things in abstracted, human-friendly terms, and a single line of code might represent hundreds or even thousands of processor instructions. Generally, the higher the level of the language, the more tasks become either trivially easy, or impossible. The C / C++ language seems to be the "sweet spot" in this particular tradeoff. Most software on your computer was written in that language. But despite its dominance, there are still a tremendous number of situations where other languages are better for specific tasks. Some examples:

- Java is totally cross platform. Write one bit of code and it'll run the same everywhere. Downside: It's slower. Really slow.*
- Visual Basic is dynamite if you need lots of small programs with simple functionality but complicated interfaces. If you need a lot of complex dialog boxes with sliders and buttons and drop downs, it will be easier to set them up in Visual Basic than C++.

10 PRINT "My chosen computer language is better than yours!"
20 GOTO 10

* There. Now we can be friends again. I bet you won't even read all 186 comments before leaving your own.

In my country, we say that programming languages are like soccer teams. Everyone has their favourite, and not necessarily for any identifiable reason. Still, the choice of language (at least for general-purpose languages like C or Basic, and not specific ones like SQL) is a very personal choice for a coder. Sometimes it's as simple as "what I'm used to", sometimes they can spout a list of twenty items mentioning such exotics as Hindley-Milner type inference. For the record, my favourite languages are, in no particular order, C, Scala and Lua.
sounds like a reasonable metaphor to me, what country though? interested.. My favorites are very close to your actually, Mine(also no particular order as well) are C(++), Lua, and Python. Python is really easy, and i usually think about it more than others. Great post…haven’t seen assembly like that in at least five years. Different tools for different tasks for people with different skill levels…that’s what it basically boils down to. If you don’t mind me asking, what are the differences between PERL and Python? What are they used for? PERL’s primary domain is in performing common tasks very well. Use PERL if you want to scan file directories, use regular expressions, etc… Application and human interface type stuff. Python has a lot of overlap with PERL, but what it’s best at is stuff that goes beyond PERL’s intended purpose…something like a complex data structures, OOD, etc… They’re sort of like first cousins as far as languages go. THere are a million programs that you could do just as well in either language, but some that will be much much easier in one or the other. Personally I prefer PERL, because I use those kinds of features a lot more. I will often write a program that scans a data set and performs regular expressions on it, but I rarely need to do anything with a complex data structure. A lot of languages are like this. They’re about 90% similar, and the other 10% is the stuff that the original designer felt was lacking in other languages. Python tends to be more readable; Perl can certainly be written readably, but it’s also much easier to end up with a visual disaster if you’re not careful. The tradeoff is that Python doesn’t give you as many ways to do the same thing; in Perl you have many options with their own upsides and downsides, so you can do either much better or much worse than Python, depending on your experience, attention levels etc. 
Perl is also very nice to use for people with an old-school UNIX background, as it uses many conventions from that environment. They say that good Perl code is indistinguishable (?) from line noise :)

My favorites would probably be Python, bash (er.) and awk (um.); and Delphi for windows platforms – precisely for the reason Shamus mentioned VB. Programming basic applications in Delphi is child's play, really. – include standard disclaimer about english not being my first language

Aw, man. Don't write Perl as "PERL". It was never an acronym and all the definitive texts write it 'Perl' for the language and 'perl' for the interpreter. 'PERL' makes it seem like FORTRAN or COBOL, not the lovely, thriving language that it is.

Perl is the Practical Extraction and Report Language. I'm fairly sure the full text was devised mostly after Wall had a name for it. There are many backronyms for Perl, but they're just that: backronyms. I'll point to the Perl 1.0 man page (side note: I never used Perl 1.0, but I have a vague recollection of seeing Perl 2.0 come across Usenet). Though there's also this note, at the end of the same man page:

Are you saying FORTRAN isn't a lovely, thriving language? Cuz I'd probably have to agree.

PERL is a comparatively older language, evolved (I think) from a set of macros to manipulate text, so it is very good at that. Its development has always been very haphazard, with features added as time went. As a result, it's a very heterogeneous language, and a bit tough to learn. It is, however, a very good language to write things like batch files and initialisation scripts. Python is comparatively newer, and tries to make its syntax be as clear as possible. It's a language that can be used for embedded scripting (for example, in games), but it's rather large and generally is used on its own for all sorts of small to medium-size projects.
I unfortunately don’t have snippets of code to show you the difference, but suffice to say they have very different feels. More specifically, Perl was devised as a way to pull together the functionality of a bunch of Unix text tools that traditionally were pieced together using shell scripts: awk, grep, sed, tr (and others, but these are pretty clearly the most important and most influential). Perl was intended to help automate Unix system adminstration tasks. Aha! here is the original public release (on the Usenet group, comp.sources.unix). Python is a much more elegant and modern language with a robust library that supplies the functionality that’s embedded into Perl’s language core. As such, Python tends to be rather more verbose but more understandable than Perl code often tends to be. From a “what are they good at” point of view, they’re close enough that they’re basically interchangeable. Both are quite good at text processing, acting as glue between other programs, and are powerful enough to do Real Work, if you’re willing to accept that they’re slower than, say, C. Both provide a rich set of primitives and vast libraries of supporting tools allowing you to get on with your actual task instead of spending time building infrastructure. Both are great for the sort of small one-off programs that programmers frequently find themselves needing; tasks like, “We need a program to covert all of our data from the old database system to the new one.” The difference is primarily mindset. “The Zen of Python” include this key element: “There should be one– and preferably only one — obvious way to do it.” (That’s from the Z Python says that uniformity is best; part of the payoff is that if I need to work on another person’s Python code, it likely look very similar to how I would have written it. I shouldn’t have to learn the unique idioms of a particular project or developer when a common set of idioms could have solved the problem. 
The Perl motto is: "There's more than one way to do it." Perl says that a programming language shouldn't tell you how to do your work, that you're a smart human being and know the best way to approach your problem. Sure, you could solve the problem given more limited tools, but the resulting solution won't be quite as succinct, quite as clear, and quite as idiomatic to the problem. Larry Wall, the creator of Perl, once pointed out that the real world is a messy place with all sorts of unusual problems, and that Perl provides a messy language that frequently lines up well with the unusual problems programmers have to solve. For most programmers, one of those two mindsets will better fit how you approach problems. The other language will look stupid. Interestingly, this argument plays out in other sets of languages as well. Scheme versus Lisp, Java versus C++.

Python is also really, really good at graphical processing. Perl, not so much.

I don't see why you say that, perl has SDL and openGL just like python does, both have access to a huge range of GUI interface toolsets like GTK or QT. I'll admit that perl wasn't written that way, it's got some quirks as a result, but the functionality is there, and it works perfectly.

The downside of "There's more than one way to do it." is that reading other people's code can be a real challenge. Hence this remark lifted from Wikipedia: Some languages may be more prone to obfuscation than others. C, C++, and Perl are most often cited as easy to obfuscate.

Wow. I wasn't expecting that many replies, but thanks all: that helped a lot.
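The one-obvious-way versus more-than-one-way split can be shown with a toy example (mine, not from either language's documentation): squaring a list of numbers. Python technically offers several mechanisms, but idiomatic style converges on just one of them; in Perl, any of the equivalents would be considered fine style:

```python
nums = [1, 2, 3, 4]

# The one "obvious" Python way: a list comprehension.
squares = [n * n for n in nums]

# Other mechanisms exist -- an explicit loop, or map() --
# but Pythonic style converges on the comprehension above.
squares_loop = []
for n in nums:
    squares_loop.append(n * n)

squares_map = list(map(lambda n: n * n, nums))

print(squares == squares_loop == squares_map)  # True
```

All three produce the same result; the mindset difference is only about which one a reviewer expects to see.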
Then there are also languages like LISP and Prolog, which are very good at doing things quite unlike the things BASIC and C are good at, and really terrible at doing other things (including, sadly, a lot of practical tasks); and then there’s the really fun stuff like Scheme and the simply-typed lambda calculus, which are terrible at pretty much everything but are much beloved by computer scientists because, while doing things with them is pretty close to impossible (or at least, pretty mind-numbing), proving things about them is amazingly easy. God help you if you want to prove something about C++. I never felt like Scheme was difficult to use…but the only thing I’d ever want to use it for would be AI programming. Maybe that’s because that’s what I was taught to use it for. I can’t even wrap my head around writing a genetic algorithm in C++, but in Scheme it’s no problem. It’s a niche language, for sure, though. Having done genetic algs in C++, let me tell you: they’re really not that bad at all. I second this. I took a genetic algorithms course with a bunch of mechanical engineers, where I was the only one who knew C/C++. While they wrote stuff in a higher level language that took forever in their various complex solution-spaces, I was able to tailor my code to make it efficient, without much coding time added. All it takes is some carefully planned object manipulations. The GAs I used for my Master’s Thesis were originally encoded in C (written as a single-semester project), ported to C# when I went independent study (I was learning C#, and coding new stuff for it in C# was a good way of doing it), and continued in C# when I ripped it apart and rewrote it. The final project involved complex data processing, database creation and storage, a really complex GA, automated run/load/train/test/re-start processes, and worked quite well. Sadly, two of my three “Big ideas” utterly failed to pan out.
Still, got my degree, learned C#, and patted myself on the back for being rigorous with my OO-design when I found myself rewriting the low-level stuff for the third time and realized how much time I’d saved by properly black-boxing things. In college I got to learn ML (which is a LISP-type language) and Prolog. Prolog is a ridiculous language. It’s sort of fun from an AI perspective, but it seems kind of useless if you actually want to make it do something. ML, on the other hand, was a lot of fun. It’s a hard language to learn, but once you figure it out, it’s a pretty solid language. The trick with LISP-type languages is learning how to do everything recursively. It’s counter-intuitive at first, but once you get the hang of it, it’s pretty useful. I think it’s worth learning a LISP-type language, if only so you can better understand how recursion works. Of course, given that recursion tends to take up a lot of memory, LISP-type languages probably aren’t the most efficient for everyday tasks. I don’t think that’s true in most cases. Tail Call Optimization is extremely common in functional languages (though perhaps not in every implementation). I believe it’s required by the spec of Common Lisp. Essentially, if the recursive call is the last thing in the function (i.e. a Tail Call), then it can be converted into a loop by the compiler. You certainly can write recursive programs that don’t take advantage of this, and therefore have memory issues, but it isn’t as much of an issue as you might think. Required by the Scheme spec(s), not by the Common Lisp standard, but frequently available at the right optimization settings (though, surprisingly, not in CLISP’s defaults). I’m assuming when you say “LISP-type” you mean “functional”, since that is the only way this post really makes sense. In that case, it’s full of wrongness, as Haskell is actually one of the more efficient languages out there right now. Scarily efficient, actually.
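To make the tail-call point above concrete, here is a small sketch in Python (chosen because it appears later in the thread). Note that CPython deliberately does not perform tail call optimization, which is exactly what makes the contrast visible: the loop version is what a TCO-capable compiler, like a conforming Scheme implementation, would effectively produce from the recursive one.

```python
# Tail-recursive sum: the recursive call is the very last thing the
# function does, so a TCO-capable compiler can reuse the stack frame.
def rsum(n, acc=0):
    if n == 0:
        return acc
    return rsum(n - 1, acc + n)   # tail call

# The loop that tail call optimization effectively produces.
def isum(n, acc=0):
    while n != 0:
        n, acc = n - 1, acc + n
    return acc

print(rsum(100))     # 5050
print(isum(100000))  # 5000050000 -- fine as a loop
# rsum(100000) would raise RecursionError in CPython, because no TCO happens.
```

In Scheme the two versions would behave identically; that guarantee of “proper tail calls” is written into the Scheme standard, as the comment above notes.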
I was wondering how long it was going to take to get Haskell mentioned! “These are your father’s parentheses. Elegant weapons for a more… civilized age.” Some languages even exist in a weird limbo zone whose tasks are so highly specialised they’re practically useless outside that task – RPG, for instance, creates reports. It will never do anything else. But fifty years ago it was a tremendous boon to the industry because it was much simpler to make a nicely formatted report in RPG than in COBOL. And that’s another issue – programming has been around for seventy years. We have seventy years’ worth of programming tools lying around, and like most tools, programming languages don’t just go away because a better tool comes along. Those old tools may be better for working on older programs, and the newer tools might be lacking a key, but obscure, feature an older one had. So all these programming languages just linger around, still being somewhat useful and not quite obsolete. One thing to keep in mind is this: BASIC got us to the moon. Even a high-level, low-powered language can be highly useful. Beg pardon? Are you saying the Apollo Guidance Computer was programmed in BASIC? Systems inside the lander itself, if I remember correctly. I doubt the BASIC we use now would be recognisable as the version used by NASA, however. Taken from – “AGC software was written in AGC assembly language…” and “The AGC also had a sophisticated software interpreter, developed by MIT, that implemented a virtual machine with more complex and capable pseudo-instructions”. However, no mention of BASIC. RPG has changed radically in the past 10 years or so, and is a much more general-purpose language these days.
It does have a whole boatload of things that make handling formatted files really easy, including the ability to embed SQL into the program and run that on files… Whilst I’ve been having to suffer Java recently, I do feel the need to point out that these days most operations have no noticeable speed difference between native code and managed code. The real speed hit with Java is on startup, but once it’s running most differences are negligible. There’s a fuller discussion to be had, but it basically boils down to ‘It used to be RE-HE-HE-HEALLY slow but each new version has brought optimizations such that it’s to the point where the winner will depend on context’. Yeah, nowadays Java is more of a server-side language in my experience (my experience being a server-side programmer, so…) Ironically this utterly ignores Java’s so-called portability (which is much less functional in reality once you start writing complicated programs). What it gets you is fast-enough performance, ease of coding (nice libraries, especially with network code), and free memory management. Even that is possible to defeat (believe me, there are still plenty of out-of-memory issues) but it’s much less likely you’ll write a program that will choke and die after a few hours of non-stop use. Which, you know, is good for servers. The motto of Java is supposed to be “write once, run anywhere.” But, really, it’s “write once, test everywhere.” Each Java VM is slightly different in annoying ways. Oh, how I do not miss my 5 years as a Java programmer. I learned to code with Pascal back in high school; it might just be nostalgia, but it’s still my favorite language. I really like Java, too. Unsurprising. The whole reason Pascal exists is to create a good language for teaching general programming structure, while being robust enough to create complex-enough programs to show students why they would want to learn programming in the first place.
I, too, learned Pascal in high school and I, too, hold a special place in my heart for it. But I also understand why it doesn’t have the hold that C does. Ah, the joy of being so distant from a problem that it becomes difficult to see. You didn’t even have to get into libraries, the nature of variables, definitions, and the thousand other things that usually mean you consider whatever you got trained in first / most thoroughly to be “superior”: the way the language behaves is the same as the way you “think” a solution to a coding problem, so it becomes natural to express your solution in that language, while trying to express it in another language can be very difficult. For these reasons I still have a soft spot for PASCAL. Despite not actually using it on a computer in over a decade, I still write my to-do/task lists in a shorthand version of Ruscal. (dating myself there) Heh, you reminded me of a story on The Daily WTF: there was a bit of C code that opened with

#define BEGIN {
#define END }

and went on from there. “A true programmer can write FORTRAN code in any language.” Full text in RSS now? When did that start happening? Yeah! We want just a teaser; otherwise we feel like it’s a waste to click into the actual site! Wait, so I didn’t actually have to come here? I clicked the link before even seeing the RSS contained the whole text. Regardless, this is one of the few sites I like to visit rather than read the posts in RSS. I noticed that too. I think the thing people need to understand is that there isn’t really any such thing as a “best” programming language. In principle, all programming languages are equally powerful. If you can write a program in Assembly, you can write it in C or in BASIC or in Python or in LISP. The difference between languages is usually a matter of tradeoffs. Lower level languages are more efficient, but harder to use and much more platform dependent.
Higher level languages tend to be less efficient, but they’re much easier to use and are less platform dependent. Then there’s the fact that learning a new programming language isn’t an easy task and most people tend to stick with what they know. As a result most programmers favor C or C-like languages (C++, C#, Java). C was the first good, high-level programming language that worked on most platforms, so everybody learned it. When I was in college, my professors all hated C++, even though out in the working world that was the language everybody used. Even if other languages are better, when you get a job, you’re probably going to work with C++ (or these days, C#). Knowing a better programming language is worthless if no one else is using it (unless you’re a one-man development team). No point in learning the awesome powers of LISP if you’re the only one who knows it. Whoa, whoa, whoa. Hold it right there! We learn other languages for a variety of reasons, and actually using said language in practice is one of the small reasons. In my mind, the two major reasons to learn other languages are to get used to thinking about programming in a different light (for example, LISP is a great way to get comfortable with recursion!) and for helping us better understand the strengths of the language we do use. Definitely agreed with this. In my opinion, every serious programmer needs to learn to program in at least one assembly language and at least one functional language, even if they never use them. Plus, learning Lisp makes you really comfortable with operating on lists, which is a great boon in learning how to do nifty magical things in languages with native list types (Perl and PHP, for example). What about Python? I heard a lot of things about that language. The nice thing about Python is that it’s easy to learn and easy to use. It has the gentlest learning curve of any programming language I know.
The result is you spend less time struggling to learn how to accomplish even the most basic tasks and more time writing programs that do cool things. Like flying: Python is “the best” to me – easy to learn & write, vast built-in library, cross-platform, and if you need speed it can be extended with C. I’ve been tinkering with Python on and off over the last two years, and I’ve come to one inescapable conclusion: it’s great if you’re writing a processing script for one specific task, but I wouldn’t ever try to use Python to write anything with a GUI. That’s what C# and Java are for. I might consider using Python to do a quick functional demo of a concept (even a GUI), but I don’t think I’d go any further than that. The reason is this: Python tries to pretend it’s dynamically typed, but it’s actually strongly typed — but even worse, it sometimes swallows type errors silently, producing invalid behavior. I once spent three hours debugging something that turned out to be Python choking on an int when it was expecting a string, only it never actually gave me an error message about it. As a result I have become extremely wary of anything more complicated than Project Euler when it comes to Python. As for speed, well, depending on what you’re doing it’s not really that slow… but if you really do need the speed difference, and you’re going to write Python extensions in C, why not do the whole program in C/C++ in the first place? I’m not sure what you mean here. Those are not incompatible concepts. (Edit: May be a terminology problem. I found this article very useful: What To Know Before Debating Type Systems.) Static vs. Dynamic typing has to do with when names are bound. Static languages require you to specify, ahead of time, the types used in your program. If you call an invalid function for a type, the compiler will yell at you. In a dynamic language, the error doesn’t show up until runtime. Weak vs. Strong typing has to do with whether types are coerced by the runtime.
With weak typing, if you pass an int to a function that expects a double, the language will convert the int to a double for you. With strong typing, you get a type error, and have to cast it yourself. I don’t use Python much, so I can’t comment on the other issues you brought up. Sorry, I misspoke. Python (at least when you learn it) appears to be weakly typed, but it’s actually strongly typed. By “pretends to be” I don’t necessarily mean from the language’s point of view, I really just meant from the programmer’s point of view (at least if you’re not already familiar with how it works). But maybe the real lesson to take from this is that I suck at Python. I read that article you linked. It’s pretty interesting. Towards the end the author talked about using static typing to prove program correctness. I’d never thought about typing that way before. Also, I was always told that proving program correctness was something that only mathematicians and academics cared about. It seems like this guy’s making the case that we should use statically typed languages to prove program correctness, rather than testing our programs until we’re pretty sure that we’ve removed most of the bugs, which is an interesting thought. The reason why we don’t program in C or (shudder) C++ instead of Python is that Python has automatic garbage collection and high-level abstractions that lower-level languages lack. For example, try writing the following Python code in C:

from random import randint
foo = [randint(1, 100000) for i in xrange(10000)]
foo.sort()
# code tags don't seem to like indentation...
for item in foo:
    print item
del foo

It would be huge. First you’d have to write your own linked list implementation, then your own sorting algorithm for said linked list implementation. You’d have to allocate and deallocate memory, probably including some temporary storage for your sort.
If I had to write that program in C, I would bet that it would take longer to run (albeit with less memory) than the Python snippet above the first time it worked. And it would definitely not work the first time. It would take hours to write. I didn’t run that example code in an interpreter, and I’m fairly confident it will print out a sorted list of 10,000 pseudorandom numbers. I’m not saying C doesn’t have its place; I write C code for a living. I’m just saying that if you have the option and the compute time and resources, there’s no reason whatsoever to write in C these days. By the way, PyLint will help a lot with finding those strong-typing errors that only show up at runtime. I use it heavily on bigger Python programs and it’s found many bugs. Now this is getting long, but another benefit is that while the same C code will run on many architectures, that Python code will run on any major and many minor operating systems. I’m fairly sure porting the C stuff would be a nontrivial amount of work. I used to program BASIC games from books as a kid. In high school we used BASIC on the Apple IIs. My favorite, though, was AMOS, a graphical programming tool for the Amiga. I used to make games like Asteroids and Pacman on it. Was lots of fun! I haven’t programmed in years now; I really want to get back to it and learn some more C, but I don’t really have the time (I PLAY too many video games to be making them). I’m in the same boat, with a couple of additional caveats. My job is developing web applications, and often, when I get home, I don’t have the interest to work on one of my personal projects. Plus, I feel like if a game is going to go anywhere, I would need to write it in C++, which at this point I would have to relearn, especially the memory management stuff. So when it comes down to either learning C++, so that several months down the line I can write some interesting code that actually does something, or playing video games, video games always win.
I do work on other personal projects sometimes, but they are all things that are fun/interesting on their own, and that let me see interesting results right away. If you don’t want to deal with memory management and C++, check out the XNA Framework, which will let you write games using C#. The creator’s site also has a ton of useful samples to help you get something interesting going quickly. Which brings up the other issue. I run Linux (Ubuntu, specifically) on my primary laptop. I was booting to Windows for a while, and working through a C# DirectX book (not XNA specific), until I found out that my video card is so out of date that some of the examples just didn’t work. The one I remember had to do with putting a three-color gradient on a triangle. It worked like it was supposed to on my work machine, but on my laptop, the triangle was a single color. I have another laptop with Vista on it, and a slightly newer video card, but I just haven’t gotten around to trying it there. Obligatory SDL bump. Don’t tie yourself to Windows, says the fanatic. XNA is a good tool to learn about development. You can master concepts like drawing graphics and the update loop quickly. Its memory management isn’t foolproof. Once, I needed an array of about 500 sprites. For some reason, XNA wouldn’t initialize one that large, even though I remade my project. It didn’t work until I rebooted my system. I write all my code in Superbase Data Management Language, a 16-bit Basic-like with an integrated non-SQL database engine. Because I like it, that’s why. Yeah… Java is comparable in speed to C++ in most situations these days, after the JVM has fully started up. It’s not interpreted any more, it’s just-in-time compiled. That was actually true five years ago. Oftentimes a Java program winds up being faster and more efficient than something comparable in C++, as well, plus faster to code. Depending on the ability of the programmer, of course.
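Stepping back to the static/dynamic/strong/weak typing subthread a few comments up, here is a minimal sketch of what “dynamically but strongly typed” means in Python (the function and values are made up for illustration):

```python
# No declared types anywhere: binding happens at runtime, which is the
# "dynamic" half of the description.
def double(x):
    return x + x

print(double(21))    # 42       -- works on ints
print(double("ab"))  # 'abab'   -- and on strings, no changes needed

# But Python will not silently coerce between unrelated types; mixing
# them is a runtime TypeError. That refusal is the "strong" half.
try:
    "1" + 1
except TypeError as err:
    print("TypeError:", err)
```

A weakly typed language would have quietly coerced `"1" + 1` into something; Python raises instead, though only when the offending line actually runs, which is why such bugs can hide until late, as described above.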
I think the example you wanted for a very high level, but slow language was probably Python or Ruby. Not sure I’d ever use the word “dynamite” to describe VB, unless the connotation you were looking for was “will leave your computer a pile of smoking rubble”. Though the thing to remember is that the biggest JVM around today doesn’t JIT code until it needs to, to cut down on startup time. Java starts out running pretty slowly, then once it gets a feel for what code paths are often-utilized it gets much faster. Huh. I always think of a different trade-off – time to write vs time to run. Some languages are easy to write (Python), some are hard (assembler). Easy to write generally means slow to run. Great if you’re writing something that will only ever be run once (e.g. research), terrible if it will be run many times (e.g. web page generator). Factored alongside this is “time to learn” – your first program in any language will take a while (though this overhead decreases as you know more languages) but as you do more, you get quicker. Hence why some use FORTRAN; they don’t want the overhead of learning another one. And then there is portability – if you only need it to run on one machine vs customers need to install it on Windows / Mac / Linux / mobile / etc. Why are there so many types of vehicle on the road? Subcompacts, compacts, coupes, full size, minivans, vans, pickups, trucks, semis, motorcycles, scooters. Why doesn’t everyone just pick the best one and use that? Why are there so many types of aircraft? Dedicated cargo haulers, dedicated fighters, fighter-interceptors, fighter bombers, dedicated bombers, jetliners, electronics platforms, single engine personals? Why are there so many types of firearms? *Channeling my inner McNamara–the **** Paul Graham has a detailed essay on this topic: Beating the Averages. He says that, in general, you should be using the most powerful programming language available. And he’s right.
I think deciding which language is the most powerful is a complicated task. The power of a language depends on the task at hand, where a domain-specific language can be more powerful for its particular task than a language that’s far more powerful for general tasks. There’s also the matter of support. Some languages in wide use are crappier than other languages no longer in use, but it’s better to go with the lesser language, as it has much better support and a larger community. As for the definition of programming language, Turing-completeness is a good rule of thumb, but is by no means a requirement. All Turing-complete languages are equal in computation ability. Since my brain refuses to memorize anything as arbitrary as programming/script-languages’ syntax, my ..uh.. language of choice goes like this: Hello World It’s a treat sometimes to see you explain these programming concepts. When I was finishing high school, I took an advanced placement computer science course that was equivalent at the time (over ten years ago) to a first-year university course on computer programming. The year I took this course was the last year that Turbo Pascal was being taught, and the curriculum was moving over to C++. It was frustrating for me at the time because it was evident that the languages used were going to change often. I did not continue due to this, as I wasn’t prepared to re-learn a new language each time. I guess C++ held on, but the languages still seem to carry obvious similarities to what I had learned. It’s fun to see what things still remain the same in the languages. And here’s to those who ‘sucked it up’ and continued the learning process in this field of study. I admire it and enjoy reading about it. Thanks again, Shamus, and to your pals who add their own knowledge on this subject. It’s fun reading for me :) Time for the Stupidest Comment of the Day: “How does the computer know what the code means?” Surely the microchip doesn’t understand the word “print”. Sorry.
Assuming you’re serious, and not just making a funny, see that bit of Assembly code up top? There’s a very complicated program called a compiler that translates the BASIC code (or the C++ code, or the Python code, etc.) into that code. It does the heavy lifting so you don’t have to. As Shamus mentioned, the Assembly code is the language the processor speaks. Further pedantry: Assembly still has to be assembled, by an assembler, before an actual binary executable is produced. Well, same question then. Before there was assembly code, how did they program the computer such that it understood assembly code? The assembler is the compiler for Assembly. Someone wrote in direct machine code a program that took Assembly commands and translated them back to machine code. Early programmers, the very first ones, wrote directly in machine code – binary codes, simple commands that manipulated bits in memory and produced results. To understand why assembly “works” (or, rather, why the 1s and 0s (the machine language) it so closely corresponds to work), you have to understand how the processor on the machine works. You don’t program a computer to know assembly (or, rather, the machine language); the computer understands it by construction. Like the line “mov edx,len” might really be (by the way, this binary is made up, not even close to real – for example, in reality, these would be at least whole bytes, not nibbles as I’ve shown. And, really, the first two codes might be combined in some clever way):

0001 0001 1010

The chip on the machine gets fed “0001”, which puts it in a “state” where it next expects to see a reference to one of the tiny memory buffers on the chip. Then it sees “0001”, which (in my silly example machine language) corresponds to the “edx” register. Now it’s in a state where it next expects to find a memory address. “1010” is fed in, so it looks in the 10th byte of memory (this computer only has 16 bytes of memory. Yipes!)
and copies that memory into the edx register. All of this because the chip itself changes state with each input of bits. It isn’t programmed to have these states, it’s built to have them – you could, given a diagram of the chip, follow how the “0001” toggles a couple of flip-flops as the voltage goes down the “wires” (traces in silicon, or something more advanced than silicon these days). Essentially, assembly instructions correspond to bit strings (for instance, on a 32-bit processor, a series of 32 ones and zeroes in a row). This is what the CPU actually acts on. As a gross simplification, consider that the CPU understands that a string of bits is an Add command because the first 4 bits are all ones, whereas a Subtract command would be 4 zeroes, then the remaining 28 bits contain data like the two numbers to be added and the location in memory where the result should be stored. Now, before assembly languages existed, a programmer would have to somehow manually create these strings of bits, and have the CPU execute them (I have no idea how they did this back then ;P). I assume that this is how an assembler was written. They literally toggled switches for the next byte, and pushed a button to process the “next clock tick”. Then someone made punch card readers to automate that… Yah. These days we measure millions or billions of cycles per second; back then it was seconds per cycle. Fun stuff. Glad I started after there were keyboards and disk files to store the code in :) I was going to guess punch cards. ;) “Real programmers use bare copper wires connected to a serial port” :P Oh yes, the days of the Altair 8800 with its switch interface and 7-segment LED display come back to mind. Yes, in the old days people actually worked like that. Keyboards and CRT monitors were unheard of except for the big multi-million dollar corporations. If you were a hobbyist you were lucky to have switches, and a display like a cheap four-function calculator.
How did you manually create the strings of bits? Well, back in the day (you kids get off my lawn!), there were coding sheets. They were exactly one instruction wide, marked into bits, and had columns for each field in the assembly instruction. For example, the first 5 bits may be the opcode, next the first operand type, then the details of the first operand, then the second operand type, and so on. Those were filled out by hand in binary. Repeat for every instruction in the program. Then they had to be entered into the machine, at first by toggle switches. Set 8 switches, press the enter button, go to the next byte, repeat. As you may guess, one of the very first programs written for most machines was some way to read the program from a paper tape, punch cards, or other storage, so you could stop flipping switches. :) Once you had a working system, you could program it to read text and output binary – that was an assembler. The difference between an assembler and a compiler is that the assembler ONLY converts from text to binary and back. The text version of the assembly has to have all the same fields that the coding sheets did. They were just written in something that a human could learn to read. Cue horrible memories of coding a Z80 in machine language to produce saw-toothed waves back in 1982 for my Physics 111 lab at Cal. (Shudder.) The worst thing about keying in the hex code was that if you missed an entry, suddenly a memory reference would become a command or vice versa, and off the pointer would go to gibberish land. Debugging was non-existent, and you just had to keep trying to re-enter the code until you got the chip to do something. Needless to say, the experience scared me away from low-level coding forever. Minor nitpick: An n-bit processor doesn’t mean that each instruction is n bits – for example, Itanium was a 64-bit architecture, and contained three instructions in 128 bits. Good point, thanks for the correction!
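For what it’s worth, the made-up nibble encoding from the “mov edx,len” comment above can be sketched as a toy assembler in a few lines of Python. Every opcode and register number here is invented to match that comment, not any real instruction set:

```python
# Toy "assembler" for the made-up 12-bit encoding sketched earlier:
# a 4-bit opcode, a 4-bit register number, and a 4-bit memory address.
OPCODES = {"mov": 0b0001}
REGISTERS = {"edx": 0b0001}

def assemble(op, reg, addr):
    # Pack the three 4-bit fields into one 12-bit instruction word,
    # exactly the job the coding sheets above were used for by hand.
    return (OPCODES[op] << 8) | (REGISTERS[reg] << 4) | addr

word = assemble("mov", "edx", 0b1010)
print(format(word, "012b"))  # 000100011010 -- the bits from the comment
```

An assembler is, at heart, nothing more than this lookup-and-pack step repeated for every line of source, which is why one could be bootstrapped from hand-entered machine code.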
That’s what the Compiler does, actually. The compiler’s job is to translate the language you understand – “print” – into the machine code that creates the effect you desire. Compiled code generally only runs on a specific subset of machines – those that understand the particular machine code the compiler created. You can take your “Print” program to any machine, compile it, and get the same result, but if you took the compiled code from a PC to a Mac, it wouldn’t work at all, because Mac doesn’t speak PC. That’s not a stupid question at all. Asking that question, over and over at each level of explanation, is how I ended up with a degree in hardware instead of software. Gnagn has the basics right, so let me just add one followup and get even more pedantic. Assembler is not just one language. There are as many different types of assembler as types of processor. The main CPU in your computer speaks one version, which is different from the computer in your car engine, which is different from the one in your microwave, which is different from the one in your modem/router, which is different from… You get the idea. But for all of those different assembler languages, the Basic/C++/Java code is always the same, so the programmer doesn’t have to care what the assembler looks like. This was the original reason for high-level languages, before the Intel processors got so common. The differences can be hidden by the compiler, which reads the common high-level code and translates it to the specific binary assembler code for the kind of processor you actually need. The compiler is what allows someone to be a “programmer” and not have to get separately certified for every unique processor type in existence. Not to mention, it saves us all from having to write in assembler. :) NO! I was promised I would never ever have to remember PAL-Asm existed again. The post right before this one in the RSS reader was from Lambda the Ultimate, and I managed to mix them up. 
When you described Java as “slow, really slow” I thought, “Holy shit, the comments section will be insane.” Just mentioning Java on LtU is enough to spark a flame war, let alone calling it slow. But no, it was d20, so the comment thread was mostly sycophancy. (Also: heck yes full text RSS feed) Yeah, I’m not seeing sycophancy here on the topic of Java. I’m seeing polite disagreement on the issue of whether Java is ass slow, which the majority of commenters seem to have decided it is not. Sidney: That’s why the assembly code exists – so the computer doesn’t need to know. The C gets compiled into basically that exact code by something that understands print, and the BASIC gets interpreted (or sometimes compiled) by something else that knows what print means. Shamus: I disagree on your Base 6 example. I think the code for printing that out would be considerably easier in BASIC than in C, because you’re basically going to be writing the same things (take the number modulo 6, put it in the next slot to the left, and recurse on the number divided by 6), but of course BASIC makes things like that much easier than C (how much space do you malloc for the string? In BASIC you don’t care). Actually, it’d be pretty easy to do in either if you just wanted to print it and ignored the fraction part.

void printbase6(int num) {
    if (0 == num) return;
    printbase6(num / 6);
    printf("%d", num % 6);
}

I bet BASIC would be easier for writing the fraction part, just because C makes a lot of things harder than they should be. Not a bad explanation. I have some things to add: You say that C/C++ seems to be the sweet spot for complexity vs simplicity. I believe that C/C++ don’t so much hit “the” sweet spot in complexity vs simplicity as hit “every” required sweet spot, from kernel to high-level GUI application programming.
In fact, it illustrates most magnificently the drawbacks that a unique lingua programmatica has: integrating every feature for everyone results in something that is only really understandable by seasoned programmers, who will only use a small subset of functionalities in any given program. You could conclude that, in the end, there is no “best” programming language, because the criteria for determining best depend on the design goals, time constraints, programmers’ experience, etc. C is the English of programming. Like English, it’s flexible enough to cover most concepts, and if there’s a concept it can’t cover, it will simply steal the concept from some other language and pretend it was there all along. Most people can grasp the basics of both languages with a bit of schooling, but it takes years being immersed in the language to truly grasp all its nuances and call yourself fluent. Echoing comments above – Java is really not slow once the JVM is up and running. C/C++ can be faster, especially on low-level stuff, but Java is in general a lot more useful with its libraries nowadays. @Sydney: The code goes through a program called a compiler, which turns the instructions into 1s and 0s that the processor can execute. The binary file is what is executed, not the text file. I want to add that C is actually not that much faster than Java any more. Years ago, it was. By now, not so much: (The only thing that could be called “slow” is Python3, everything else is pretty much similar) On average, it is less than a factor of two. To someone in computer science, a fixed factor is nothing, we are usually interested in orders of magnitude. You could essentially just use a machine that is three times faster, and program in Java. This is often way cheaper than writing code in C, because C is more complicated and therefore more error-prone, which means that it takes that much longer to finish any given task.
And since even a single programmer-month is more expensive than a dozen computers (and the computers stay with you for more than a month to boot) this more often than not makes the “easiest” language a good choice for most tasks. Instead of throwing money at a problem, we beat it to death with processing power. :) Additionally, writing the same code in a higher-level language (such as C# compared to C++) is not just “easier”. In C++, you have to write your own memory management system. In C#, you do not have to do that, but instead you can spend the same amount of time on optimizing your code. Assuming infinite time, the C++ code will (nearly) always end up faster. But assuming limited time (and realistically, your deadline was last week), you will often end up with optimized C# code compared to unoptimized C++ code, because the C++ guy spent all his time writing stuff the C# guy got for free. I dare you to take a single hour and implement and tune a simple application once in Java, and once in C++. Your Java version will most likely run faster, because it can use sophisticated algorithms and optimizations and be properly tested, while your C++ version will be pretty much bare-bones, if you can even finish it in this very short time frame. And it probably crashes ;) But most people do not choose language by listening to reason, but rather by “I’ve always written my code in [antique monster]! There’s no reason why we cannot use it for this project!” I’m 50% graphics programmer. I’m pretty sure I’m not the victim of dogma when I insist that the speed advantages of C++ are worth the cost. Sometimes those “trivial” speed differences are really, really important. And you need access to libraries / SDKs that won’t be available to Java. You should check out the XNA Framework, Shamus.
C# is of course slower than C/C++, but I find writing in it really satisfying, for some reason, and ironically enough I hated Java back in school… Edit: Btw, I don’t mean this in a “C# will change your mind” way at all, I simply mean you might be interested to check out XNA out of curiosity or to experiment, since I know from some of the programming projects you’ve written about that you like to do that sometimes. (Edit: I worded this poorly, and I think it may come off as an insult. Please don’t take it that way.) Then say that. It’s a trade off either way, and what constitutes “Really slow” depends on the problem space. Using something besides C for graphics is probably a bad idea, but if you are working on a web app, then Java, or something like it, is great. Heck, even Ruby is fine unless you are planning to become the next Twitter. Uh. I think it’s better for Java programmers to not be so thin-skinned than to add a bunch of qualifiers that the target audience of this post will not care about in the least. (This is for non-programmers, remember.) I spend a lot of time in the graphics area, so to me, Java IS really slow. :) (And to be fair, I didn’t hear about how much the speed gap had closed until this thread. I tried Java about 4 years ago and concluded the differences were too extreme for me to spend time getting to know the language better.) Shamus, I think part of this is that Java is still very much a language that is growing, developing, and becoming more and more optimized. While C and C++ constantly see the addition of new libraries, they are older and more mature libraries for which speed improvements are likely harder won (in terms of compiler optimizations and whatnot). Because Java is JIT compiled code and Java compilers are comparatively young, there is, or at least was, apparently a good deal of performance yet to be wrung. At the actual language level it’s a fine, high quality language. At the compiler level it has seen much improvement. 
So while it may not be the preferred language for an area as twitchily performance dependent as 3D graphics, it performs quite well in many other areas. I think Java has established its longevity and isn’t going anywhere. And it’s certainly not the whipping boy it was when I was learning it in 1999. If you do any work that’s not 3D graphics dependent you may wish to give it another chance. You may be pleasantly surprised. I’m a C#/ASP.NET programmer, actually, but I get your point. However, if the goal is to educate non-programmers, in a broad way, then mentioning performance at all doesn’t seem all that useful, especially next to your good VB example. Maybe something like “Often used for server applications”, or something like that, would be better? Not sure. Regarding your parenthetical note, I haven’t used Java since college, (except as an end user, and even then, only sparingly). At the time, it seemed to offer some benefit over C++ in that “everything” (but not really) was OO, and it had built in memory management. These days, I’ve seen and used better, more cohesive, and just in general better designed languages. Java seems to suffer from a lack of an identity. (C# does too, but it seems to have learned, to some degree, from Java’s mistakes.) My information about Java is out of date at this point, but I do read things occasionally about Java, and I never see enough interesting stuff there to bother trying it out again. I wouldn’t turn down a job, just because it required Java, but I’m not itching to use it (or C# for that matter) for a personal project. The reason we Java programmers get a little twitchy is that you were so dismissive of Java that new programmers would be completely put off learning it. It’s as if I described programming, used Java as my natural language of choice and then mentioned C at the end as ‘unnecessarily low-level and non-object oriented’.
That sentence may have some elements of truth to it, but it’s more than a little unfair to C and would discourage people from learning it. Let me just say, though, as a guy who professionally programs in both Java and C++, that no student should ever study Java until their last year. It’s too much like BASIC in that it makes it too easy to take shortcuts without actually understanding what you are asking the processor to do. At my university we did a whole year of Java before touching C++. I found it to be a perfect way to go about it because we learned OO concepts and how to structure and write programs without needing to worry about memory management and with far more useful compiler error messages than you’d get in C to get us up and working faster. The rest of the course was in C++ where we could really dig deep learning to write memory systems, optimize code, deal with preprocessor macros etc. I’ve been working in the games industry for nearly 2 years now and firmly believe learning languages in the order I did made it a very easy transition. Don’t you think it is a lot easier to first learn how to use something, and then learn how it works on the inside? “but if you are working on a web app, then Java, or something like it, is great” Please, nobody do this. Maybe it’s great for you, but having to deal with some huge Java-laden webpage on a lower-end machine sucks. (Especially when I have to use different browsers for different tasks because the site is differently buggy in each one. This is what I’m talking about. That system is great… for hurling invective at.) Unless you’re making something just for yourself, I mean. Then I guess you can do whatever you want. Perhaps I didn’t phrase my point well, but I wasn’t referring to Java applets, or client apps. I meant using Java on the backend, and rendering HTML. Javascript is a different issue, but if that’s causing performance problems, then you have done something seriously wrong.
When I work on a personal project, I prefer minimalist Javascript, if I use it at all. I agree. Generally, for web development I think the language of choice should be no lower level than Java or C#, i.e. managed code with a level of hard abstraction from the bare machine (no pointers or malloc). Assuming that you’re talking about OpenGL, then it’s all made available by the JOGL and LWJGL libraries. It’s not an ideal solution (and can be quite slow), but the Java Native Interface can give you access to just about anything. Automatically generating Java, Python and C# bindings for a C++ library is what I’m working on at work at the moment. And another thing… If you native compile Java (such as with Excelsior Jet, which my company uses for security reasons) then you get some additional speed increase as well, and completely lose any penalty for calling native code. One last thing… I’m actually starting to get interested in Objective-C, which seems to have a lot of the benefits of C++ and Java and few of the downfalls. It’s fully native, object orientated, can directly call C and C++ code and has optional garbage collection. Why would you need to work on your own language bindings when there is SWIG? I’m using SWIG, but there is a non-trivial amount of work to be done to actually implement the input to SWIG. Especially when your build system is CMake… A fair analysis if (and only if) you actually need the features C# provides that C++ doesn’t (such as memory management). I wrote a compiler (in college) in C++, and didn’t malloc a thing – I had a few fixed sized buffers to contain the next symbol and wrote my output directly to disk. The memory management in C# wouldn’t have bought me anything, so I had the same amount of time to optimize as the C# programmer would have had. :) What, what?!?
For my job, I’ve had to program in Java, C, and C++ (mostly the latter) so … “This is often way cheaper than writing code in C, because C is more complicated and therefore more error-prone, which means that it takes that much longer to finish any given task.” Um, syntax-wise, I don’t see all that much difference, except for pure C not having objects. Objects — and everything being such — are often more complicated than just having a function to call, and Java has its own peculiarities (only passing by reference, for example) that can complicate certain things. I don’t see how C, therefore, is inherently more error-prone or more complicated than Java is (we’ll get to memory management later). “Instead of throwing money at a problem, we beat it to death with processing power. :) ” This only works if you can convince your USERS to buy new machines. Sometimes, that isn’t possible. And with really, really large software suites, if all of them require that you might have a real problem in that your inefficiencies add up process-by-process to an utterly ridiculous level. “In C++, you have to write your own memory management system. In C#, you do not have to do that, but instead you can spend the same amount of time on optimizing your code. ” Huh?!? Using “new” and “delete” doesn’t work for you? SOMETIMES, a new memory management system is required. In some cases, that might not have been required with something like Java. But in a lot of cases, that new memory management system is written BECAUSE the default system simply doesn’t work. Thus, C++ would win in those cases because it LETS you re-write that memory management system to do what you want, and lets you take the consequences. Otherwise “x = new ClassA()” and “delete x” works fine. There are issues if you forget to delete the class you created, but then again at least it definitely gets deleted when you don’t want it anymore.
“Your Java version will most likely run faster, because it can use sophisticated algorithms and optimizations and be properly tested, while your C++ version will be pretty much bare-bones, if you can even finish it in this very short time frame. And it probably crashes ;)” And you aren’t using libraries in your C++ version to get pretty much what Java gives you because … why, exactly? Using libraries lets you use those same algorithms and optimizations, with the same amount of testing. I have no idea what C++ code you have that doesn’t allow for this. Well, Java MIGHT come with more standard jars, but that’s about it as far as advantage goes. I’m not saying that C++ is the best language to use. My view on it is this: Java does everything for you, C++ gives you control. In many cases, for very basic functionality, standard libs and jars gives you what you need for both. Java might go a little further in that, but because a lot of those are so tightly intertwined it’s often VERY hard to change the behaviour if it doesn’t suit you. C++’s advantage is that it is, in general, pretty easy to do that if you need to. Now, that also sometimes makes it easier to screw up, since C++ won’t tell you that you are screwing up until you run it. I prefer more control, so I prefer C++. Python’s also cool for quick work where I don’t need graphics. But I can see why people like Java; it’s the same people that like Microsoft Word [grin]. I will say that the standardization effort of C++ and the invention of STL really did help the language. I only used C++ in High School, (keep meaning to relearn it) so take this with a grain of salt, but I consistently hear several complaints about C++. 1. Manual Memory Management. The problem isn’t with new and delete. It’s with having to always call them yourself. Yes, it’s deterministic, which is good, but forgetting to delete your variable when you’re done leads to a memory leak.
A garbage collector is non-deterministic, but it means that memory management is one less thing to worry about. How big of a deal this is probably depends on the project. 2. The surface area of C++ is so large that everyone ends up using a subset of the language features, and it is often a different subset. This is probably true for other languages as well, but I get the impression that it is really bad for C++. 3. The syntax itself is needlessly complicated, so e.g. it’s hard to write parsers for it. Like I said, I don’t use C++, so I can’t judge if these issues are accurate, but I do get the impression that a lot of the complaints amount to whining that a hammer makes a really bad screw driver. Nondeterminism isn’t what makes a garbage collector :) There are C++ libraries that somewhat relieve you of having to do manual memory management – Boost’s smart pointers, for example. The problem is that reference counting doesn’t catch cyclic dependencies (so there is still stuff you need to watch out for and manually free), and that a lot of the smart pointer syntax is incredibly arcane and it’s easy to mistakenly add an extra addref or release somewhere. 1) Yep, that’s the problem I mentioned: having to remember to delete it (and preferably nullify it) or else it’s a memory leak. Mostly my reply was aimed at needing to write your own memory management system, which I took as something far stronger than “remember to delete after your ‘new’s”. 2) Same problem happens with Java for certain. A lot of people on the project before me used GridBagLayout for the GUI work. I had a hard time getting that to work out, since it resizes its grid according to what I ask it to stick in. After discovering that other people struggled with it, I used SpringLayout. The more variety in your libs you have, the more likely it is that everyone will pick their favourite and use it.
And that’s not even including things like window builders … 3) Comparing it to Java or even Python, the syntax is remarkably similar. I knew pretty much what everything did before learning them because their syntax matched C’s and C++’s so well. Other languages might be better. I was really not going for syntax, which is really the same in nearly all OO-languages. C++ is much more difficult to get right because of things such as these:
- No garbage collection. Seriously, freeing all your memory perfectly and correctly is a ton of work when projects get big and does not improve performance by much, compared to a GC.
- Segmentation Faults. In Java, you do not have to bother with arrays, you can just use templated Lists (though I hear the C++ people have a library for that too), which will only go into pointer-nirvana when you make grave mistakes and offer very practical accessors such as for-each loops. And even if stuff goes wrong, it is easy to debug or handle.
- Hard Typing and no pointer arithmetic. It is very easy to mess up a construct along the lines of *pointer& in C. Since you are not allowed to do such things in Java, you cannot make mistakes and that means you do not have to debug such problems.
- It is incredibly easy to write incredibly ugly code which nobody else can understand, because there are twenty billion ways to do every simple thing and another twenty billion shorthands which are hard to read. Sure, once in a blue moon you actually need that functionality, but at the same time, it would be a lot easier if your everyday code was written and understood twice as fast. And if you write twice as fast, you get to do ugly hacks twice as often, and we all love to do those, right? :D
A programmer spends the majority of his time debugging, and that means that code that can be read and understood quickly is incredibly important. C++ is really bad at that.
That said, I am not much fond of Java either, with its horrible Swing-layouts and sometimes rather complicated ways of doing things: Creating a new, empty XML-document takes four instructions instead of one, and basically requires you to look them up every time you have to use them. ;) That said, the language I am currently learning in my free time is Objective C. “I was really not going for syntax, which is really the same in nearly all OO-languages.” You should try F# (yes, it’s OO, because it’s just as multi-paradigm as C++, just the paradigm “functional” on top of it). You pretty much have to learn like 80-90% totally new syntax. Good thing is, the syntax is still pretty good to understand when reading. Example:

type Person(_name: string) =
    let mutable _name = _name
    member this.Name
        with get() = _name
        and set(name) = _name <- name
    member this.Greet someone =
        printfn "Hello %s, my name is %s!" someone _name

(Yes, F# reintroduced the good old printf family of C-functions, just with real built-in strings instead of pointers) For 2), I actually meant language features specifically, not API, but I suppose they amount to the same thing. For 3), yes, it is fairly similar from a programmer’s perspective, but what I was referring to, poorly (I couldn’t think of the right word), was that C++ has a context-sensitive grammar, which leads to undecidability, and really nasty errors. (The guy who wrote that site seems to really hate C++, but it comes up a lot in discussions. I haven’t seen a rebuttal of it, but that doesn’t imply that it’s correct.) Then again, you know the saying: There are two types of programming languages: those that people complain about, and those that no one uses. I was trying to work out where to jump in, and here seems as good a point as anywhere. I don’t have time to make a proper post, and most people have done great already, so have some bullet points:
- C still cheaper than C++ (barely)
- Java pretty fast nowadays, esp.
with JOGL and JNA
- STL not great for portability (much worse than Java)
- Neither Java nor STL are good enough for high-performance apps (i.e. games). Often need to write own containers for specific problems, avoiding cache misses etc.
- Garbage collector is a pain (Java, C#), but can be worked around
- C# as a scripting language is lovely (interop)
- C#/XNA woefully inadequate on the XBox 360 (Cache misses for managed memory, mark-and-sweep garbage collection churns on an in-order processor)
- I’ve used C++, Java, C# and Python (and BASIC) a *lot*. I love them all, and assembly too. But not VB (shudder).
>> The only thing that could be called “slow” is Python3 << That just depends on how many languages are measured :-) You only looked at measurements made on x64 allowing all 4 cores to be used. More languages were measured on x86 forced onto just one core. (And ye olde out-of-date measurements from years ago included even more language implementations.) >> less than a factor of two << Compare Java and C programs side-by-side. >> In C++, you have to write your own memory management system. << Really? Why wouldn't you re-use a memory management library written by someone else? Why don’t we all speak English? As you mention languages have different purposes. It’s a trade-off between quickly programming something and speed. Speed is becoming less and less of a concern these days. Our GUI is written in C#, our server backend in C++. It really doesn’t matter if a language is “slow” for the GUI side. Nonetheless I’d say speed doesn’t matter much these days unless you’re working on embedded devices or kernels. Else? Go for a higher level language, you’ll program a lot faster. Graham’s story is indeed nice. Since they were competent in a higher level language they could implement features faster and provide easier, more maintainable code. And that’s why English is spoken so much worldwide, isn’t it?
It’s so easy to pick up, not too many prescriptive rules, little in the way of conjugation. The trade-off there is that it’s a lot easier to speak it than to understand it, precisely because of the “fast and loose” feel of the language. No, it’s not. In fact, English is one of the more difficult languages to learn. It’s got a lot of rules and, perhaps more importantly, a lot of exceptions to those rules. It’s not the hardest language — some of the American Indian languages are harder, and the major Chinese dialects are both tonal and ideographic, making both speaking and reading them a grueling process to learn. But it’s up there. The reason it’s so widely spoken is because of geopolitical, economic, and cultural influence. It was the language of the British Empire, which came to dominate most of the globe by the end of the 18th century, and it’s the language of the US, which has been reasonably influential since World War II or so. For similar reasons, in ancient Rome, the dominant language among the educated upper classes was… Greek. OT, more or less: this is one of the most reasonable and even-handed discussions I’ve ever seen on the Internet. Thanks so much for posting this! Now I want to flood your questions page with dumb non-programmer questions about programming. I’ll restrain myself. For now. I kind of like “dumb non-programmer questions about programming” personally, it’s always an interesting challenge to explain something about programming in a straightforward and clear manner. A lot of the time I think we as programmers underestimate how arcane some of the stuff we say can sound. Bear in mind that what makes c easily able to do the magic thing with the printing specially-formatted output is in the #include <stdio.h> not c itself. BASIC is similarly extensible, even at some pretty primitive versions, via the DEF FN functionality. As a budding programmer, I’m desperately trying to glean some sense of what languages I should learn from all this. 
I’m getting the impression that programmers are a completely conservative lot, and I better learn some powerful languages soon, or I’ll be jaded and forever stuck with an inferior language if I go with the popular stuff. I really don’t know, I’m not too into the industry, history and platform-dependence, I just want to code stuff. Python is a good place to start. It’s a bit of a toy programming language, and it tends to encourage really bad programming practice in some cases, but it’s very easy to pick up and very fast to code. It’s also of some use in web programming and an increasing number of actual applications are being written in it. If you want to program for windows you probably want to be looking at C# and/or C++. I’d start with C#, it’s a blatant knockoff of Java and it maintains a few of C++’s more dodgy design decisions, but an easier language to work with. If you want to program for Mac learn Objective-C. Here, it’s the best option by far. If you want to program for the web then Java and Ruby are your best bets, though Python and Groovy* have some utility as well. If you’ve got games on your mind… that’s a tricky one. C / C++ will give you the performance, but will also make your life a lot harder. Java has pretty good support for 3D graphics and is a good language for writing any AI (I did my PhD using it and it needed 3D graphics and a lot of AI). Likewise, C# has pretty good support for building games, XNA is a good place to start. *Groovy is my personal favourite scripting language. You will be jaded and stuck with an inferior language no matter what language you actually choose. As demonstrated here: It really doesn’t matter what language you pick as long as it gets the job done. The height of the hurdles for different tasks depends on the language you use but with an affinity for programming and experience you can overcome these hurdles in a reasonable time or pick up another language which can.
As a programmer, you will sooner or later have more than one language under your belt anyway. The first few languages you learn are the hardest. Everything after that is easy. I’m suspicious of any professional programmer who isn’t comfortable in at least 3 programming languages. So where to start? A common concern for new programmers, but it turns out that it largely doesn’t matter. What’s more important is to just start. My recommendation would be to look to what you want to accomplish, and look what languages people in that area tend to use. There is probably a good reason, typically either the language is innately well suited to the task, or it has good libraries for the task. In the long run, you’ll want to learn enough languages to cover the range of techniques. My really quick and dirty list:
Object Oriented: Java or C++. C# or Objective-C are also fine, but tend to cluster more strongly around specific companies’ offerings (Microsoft and Apple). Object oriented imperative languages are where most work is being done these days.
Scripting/glue language: Python. Perl or Ruby are also good options. Most programmers end up doing lots of glue work or one-off work, and a scripting language really speeds it up.
Imperative (notably, sans-objects): C. Maybe Pascal or Fortran. Non-object oriented imperative languages are disappearing, which is a shame because working in one sheds a lot of light onto object-oriented programming.
Functional: Lisp or Scheme. When you start to really “get” functional programming, it opens your mind up to lots of useful techniques that apply even outside of functional languages.
I would add that it’s important to learn an assembly language at some point. Knowing exactly what the computer is capable of, and what’s just being abstracted away by your choice of high-level language, stops you doing silly things like for(int i = 0; i < arraylen; i++) strcat(mystring, array[i]); I think we can add to that a good grasp of algorithmic principles.
This way you get all your bases covered: what is theoretically possible and how it works in practice. “If all you have is a hammer, everything looks like a nail.” A programmer that can’t learn a new language in a couple of weeks is like a carpenter that can’t learn a new tool in a couple of minutes. Hopefully, the programmer (or carpenter) in question is merely inexperienced, because that is something which fixes itself with practice. If you’re looking at a programmer’s resume and it only includes one language, don’t hire them. Even if it’s the language you’re hiring programmers for (unless you purposefully want a newb so you can mold everything they know, muhuwahaha. er. ahem) And, really, learning a new syntax is trivial (except maybe APL). The reason it takes time to learn a new language is you have to pick up on idioms and explore the libraries. Although, these days, Google makes that a lot easier than it was when I started :) I’m not sure why, but I feel like this article should have a reference to Lingua::Romana::Perligata. Damian Conway is awe inspiring, isn’t he? :) This reminds me… How does the optimization of your website fare, Shamus? Been hit with another wave of traffic that would have crashed the old site off of Reddit or something yet? Sadly, no traffic spikes in the last month or so. Just get Stephen Fry to twitter about this place :P Where does Prolog stand on the complexity vs. simplicity scale? DON’T DO IT! Prolog is a different mindset. The closest thing it resembles that most (*n*x) programmers deal with is writing a “Makefile” – you write rules, and ask the database questions and it tries to deduce the answers from the rules. You don’t use it to “write programs” – it’s not something you’d make a web page with. It’s a deduction engine and has limited use outside of that field. The best trade-off I’ve found is Cython, a branch of Python that’s closely linked with its C framework.
You can do anything you’d normally do in Python, but it’s amazingly integrateable with C, in case you run into tricky things that need C, or would be easier to write in C. It’s very versatile, and generally very user-friendly. To anyone that likes mathematics, logic puzzles, and/or programming: Go to Project Euler. Awesome problems that you solve by programming. Very awesome. Finally, realize this: Chris Sawyer wrote RCT1, RCT2, Transport Tycoon, and Locomotion entirely in Assembly. Worship the man. WORSHIP. Being an amateur programmer, I don’t judge programming languages by adaptability or efficiency. I can devour memory and make it only work on Windows 7 64-bit in Bulgarian, and I’d still be over the moon; my code is based on what I’m doing. At the moment, I’m not very diverse with my languages. For networking, I like Java, albeit I will never use that damn language outside anything to do with sockets and packets. ‘Number crunching’, databases, and other such business practices are lovingly prepared in C++. Go into the indie gaming side of me, however, and it takes a twist towards C# coupled with XNA. Now that I’ve put it on paper(?!), I can see that my choices are mainly based on the ease of the language to do the task I want it to do. I don’t have much pride in crowbarring a fully-working packet sender into C++, but I do have the desire to keep head-desking to a minimum. :) A reasonable programming discussion? This is still the Internet, isn’t it? oO But this may be an opportunity to get help with a problem I have. I’ve tried to learn one or two new languages recently (since C and Perl are the only things I can work with and I’m interested in new things). However, every single book I stumble upon for whatever language is utter garbage. I’m a learning-by-doing guy so the best way for me to learn a new language is getting a task and a programming reference to accomplish this task.
As a hobbyist I have no running projects, and all books have either copy&paste examples explaining what they do (which I don’t need, because I can handle a reference book and I’m not a typist) or outlandish tasks with no foundation in reality (let’s program a car, or otherwise stupid projects; it should be a program that not only runs but accomplishes something useful). Someone got an idea or some recommendation on how to tackle this problem? Because I’ve tried to broaden my horizon a bit for over 3 years now and I’m always falling back to my status quo.

Well, if you want to branch into Java, think of a nice web-based app you’d like people to have access to. Only think of something that would work better client-side than server-side. Since Java is very cross-platform, you could offer this app or functionality to almost anyone with a robust web browser and Java installed.

Structure and Interpretation of Computer Programs is a decent book for learning Scheme. Starts off fairly simply (it is an introductory text after all), but ends up getting you to write a Lisp interpreter in Scheme.

You could always look on open-source websites such as SourceForge.net or GitHub.com. Their search engines let you filter by project language and release status (e.g. coding, alpha testing, production, etc). So you could browse the non-production projects in a language you want to learn, find a project that sounds interesting, download/checkout the source, and see if they have a TODO or BUGS list. That way you can use the existing source as an example of the language, and you can extend it for the practice.

Btw, does anyone still use BASIC for anything serious? I think a better example of a very high level language would be Python or Ruby. Also, you forget that the abstraction level is not the only factor. Different languages offer different features. Some languages are compiled, some are interpreted, and some offer both. Some languages are procedural, some are functional, others are object oriented.
There is static typing and dynamic typing. Then there are logical languages like Prolog. At the end of the day, speed and optimization are only a fraction of the argument. For example, if you compare Java, Jython and Clojure, they will all likely have similar performance. They all compile into Java bytecode and run on the JVM. But picking one language over the other is a matter of features and aesthetics. If you want static typing and OO, use Java. If you want dynamic typing and more flexibility, use Jython. If you want a functional language with Lisp-style macros, use Clojure.

It is a very complex question – nevertheless, this is a decent crack at explaining it to a non-programmer. Still, you should put a disclaimer there that it is an oversimplification. :)

Yes, people use Basic — especially Visual Basic. Note that these days Basic has OO capabilities and the occasional label instead of omnipresent line numbers. You wouldn’t recognize it :)

Not sure about Clojure, but Jython has significantly slower performance than Java — slower than CPython, even. My understanding is that one of the main reasons for this is that the JVM doesn’t natively support dynamic typing, and so this basically has to be emulated. Plans for Java 7 include an update to the JVM to make this work correctly. Groovy, which I mentioned earlier, is capable of both static and dynamic typing, which is a nice feature.

I think computer languages are to programmers as tools are to carpenters – you need a few, for different situations. (And any decent programmer – not elite, just “normal” – should be able to adapt to new languages fairly easily.) I started with BASIC, because that’s pretty much all you could get your hands on at an ’80s elementary school ;) High school was Macs, so you learn HyperCard. In University I was subjected to Modula-2 before getting the true power of C and C++ (and Unix in general).
I started Web programming with C++, but once I was out of school, webhosting companies had this odd dislike of compiled programs on their servers. (Can’t imagine why…) So it was time to learn Perl (which I think gets a bad rap, to be honest). My current job is VBA with some JScript and batch files. (Mainly because the intended users pretend it’s just an Excel/Access file ;) And just for kicks I’m trying to learn some Java (so I can fix some stupidity Corporate wants us to implement). Never learned assembly, though…

Why are there so many programming languages? Why doesn’t everyone just pick the best one and use that? Well: why are there so many tools? Why doesn’t everyone just pick the best one and use that? You can hammer in a nail with a saw, but it’ll take a while and be a pain to do it. Equally, you can ‘cut’ that piece of wood in half with a hammer, but it’ll take a while and produce a really ugly end product. Programming languages are tools. They’re (mostly) designed with a particular purpose in mind, so there is no ‘best’ language in the same way that there is no ‘best’ tool.

“The goal is to simply print the worlds “Hello World!”, and that’s it.” Print the “word” or the “world”? Funny typo, just letting you know.

There is exactly ONE thing I like better about C++ compared to Java, and it’s template programming. Which, ironically, is the one thing most C++ programmers (who are actually mostly C programmers with objects thrown in) don’t use much.

Java is probably not worth it for triple-A games programming, but IMO it’s definitely worth using for more indie/retro games. The only disadvantage of Java is that if you want to leverage its cross-platform capabilities, you need to stick to OpenGL instead of DirectX.
At my workplace we do a graphics-heavy program (though not speed-critical like an action game) and we’re busy switching our code from C++ to Java where necessary (because we have a client/server architecture, whenever we move functionalities to the client or want to remove cross-platform headaches).

OK, don’t have time to read it all today (sadly), but I do have to say one thing about C/C++. Generally, I agree that many languages have their strengths and purposes – your general point in the article is spot on. HOWEVER. C is just plain stupid. It drives me crazy that it continues to be used, almost exclusively out of habit. SURE, it gives you a lot more power/flexibility, and occasionally you need some of that… but there are other ways to get it than a language that allows bloody POINTER ARITHMETIC (among other crazy things). It’s like saying that you really need that 30-inch, 80-horsepower handsaw with no guard – sure, it’s powerful… but there’s a reason none of those (if they even exist) actually see any use. Even if you made a successful case that C hits some kind of sweet spot for certain things (a big if, but I’ll grant it for this argument), why oh why oh WHY is it used for SO MUCH OTHER BLOODY STUFF?!? GAH!!!

Edit: I’ve done actual paid work in VB, C++, VB.net, C#.net, COBOL (and Cobol), and bits and pieces in a few other things, and C# is really not much like C other than syntax, but C++ is. Python is on my hobby list, but I haven’t touched much on my hobby list in a while, really. /Edit

Just to end this post on a little nicer note, I will point out that there’s really nothing special about the Java language itself (it’s primarily a rip-off of C) – the ability to run multiple places with the same code could easily be (and in a few other cases that aren’t nearly as widespread, actually has been) implemented with any language. The big bonus to Java is simply that it’s already been done and the resulting “virtual machine” widely distributed.
Hmm… I commented, then edited the comment to add a list of languages I’ve worked in, and “Your edited comment was marked as spam”. ??? I wonder how I did that – honestly, the only thing I could see about my edit that seemed much different than the rest of my post was a few all-caps words and the names of a couple of the programming languages. Weird.

Sometimes my spam filter is just a sadistic imbecile. It will let pass some obvious bit of spam and then eat a perfectly innocent post with no “danger” words and no links. I dunno. In any case: Comment restored.

Thanks! Not just imbecile, but sadistic imbecile, eh? Sounds like you should mine that code for useful video-game AI teammate behavior – sadistic imbecile would be an improvement for most of them.

I’ve already seen that implemented in at least one game: Army of Two: The 40th Day. There are spots where you are supposed to save civilians from the bad guys. When you come up to one of these spots, rather than your AI teammates getting to cover or shooting back at the guys actually firing on them, they just stand there while the bad guys kill the civilians. Uh… what?

Wow, a lot of programmers sure do read this blog!

Haskell! That is the language to which my vote goes.

One more point on Java: I was taught it in first year in CS&SE, and then they changed course directors and from year 2 on it was C all the way. More relevant, maybe, but when you’ve done your ‘Coding 101’ courses ignoring memory management, arriving into year 2 with lecturers expecting you to be up to speed with C memory management is a real issue. I never really got over this hurdle personally (my nightmares are haunted by segmentation faults and pointers) and struggled through the degree. Most of my friends who stayed in SE work with Java, ironically, but the whole experience soured me and now I sell PCs for a living! To bring this back on point, I think a lot of talented programmers can underestimate how much the core conceptual differences between languages can throw novices for a loop.
I still love the core logic of code, which is why I find discussions of programming topics so fascinating.

I had a similar experience at Curtin University. LOGO man.. LOGO..

I enjoy reading all this material about general-purpose languages. One thing most people would be amazed at is the crazy variety of languages that exist for very specialized fields, especially academic fields. The wikipedia comparison pages are fairly instructive in that regard.

I can kinda understand why, seeing how this discussion is primarily focused on languages meant to complete some specific task, but I think this comment thread could stand to be a bit more esoteric. A lot of programmers (in my experience) like to toy with esoteric languages just as fun pastimes, and they’re fun to show non-programmers to reinforce all those movie-based stereotypes about programming (that it’s just a bunch of seemingly meaningless symbols, that is). Some examples:

My, and probably most people’s, first esoteric language: Brainfuck. Only eight instructions, no graphical capabilities (though you could theoretically make a roguelike of some kind, as far as I know), and pretty much everything you ever write in it will be a blast because you actually made the language do something recognizable. “Hello, world!” as seen by Brainfuck:

++++++++++[>+++++++>++++++++++>+++>+<<<<-]>++.>+.+++++++..+++.>++.<<+++++++++++++++.>.+++.------.--------.>+.>.

A more interesting one: Piet. A language where every piece of code is literally a work of art. The code is an image, with a cursor that gets its instructions and where it needs to go next from colors in the image. "Hello, world!" as seen by Piet: (the program is an image, not reproducible here)

One that takes the term "esoteric" to levels of pure insanity: Malbolge. Named after the eighth level of Hell, the language is so contrived that it took two full years after it was made public in 1997 for any functional program at all to be written, and even then, the program wasn't even written by a human. "Hello, World!"
as seen by Malbolge:

('&%:9]!~}|z2Vxwv-,POqponl$Hjig%eB@@>}=<M:9wv6WsU2T|nm-,jcL(I&%$#" `CB]V?Tx

A slightly more hilarious one: LOLCODE. IM IN YR CODE MAKIN U LOL. “HAI WORLD” as seen in LOLCODE:

HAI
CAN HAS STDIO?
VISIBLE "HAI WORLD!"
KTHXBYE

(LOLCODE has actually been used for web page scripting, so it is definitely a functional language. It’s also a funny language.)

EDIT: I will be moderately amazed if this doesn’t get caught in a spam filter of some kind. Take the link to a .gif, a link containing the word “fuck”, the loltalk, the lines of crap that amount to essentially gibberish… Impressive.

No, marking as spam is only for those that appear innocent – as Shamus mentioned above, the spam filter is a sadistic imbecile.

I’ll see your Malbolge, and raise you Whitespace.

I’ve never quite understood why programming languages don’t program other languages. Program in the easiest language for the task, then output that into assembly. Then compile the assembly and have that instruct the computer. In theory it would be efficient both for the computer and for the human programmer. But I’m a non-programmer and understand only the very basics of coding. I’m sure there’s got to be a reason why that isn’t done.

That’s just what compilers do, actually: turn your high-level (easy to write in) language into assembly, and then turn the assembly into machine code… It’s become trendy of late, though, to make programming languages write their assembly/machine code for “virtual machines” instead of the actual chip in the computer, because you can make the virtual machine the same for every computer – meaning you get to compile it once and run it on any machine. But, let us set that aside and address the core of your question, the unasked next question: well, if we compile programs just like that, why aren’t all these languages at the peak of potential efficiency, then?
The answer is that the assembly code they generate comes from templates and patterns which naturally express what you wrote in the high-level language, and which do not necessarily represent the best possible machine language program you could write for the task. For example, to make a string in assembly (like what Mr Young did at the end of his example):

section .data
msg db 'Hello, world!',0xa

That string contains exactly the number of bytes it needs and no more. However, in even the most efficient compiled language there is, the code it generates for a string will be

msg db 'Hello, world!',0xa,0x0

The extra byte is so that it knows where the string ends. C is, however, blazingly efficient, and the compiler code is so old that people have taught it all kinds of tricks to speed things up here and there (they call it “optimization” of the compiled code)… It gets worse in the next logical jump from C to C++, because in addition to allocating the string, it’s going to allocate memory to contain data about the “string object” that it’s a part of (probably a couple of bytes representing the object ID, several more so it can map object IDs to code, a couple of bytes representing a reference count on the string, and so on). And so far I’ve just covered the memory footprint.

Even in C, when you call printf("Hello, world!\n"); you don’t get the 9 assembly instructions Mr Young wrote to perform the print. You get a bunch of assembly to define the function printf() (which has quite a few features, and so generates a lot of code), code to set up the string in memory so that it can call printf, code to call printf… When you jump to C++, you get even more ‘free code’ to go with the ‘extra features’ of the language. If you compile Java straight to assembly, you get even more free code to do the “memory management” stuff people keep bringing up.
If you compile a really high-level language like Perl straight to assembly (though, to my knowledge, there isn’t such a compiler), then you get Yet Even More ‘free code’ so that it can treat strings as potential containers of numbers and all kinds of spiffy features. And you get all that code even though in each of those languages the program is very, very simple indeed.

In C:

#include <stdio.h>
int main() { printf("Hello, World\n"); return 0; }

In C++, the C code will work. I don’t remember it in Java, but I think it’s

class Main { public static void main(String[] args) { System.out.println("Hello, World"); } }

and in Perl it’s

print "Hello, World\n";

Each language (except Java :) ) gets simpler as you move “up” the hierarchy, but each generates more assembly language as well, to handle all the neat features you aren’t even using in this example program. Whereas a human writing assembly would just generate the 9 instructions needed, because the human knows there aren’t any more features needed.

In theory, the compiler could determine which stuff to include – that is to say, only include the “extra” code if the code being compiled makes use of features that require that extra code. Assembly created by such a system would be, in many cases, perfectly efficient (in other cases, the person writing the high-level code would be using features in unnecessary ways, but the compiler wouldn’t know that). Unfortunately, no one has written a compiler like that… it would be insanely complicated to write.

That may be true for some languages, but many these days have reflective features that let you modify the code itself, and thus have to be prepared for any eventualities.
I say “may” be true, because there’s a lot of past thought that has gone into this sort of thing, with phrases like “NP-complete problems” which may apply here, but I’m not current on the topic, so I can’t say for sure. (I do know that “determining whether or not an arbitrary program terminates in a finite time” is a literally unsolvable problem.)

In BASIC, the first task would be super-trivial. One line of code. The second task would require pages of code. 99% of your programming time would be spent on the second task, and it would be much, much slower than the first task.

You might want to read up on your BASIC, Shamus. Check out a language called FreeBASIC while you’re at it ;) For a concrete example of what puny slow BASIC can do nowadays, check my website link – it’s even in OpenGL ;)

Over the last (and I cannot believe I am writing this) 30 years or so, I have written production real-world code in a significant plurality of the languages mentioned in this thread, including all of the BIG ones, and several others not mentioned. One can only really understand the value of a programming language in the context of its place in history and the understanding of the craft (and I use that word carefully – not a science, and not an art, but elements of both) of programming. The one trend that I see absolutely governing my choice of tools as it has changed over the years is this: human time is more valuable than computer time, in most circumstances (I will not argue that there are exceptions).
Therefore, the more the development environment (including language features like garbage collection and my latest love, dynamic typing, as well as the libraries, and even the IDE in which the work is done) takes care of, the more the human time is expended on the actual business (or scientific, or gaming) project at hand, and not the mechanics. In many programming domains (with the notable exception of Shamus’ one), optimization is too expensive to do except when it is really needed – because humans are more expensive than machines, over the lifetime of an application. I teach my team to optimize maintainability over everything else. Again, not appropriate in all domains, but I believe it shows the value and trend of the evolution of tools. The lowest-level tools haven’t evolved much since the macro assembler; but the highest-level ones continue to be a hotbed of innovation.

I code often in C++, C# and Delphi. But Delphi is my favorite. It strikes a balance between awesome ease of use (setting up an interface) and power. I haven’t found anything that I can do in C++ that I can’t do in Delphi. It even has the ability to embed assembly code on the fly. It compiles super fast (seriously, anyone used to any other language will wet themselves at the speed). On the Win32 versions, the .exe files it produces are ridiculously small (easily 30 to 40% smaller than other languages). So. Basically. I love Delphi to death.
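Circling back to the compiler discussion earlier in the thread: the “extra byte” that compiled languages append to a string is easy to observe for yourself. Here is a small editor-added sketch (not from the original thread) using Python’s ctypes module, whose create_string_buffer stores a string the way C does — with a terminating 0x0 byte:

```python
import ctypes

# "Hello, world!\n" is 14 characters, but C-style storage appends
# a terminating 0x0 byte, so the buffer ends up 15 bytes long.
msg = ctypes.create_string_buffer(b"Hello, world!\n")
print(len(msg))     # 15
print(msg.raw[-1])  # 0 -- the hidden terminator byte
```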
http://www.shamusyoung.com/twentysidedtale/?p=7452
I have created a database with a table ("Mail") having 2 columns: id INTEGER, content INTEGER. In my application I've tested the connection and the queries work well.

using Finisar.SQLite;
...
string db = "mydatabase";
SQLiteConnection sql_con = new SQLiteConnection("Data Source=" + db + ";Version=3;New=False;Compress=True;");
sql_con.Open();
sql_con.Close();

After this I altered the table "Mail" so it looks like this: id INTEGER, content INTEGER, accountid INTEGER. When I tried the connection again, the following error was shown: "Unsupported file format". Does this mean I can't modify any table? Please help me understand. Thanks!

While it might not be needed, it's good practice to name your database files with a .db extension (like "mydatabase.db"). However, in this case, it seems that altering your table through the sqlite3.exe (or sqlite2.exe?) command-line tool, or an external administrator, made your DB unreadable by Finisar's library. I advise you to download the successor of the SQLite .NET library from SourceForge, as it is compatible with the most recent SQLite version. Remember to import the System.Data.SQLite.dll file into your project and to change your references from Finisar.SQLite to System.Data.SQLite. Also, if you install the ADO.NET 2.0 library, you may want to add design-time support for Visual Studio 2008 so you can edit tables through the Server Explorer.
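For what it's worth, the ALTER TABLE step itself is perfectly legal in SQLite 3 — the failure above comes from the old Finisar binding, not from SQLite. A quick sketch using Python's built-in sqlite3 module (a different binding than the one in the question, used here only to illustrate the schema change):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # throwaway database for the demo
con.execute("CREATE TABLE Mail (id INTEGER, content INTEGER)")

# The same schema change the question describes:
con.execute("ALTER TABLE Mail ADD COLUMN accountid INTEGER")

cols = [row[1] for row in con.execute("PRAGMA table_info(Mail)")]
print(cols)  # ['id', 'content', 'accountid']
con.close()
```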
http://codeblow.com/questions/finisar-sqlite-library-for-c-unsuported-extendable/
SoWinExaminerViewer: viewer component which uses a virtual trackball to view the data.

#include <Inventor/Win/viewers/SoWinExaminerViewer.h>

Viewer controls:
- Left Mouse: Rotate the virtual trackball.
- Middle Mouse or Ctrl + Left Mouse: Translate up, down, left, right.
- Ctrl + Middle Mouse or Left + Middle Mouse: Dolly in and out (gets closer to and further away from the object) (Perspective camera) or zoom in and out (Orthographic camera).

See also: SoWinFullViewer, SoWinViewer, SoWinComponent, SoWinRenderArea, SoWinWalkViewer, SoWinFlyViewer, SoWinPlaneViewer, CorrectTransp, SetKeyBinding

Constrained viewing mode. Viewing mode.

Constructor which specifies the viewer type. Please refer to SoWin.

Adds a new function key binding to the viewer. This method allows you to associate a keyboard key with a viewing function, such as rotation or translation. The specified viewing function replaces any viewing function currently associated with the specified key. Any key may be used, not just "function keys". A key cannot be associated with more than one viewing function at a time. However, more than one key can be associated with a single viewing function. For example, the S and T keys could each be used to invoke the seek operation.

Adds a new mouse binding to the viewer. This method allows you to associate an array of modifier keys (of length numKey) and an array of mouse buttons (of length numMouseBtn) with a viewing operation, such as rotation or translation. The specified viewing function replaces the viewing function currently associated with the specified key and mouse button sequence. Modifier keys (CTRL, SHIFT, CTRL+SHIFT...) can be specified in addition to the mouse buttons chosen. There is no error for specifying non-modifier keys, but they have no effect. The order of the keys and the mouse buttons is important. For example, CTRL+SHIFT+BT1 is different from SHIFT+CTRL+BT1. Likewise, CTRL+BT1+BT2 is different from CTRL+BT2+BT1. A key and mouse button sequence cannot be associated with more than one viewing function at a time. However, more than one key and mouse button sequence can be associated with a single viewing function.

Removes the specified function key binding (if it exists).

Removes a mouse binding (if it exists). The key and button order is important. For example, CTRL+SHIFT+BT1 is different from SHIFT+CTRL+BT1. Likewise, CTRL+BT1+BT2 is different from CTRL+BT2+BT1.

Restores the camera values. Reimplemented from SoWinFullViewer, SoWinViewer.
https://developer.openinventor.com/refmans/9.9/RefManCpp/class_so_win_examiner_viewer.html
Containers on Kubernetes are the modern way of deploying, managing and scaling applications in the cloud. At Engine Yard, we’ve always built products that make it easy for developers to deploy applications in the cloud without developing cloud expertise – in other words, we’ve always helped developers focus on their applications, not the plumbing around deployment, management and scaling. In early 2021, we released container support on Engine Yard (called Engine Yard Kontainers, EYK for short). We spent more than a year architecting and building the product, and we’re happy to share the lessons we learned in the process.

Kubernetes is the way to go: Kubernetes (also known as K8s) is the best thing that happened to modern application deployment, particularly on the cloud. There’s no doubt about it. The benefits of deploying on Kubernetes include improved engineering productivity, faster delivery of applications and a scalable infrastructure. Teams who were previously deploying 1–2 releases per year can deploy multiple times per month with Kubernetes! Think ‘agility’ and faster time to market.

Kubernetes is complex: OK, that’s an understatement – Kubernetes is insanely complex.

• Out of the box, Kubernetes is almost never enough for anyone. Metrics, logs, service discovery, distributed tracing, configuration and auto-scaling are all things your team needs to take care of.
• There’s a steep learning curve with Kubernetes for your team.
• Networking in Kubernetes is hard.
• Operating and tuning a Kubernetes cluster takes a lot of your team’s time away from development.

Given the enormous benefits of Kubernetes and the complexity involved, it’s almost always beneficial to choose a managed Kubernetes service. And remember – not all managed Kubernetes services are created equal; we’ll talk about that in a while.
Nonetheless, the lessons below are useful irrespective of your choice of Kubernetes service. Here’s the list of 10 lessons:

1 Stay up to date with Kubernetes releases

Kubernetes is evolving fast and releases rapidly. Make sure you have a plan to stay up to date with Kubernetes releases. It’s going to be time consuming, and it might involve occasional downtime, but be prepared to upgrade your Kubernetes clusters. As long as you do not change your application’s container image, Kubernetes releases themselves may not force a big change on your application, but the cluster configuration and application management will probably need changes when you upgrade Kubernetes. With Engine Yard, here are a few things we did:

• Planned for cluster upgrades and updates without application code changes and with minimal application downtime
• Created a test plan that we execute every time there is an upgrade or update on the cluster – just to make sure that we are addressing all aspects of cluster management
• Architected the product to run multiple versions of Kubernetes clusters. We don’t want to run multiple versions, because that’s a lot of work, but we know we’ll eventually run into a situation where we’d be forced to leave a particular version without upgrading, because upgrading might break the applications of several customers.

2 Set resource limits

Kubernetes is designed to share resources between applications, and resource limits define how resources are shared between them. Kubernetes doesn’t provide default resource limits out of the box. This means that unless you explicitly define limits, your containers can consume unlimited CPU and memory. These limits help K8s orchestrate correctly: (a) the K8s scheduler chooses the right node for the pod, (b) maximum resource allocation is capped, avoiding noisy-neighbor problems, and (c) pod preemption behavior is defined based on resource limits and available node capacity.
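In a pod spec, these requests and limits are declared per container. A minimal sketch (the name, image and values here are illustrative only, not Engine Yard's actual configuration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app               # hypothetical name
spec:
  containers:
    - name: web
      image: example/web:1.0   # hypothetical image
      resources:
        requests:
          cpu: "250m"          # cpu.request: reserved for this pod
          memory: "512Mi"      # memory.request: reserved for this pod
        limits:
          cpu: "500m"          # cpu.limit: throttled above this
          memory: "1Gi"        # memory.limit: terminated above this
```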
Here’s a brief description of the four resource parameters:

- cpu.request – minimum CPU requested by the pod. K8s reserves this CPU; it cannot be consumed by other pods even if the pod is not using all of the requested CPU.
- cpu.limit – if a pod hits the CPU limit set for it, Kubernetes will throttle it, preventing it from going above the limit, but it will not be terminated.
- memory.request – minimum memory requested by the pod. K8s reserves this memory; it cannot be consumed by other pods even if the pod is not using all of the requested memory.
- memory.limit – if a pod goes above its memory limit, it will be terminated.

There are other, more advanced concepts like ResourceQuota which help you set resource limits at the namespace level, but be sure to set at least the four parameters listed above. With Engine Yard, here’s how we set resource limits:

- We simplified these limits to a metric called Optimized Container Unit (OCU), which is equal to 1GB of RAM and a proportionate amount of CPU. Developers just need to decide the number of OCUs (vertical scaling) and the number of pods/containers (horizontal scaling).
- We automatically set and tune Kubernetes resource limits based on OCUs.
- We developed a system of predictive scaling that factors in the resource limits (via OCUs) and usage patterns – more on that later.

3 Watch out for pod start times

Whether your application is a standard application, serverless or follows a micro-services architecture, you always need to watch out for pod start times. Managed K8s platforms promise pod availability in around 2 seconds, but that’s far from reality when it comes to getting your application container up and running – the total time (including pod availability, pulling the Docker image, booting the application, etc.) can be minutes before you can expect your application to be available.
Steps involved in pod booting – courtesy Colt McAnlis (diagram omitted)

The best bet is to test your application’s start times and tune the infrastructure: K8s configuration, Docker image size and node availability. With Engine Yard:

- We provide standard ‘application stacks’ which are tuned for a particular runtime environment, such as the ‘Ruby v6 stack’. This allows us to tune these application stacks for faster start times and superior performance.
- Node availability is the single biggest factor that influences pod start times. We built a predictive scaling algorithm that scales the underlying nodes ahead of time.

4 AWS Load Balancer alone is not enough

Engine Yard runs on Amazon Web Services (AWS). For each of our EYK private clusters, there is an underlying AWS EKS cluster and a corresponding Elastic Load Balancer (ELB). We learned through experience that the AWS ELB alone is not enough because of the limited configuration options that ELB comes with. One of the critical limitations is that it can’t handle multiple vhosts. You can use either HAProxy or NGINX in addition to the ELB to solve this problem. You can also just use HAProxy or NGINX (without an ELB), but you would have to work around dynamic AWS IP addresses at the DNS level. With Engine Yard:

• We configured an NGINX-based load balancer in addition to the ELB that is standard in an Elastic Kubernetes Service (EKS) cluster.
• We generally prefer using AWS managed services because they scale better with less configuration, but in this case ELB’s limitations forced us to use a non-managed component (NGINX).
• When you set up NGINX as the second load balancer, don’t forget to configure auto-scaling of NGINX – if you don’t, NGINX will become your bottleneck for scaling traffic to your applications.

5 Set up your own logging

When you deploy your applications on Kubernetes, those applications run in a distributed and containerized environment. Implementing log aggregation is crucial to understanding application behavior.
Kubernetes does not provide a centralized logging solution out of the box, but it provides all the basic resources needed to implement such functionality. With Engine Yard, we use EFK for logging:

• We use Fluent Bit for distributed log collection
• Fluent Bit aggregates data into Elasticsearch
• Kibana helps you analyze the logs that are aggregated into Elasticsearch

6 Set up your own monitoring

Monitoring your applications and cluster is important, and Kubernetes does not come with built-in monitoring out of the box. With Engine Yard, we use:

• Prometheus for metrics and alerting
• Grafana for metrics visualization

7 Size and scale it right

There are two - well, maybe three - ways of scaling your K8s cluster:

- Cluster Autoscaling
- Vertical Sizing & Scaling
  - Pod Sizing
  - Vertical Pod Autoscaling
- Horizontal Pod Autoscaling

These are a bit tricky, but it's important to understand and configure them right.

Cluster Autoscaler automatically scales the number of nodes inside your cluster up or down. A node is a worker machine in Kubernetes. If pods are scheduled for execution, the Kubernetes Autoscaler can increase the number of nodes in the cluster to avoid resource shortage. It also deallocates idle nodes to keep the cluster at the optimal size. Scaling up is a time-sensitive operation; on average, it can take 4 to 12 minutes for your pods and cluster to scale up.

Pod Sizing is the process of defining the amount of resources (CPU and memory) you need for your pod (which equals the size of the container if you are keeping one container per pod) at the time of deployment. As we discussed under resource limits, you define the pod size with (1) min CPU, (2) max CPU, (3) min memory and (4) max memory. The most important thing is to make sure that your pod isn't smaller than what you'd need for a bare-minimum workload. If it's too small, you won't be able to serve any traffic because your application might run into out-of-memory errors.
If it's too big, you'll be wasting resources - you are better off enabling horizontal scaling. The right pod size is approximately [minimum + 20%]. Then, you'd apply horizontal scaling at that pod size.

Vertical Pod Autoscaling (VPA) allows you to dynamically resize a pod based on load. However, this is not a natural or common way to scale in K8s because of a few limitations:

• VPA destroys a pod and recreates it in order to vertically autoscale it. This defeats the purpose of scaling because of the disruption involved.
• You cannot use it with Horizontal Pod Autoscaling, and more often than not, Horizontal Pod Autoscaling is far more beneficial.

Horizontal Pod Autoscaling (HPA) allows you to scale the number of pods at a predefined pod size based on load. HPA is the natural way of scaling your applications in K8s. Two important things to consider for successful HPA are:

- Size the pod right - if the pod size is too small, pods can fail, and when that happens, it won't matter how much you scale horizontally.
- Pick your scaling metrics right - (a) average CPU utilization and (b) average memory utilization are the most common metrics, but you can use custom metrics too.

With Engine Yard:

• Engine Yard manages cluster autoscaling for its customers and we use custom-built predictive cluster scaling to minimize the time it takes to scale your pods.
• Each application configures its pod sizes in increments of 1GB of memory (we call these an Optimized Container Unit = OCU), which allows fine-grained control while making it simpler to size pods.
• We do not encourage Vertical Pod Autoscaling.
• We fully support and manage Horizontal Pod Autoscaling for our customers, including (a) CPU and memory based scaling and (b) custom metrics based scaling.

8 Not all Managed Kubernetes services are created equal

There are several offerings in the market that claim to be 'Managed Kubernetes' services. What you really need to understand is the level of managed services each of these offers.
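The Horizontal Pod Autoscaling discussed above is configured with a standard HPA manifest. A minimal sketch targeting average CPU utilization (the names and thresholds here are illustrative, not Engine Yard's defaults):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # the deployment whose pods get scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above 70% average CPU
```

Each replica keeps the predefined pod size; only the replica count changes between minReplicas and maxReplicas.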
In other words, what the service manages for you and what you are expected to manage yourself. For example, AWS Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE) position themselves as Managed Kubernetes services, but you cannot manage these K8s clusters without in-house DevOps expertise.

With Engine Yard:

• We designed the EYK product to be a NoOps Platform as a Service (PaaS) where our customers can focus on their applications and do not need in-house DevOps expertise. Our customers just deploy their code to EYK with git push and the platform takes care of the rest.
• We top it off with exceptional support (as Engine Yard always has) so that customers never have to worry about 'Ops'.

9 Set up a workflow to easily deploy and manage applications

What's a cluster worth if you cannot easily and continuously deploy applications to it? Describing apps, configuring services, configuring ingress, setting ConfigMaps etc. can be overwhelming to set up for each application. Make sure there is a workflow process / system for managing the lifecycle of applications deployed on your K8s cluster.

With Engine Yard:

• We add a developer-friendly layer to Kubernetes clusters. This layer makes it easy to deploy applications from source via git push, configure applications, create and roll back releases, manage domain names and SSL certificates, provide routing, and share applications with teams.
• Developers can access their cluster and applications via both a web based UI and a CLI.

10 Containerization is not as hard as you think

You do need to containerize your application in order to run it on Kubernetes, and containerizing from scratch is hard. The good news is that you don't necessarily need to start containerization from scratch. You can start with prebuilt container images and customize those. Start with public container registries such as Docker Hub or GitHub Container Registry to find a suitable image to start with.
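Customizing a prebuilt image as suggested above usually takes only a few lines. A minimal, illustrative Dockerfile sketch for a hypothetical Ruby app (the base image tag, port and commands are assumptions, not a recommendation from the article):

```dockerfile
# Start from a prebuilt public image instead of building from scratch
FROM ruby:3.2-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copy the application code
COPY . .

EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```

Keeping the dependency step in its own layer also helps with the start-time concerns from lesson 3, since smaller rebuilds mean smaller image pulls.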
With Engine Yard:

• We create 'Application Stacks', which are prebuilt container images suited to the most common application patterns.
• Based on the code in your repositories, Engine Yard can often recommend suitable Application Stacks for your application.
• As an additional benefit of building 'Application Stacks', we can tune these stacks for faster start times, better performance and greater scalability.
• Containerization, particularly when your team doesn't have DevOps expertise, can be hard despite all the guidance. That's why our support team helps our customers with the process.

One more thing! When you set up Kubernetes and its components, it's better to do it using Terraform scripts and Helm charts so that you can always reproduce the cluster (by running these scripts again) whenever you need it.

Enjoy all the goodness of Kubernetes and Containers!

Discussion (4)

Very informative.

Thank you, @pavanbelagatti. We have been producing such blogs for a long time. Read them here: blog.engineyard.com/

Omg, one of the most informative articles that I have read in a long time. Thx!

Thank you, @cermitio. Read more such blogs on our website: blog.engineyard.com/
The following is a guest post by Bryan Jones, the creator of CodeKit. I've been using CodeKit for a couple of years now and I've talked about it plenty. In my opinion it changed the game in front-end development, making it easy to use advanced tools that, while powerful, felt out of reach for many. Now CodeKit 2.0 is out, which has followed the landscape of front-end development, bringing us more powerful tools that are tough to pull off otherwise. Bryan is going to introduce it, and I'll interject here and there to share my thoughts, as I've been using 2.0 for the last month or so.

What is CodeKit?

CodeKit is an app that helps you build websites faster. It compiles all the cutting-edge languages like Sass, Less, Stylus and CoffeeScript. It live-refreshes your browsers. It combines, minifies and syntax-checks JavaScript. It even optimizes images. All stuff that speeds up both your website and your workflow. There are other ways to do these things, but CodeKit's mission is to take the pain out of the process. You drop your project folder onto the app and get to work. No JSON files to edit, nothing to install or download. No commands to memorize. It just works.

What's New in 2.0?

For starters, I hired a designer (Guy Meyer) so the UI no longer looks like it was repeatedly beaten with a DOS 5.1 manual. The new version is also 1,400% faster thanks to a bunch of optimizations, and it works a lot better in team environments. But what you really care about is how it can make you faster. So instead of listing every new feature, here are the top four that will make a difference right away:

1. Refresh Every Browser

Your website has to look good on lots of devices. You pull it up on an iPhone, an iPad, a Galaxy S3, Chrome, Firefox and even IE 11 on a PC. That's a lot of refresh buttons to click. CodeKit can do that for you. CodeKit will now live-refresh all of these devices and more. Make a change to your code and a split-second later, every device updates to show those changes.
No plugins, no complex configurations. It works even with advanced sites like WordPress and Drupal. Just click the Preview button in CodeKit and then copy the URL to your other devices. Once you see this in action, you won't work without it ever again.

Note from Chris: Not only does the page literally refresh when you change something like a template or JavaScript file, the page will do style injection for CSS changes (whether they came from a preprocessor or not), meaning designing for interactive states is a lot easier. CodeKit 1 could do style injection too, but now CodeKit has its own server built in (which can forward to MAMP or anything else if you prefer), meaning that literally any browser gets the refreshing and style injection.

2. Bower

Bower lets you quickly install over 7,000 components: jQuery, Modernizr, Bootstrap, even WordPress. Bower is now built into CodeKit, so all those resources are just two clicks away. Open the Assets area, select the components you want and click the cloud icon. CodeKit grabs the latest versions from the web, along with any required dependencies, and puts them right in your project. CodeKit also saves you a ton of work when it's time to update components. Just open the Assets area and choose the Installed tab. It'll show you the version of each component in your project and what the latest one available online is. Update them all with a single click, or pick and choose.

Note from Chris: while I haven't had a chance to use Bower a bunch yet, keeping front-end dependencies up to date is the #1 reason I want to.

3. Autoprefixer

Vendor prefixes: the CSS rules that only an IE6 engineer could love. Autoprefixer makes them painless, and it's now built into CodeKit. You just write standard CSS and Autoprefixer adds all the necessary vendor prefixes based on the latest information about each browser. It works seamlessly with Less, Sass and Stylus.
It's also totally configurable: just specify which browsers you need to support and it does the rest. Note from Chris: I think Autoprefixer is almost as big of a game changer as CodeKit itself, and they are a perfect match for each other. While I'm still a big fan of preprocessors, I'm no longer a fan of using them for prefixing. Autoprefixer is a much better way to handle it. You can learn more about it from it's creator here. 4. Libsass You're reading CSS-Tricks, so you probably write Sass. It takes a few seconds to compile, right? Not anymore. Flip on Libsass in CodeKit and your Sass compiles instantly. Libsass is a new Sass compiler written in C instead of Ruby, so it's like Justin-Beiber-tanking-his-billion-dollar-singing-career fast. Now, Libsass is a beta, and some advanced Sass features (like namespaces and the new 3.3 syntax additions) aren't supported yet. But Libsass is advancing rapidly and the goal is to reach complete parity by this summer. Unless you're doing really complex stuff, you can probably use it today and drastically speed up your work. (We used it on CodeKit's site and that one has some really bleeding-edge CSS going on.) Note from Chris: While Bryan correctly joked I prefer Sass, I don't care tremendously much what you use, because there are things that are very likeable about all the CSS preprocessors. One of the few strikes against Sass is that it's slow to compile compared to the JavaScript-based preprocessors. Libsass makes Sass the fastest as well, so that's pretty awesome (if you can use it). More Cool Stuff OK, I lied. There's way too many new features to stop at just four. Here's four more features you'll love: Source Maps CodeKit can create source maps for Sass, Less, CoffeeScript, JavaScript and TypeScript files. (By the way, CodeKit compiles TypeScript now.) Source maps let you see your original source code in the browser's web inspector instead of the compiled output, which makes debugging easy. 
Zurb Foundation There's now a "New Zurb Foundation Project" command that pulls down the latest version of Zurb Foundation from the web and sets it up automatically. This was a really common feature request. Hooks Need to run a custom AppleScript or Bash script when files in your project change? Maybe tell Transmit or Coda to sync to a remote server? Gzip some files? No problem. Just set up a Hook and CodeKit will run whatever you need. Note from Chris: It would be interesting to see it run Grunt or Gulp. Part of the beauty of Grunt is there are a zillion things it can do - things that can be super specific and probably aren't a good fit for a core CodeKit feature (e.g. the SVG stuff I described here). I'm not sure if mixing multiple build tools is a good idea or not, but it's worth thinking about. CoffeeScript Love If you write CoffeeScript, CodeKit has two new features you'll like. First, you can now prepend JavaScript files (like jQuery) directly to your CoffeeScript files. Do it with a drag-and-drop in the app, or an in-file statement. Either way, CodeKit combines it all into one minified JavaScript file. Secondly, CoffeeLint is built-in now, so you can syntax-check your CoffeeScript files before they ever compile. This is also handy for enforcing particular styles, like how many spaces a line should be indented. What's Next? The short answer is, "Whatever Chris Coyier asks for." The long answer is that I completely overhauled CodeKit's architecture so that adding new features no longer requires major surgery. I plan to move quickly and keep iterating. Jekyll support is at the top of my list. Scaffolding and templates are up there too. HTML minifiers. If-else and loops in the Kit language. As Tim Cook would say, "We have some exciting products in the pipeline." Get In Touch! I love hearing from people in the industry, even if they don't use CodeKit. (Grunt FTW!) Come have a look at our new website. 
I can't take credit; Guy Meyer designed and built it, but we'd really like to hear what you think, one professional to another. You can find me on Twitter: @bdkjones I’ve been using CodeKit 2 and it’s awesome. There is special upgrade pricing (which may not be obvious straight away) which makes it a real no brainer. If anyone here is concerned about support as it only has a single developer then don’t be – I had some small issues (all resolved) and Bryan replied within a few hours, despite being in a different time zone. This sounds fantastic, but you forgot to tell a price! Okay, saw the price on the website. But will there be a discount for version 1 users? There is a discount – go to purchase and it will ask you for your V1 license number. I think it’s $18. I didn’t find it entirely clear before I purchased either. Thank you JC! Upgrading is like when you first purchased it. There’s a set amount or options to give more. It starts at free and goes from there. How can you focus on discount when you pay nada for such an super-awesome product that will save you hours of work and make your pro life so much easier? We tend to know the price of everything and the value of nothing. CodeKit 2 comes with an Upgrade offer. But honestly—Bryan did an excellent job creating a new software that speeds you workflow dramatically. Talking about an amount as low as $20 (you may actually choose the amount yourself) for such a great tool is ridiculous. Excited to try these new features – Codekit IS my workflow currently I may try this out, though I am pretty darn high right now on the power of gulp.js (It kinda can just do all the things…). I get that this is supposed to streamline that kinda thing for folks who just want to frikkin roll some fat webz though. Any tool that gets more people into the power of sass, bower, etc – that’s a real plus regardless. I love Codekit, it’s totally worth buying and supporting. 
I found myself wasting far too much time with Grunt, Node, etc, but with Codekit it just does everything for me so I can focus on what I'm paid to do - design and code! The new version with Components is just brilliant. Thanks for all the work done Bryan, you're awesome! BTW, I've made a screencast review of Codekit 2.0 for a Russian-speaking audience if anyone is interested.

I wish there was a version for Windows too… From the FAQ: "I'm on Windows. What do you recommend I use to work with Less, Sass, etc? A Mac." ;)

Try Prepros: it's the closest thing on Windows that I have found to CodeKit's functionality.

I can't see that Codekit can do anything that can't be done with Sublime Text and Grunt. I have Autoprefixer, jshint, JPG and PNG optimization, JS minification and lots more, all running every time I press ctrl+s. Sublime is also the best and most extendable IDE I've ever used, it's cross-platform and it comes with no attitude ;-)

If you're on Windows (or even if you're on Mac) I would try gulp or grunt. My gulp setup compiles Sass, auto-prefixes CSS, minifies CSS, lints JS, concats JS, uglifies JS, style-injects into Chrome on stylesheet changes, and auto-refreshes on other file type changes (js, html, php, etc…). I'm going to be moving image optimization over to it soon, and start looking into building a few custom functions that I'd like to automate. I started out with Mixture, then Prepros, as Windows alternatives to Codekit. Gulp is way easier (probably Grunt too, but I haven't used it yet).

Sure. Yet I switched from Linux to Mac because I wanted to focus on my work, not fiddle around setting up a computer. Not everybody feels comfortable learning a new framework/task runner/build tool every day, while setting up CodeKit 2 is fun and done in a glance. If you feel happy using Sublime's Build on Save, that's absolutely OK. However, there's no reason to talk down great software that is an awesome tool for others.

PrePros is amazing, I love it!
Beautifully designed and very functional. It looks as though a lot of the new features in CodeKit 2 are features that Prepros has already had for months.

Great post, Codekit 2 is certainly a game-changer. Has been blowing me away every single day since its release. Bryan is on a different planet.

From their FAQ: "I'm on Windows. What do you recommend I use to work with Less, Sass, etc? A Mac." Wow, what a loser! FYI, even if there was a Windows version I'd avoid any software created by this guy. Just use Sublime and Grunt :)

Obvious sarcasm is rampant on that site. Did you ignore the fake reviews as well? The guy thrives on tongue-in-cheek humor; it doesn't make his product less valid.

No, I noticed those too, but even those were all Apple related. Look, lots of people have a PC/Mac bias, but it's childish, and you shouldn't let it work its way into your product's website. It just makes you look unprofessional, not funny. And although it might have been an attempt at humour, Windows users often have to put up with the misguided sense of superiority Apple users often exhibit and it gets tiring. It's his choice at the end of the day and he's choosing to lose a large part of his potential audience by letting his personal preferences get in the way of his business. All the more business for rival products.

So it's unacceptable for the author to make fun of Windows users a bit, but perfectly okay for you to call him a loser, despite the fact that this "loser" created an invaluable tool for many? Double standards, anyone? I'd love to see rival products like CodeKit for Windows.

"Wow, what a loser! FYI, even if there was a Windows version I'd avoid any software created by this guy." Oh, come on. I get being frustrated that there's no Windows version of CodeKit, and maybe the get-a-Mac sarcasm rubs you the wrong way (I say get a sense of humor on that one), but don't get all hyperbolic about it.

That's awesome. Didn't know the new version is out.
MAMP Pro was also revamped a couple of weeks ago, I believe. They revamped everything: the website (fully responsive), a new logo and the UI. I'm not too fond of the new UI in MAMP Pro; it's quite a bit bigger than the older one. Sublime 3 + MAMP Pro + Codekit = best tools ever.

MAMP Pro+: consider looking into Vagrant.

Yes. You had me at 2.0. I will now be in the process of giving you my money.

Enjoy!

How about a build that works with Linux? I know that most people tend to build on Macs, but a Mac is just a fancy GUI on top of Linux. So even if you hate Windows like the rest of the dev world, how about a GTK version?

If you're referring to the BSD layer (Darwin BSD) in Mac OS X, it's not Linux. If it were a command line only app, then yes, but it's not. There's lots more to OS X than BSD. A GTK version would be a total rewrite from scratch.

I will put another vote in for a Windows version… some of us use Windows… The feature set in this program is so great it even makes me consider buying a Mac just for this program… it just sounds so amazing… Looks like an amazing program…

Prepros is a pretty great Windows alternative.

Michael, time to make the jump. Less talk, more rock!

Same. Can't be pained into switching to an evil OS for one app though. Keep banking on an Ubuntu or Windows version. Prepros is great but has its issues.

Come to Mac for CodeKit, stay for the far superior *nix command line interface. And there are some other cool pieces of software exclusively on this OS, too! Like the fantastic font editor Glyphs or Sketch, the new screen design tool. Come to the other side of computing where it is fun!

I have been developing on a Windows machine for years, never used a Mac in my life. I bought a MBP with Retina maybe 2 weeks ago. I've never been happier. I always had a bias that Macs were overpriced and not really worth the money. That may have been true not too long ago, but just like KIA, Apple has really put some quality into this new line.
My 13″ MBP is hooked up via Thunderbolt ports to two 27-inch Dell IPS monitors at 2560 x 1440, plus the Macbook itself. So it's a 3-monitor setup that is just as fast as my gaming PC in terms of anything dev related, and the best part? I can unplug the cables and continue building sites on the toilet! The entire workflow, Vagrant server and files are right inside the MBP, and you can plug in the monitors for max productivity, or go wireless for max portability. No more having to sync files between my old laptop and my desktop. You can even close the MBP monitor and the "desktop" moves over to the 2 external monitors without the MBP going to sleep; hook up a mouse and keyboard to turn the MBP into a sleek little desktop tower that you can tuck away.

Thanks man! Using this daily. Really a game changer.

Recently purchased the previous version so the upgrade to this was free. Been waiting for Firefox auto-refresh but can't see it happening in this version either - at least the article indicates as much.

I picked this up a few months back and hot damn did it save me a lot of time. It's an amazing app and my upgrade is happening for sure. Thanks for an awesome program! By the way, if you're worried about an upgrade fee, don't. You can upgrade for free OR toss the maker a few extra dollars. He deserves it.

I just upgraded. I wish it kept my old folders that I had it detecting. It appears I have to re-add them all back again.

Oh no! That's no buenos! Thanks for the warning!

+1 for a Windows version, I'd be your first customer.

Installed…. SOOOO AWESOME! Took me 2 minutes to have my iPhone refreshing changes instantly. The Sass really does compile super fast too. Codekit is the reason why I keep putting off learning Grunt… But thus far Codekit has been great. I also love being able to point to local gem versions of Sass & Compass.

In the FAQ: I think you have a serious problem with Mac… How do you know someone uses a Mac? They tell you every fucking chance they get.
:) Hey ladies: Bryan here. Thanks for all the feedback, guys! To the folks concerned about my Apple fanboyism, I'd say two things:

1) I don't hate Windows at all, and I think some of Microsoft's latest work is actually a bit more innovative than what's coming out of Cupertino. I'd write CodeKit for Windows if I knew that platform. I just don't. So far, no good developer has offered to work with me to port CodeKit. But I am open to the idea.

2) You can't go to a baseball game and say, "This is ridiculous! The announcer is CLEARLY biased towards the home team! What a loser." Likewise, when you're reading my website there's clearly going to be a little playing to the home team in the copywriting. Getting upset about it makes no sense.

Anyway, thanks again! I like reading what everyone thinks!

Hello Bryan, thank you for the upgrade. There are just two things I don't like about it. First of all, I had to re-add all projects; I think auto-import from the previous version is a must-have. Second - it would be nice to have the option to choose whether I want to use the CodeKit server or just have auto-refresh in the browser (Chrome and Safari are just fine for me when developing). The CodeKit server is soooo slow. Yes - I have turned Internet Sharing OFF. It's just slow. When I load a website from somedomain.local (set in httpd-vhosts.conf and in /etc/hosts - I don't use MAMP or any similar apps) it works normally, but if I set it in CodeKit as the "external server address" it loads slowly - every file needs 5-15s to download :( It would be great if I could choose the method of browser refresh - the old direct refresh or the CodeKit server. For now I'm going back to 1.9.3 :( And one more thing - the font on your website has too little contrast. When I was reading the help page it hurt my eyes :( Sorry for this feedback. Regards, Darek

If you need a CodeKit => Coda hook for auto-uploading files, you'll find an example here.

Thanks Christian, that is handy.

Nice upgrade!
This is worth taking a look at over grunt/gulp now. libsass is my favourite improvement. Speed! However, after a few minutes of playing, I see that the Susy framework is asking for Compass to be installed. Susy 2 doesn't require Compass anymore - the reason I personally use it almost exclusively now.

What's wrong with Compass? =) I mean, it's not cool that CodeKit doesn't use the latest Susy, but Susy 2.0 still has some features dependent on Compass.

Nothing is wrong with Compass! I just don't need it personally. I prefer Autoprefixer over all of the vendor prefixes the Compass mixins contain. I also like my own library of mixins, and separating some into extends instead. It's the reason I like Susy: I get to choose how I want to do things. It's not a shot at Compass at all; if you like it, use it.

Which Susy features require Compass? The docs say otherwise. Anyway, I see you can get the latest version directly from the "Assets" tab in CodeKit. Also, if I remember correctly, another blocker is that libsass does not work with Compass yet.

Eric M. Suzanne (the creator of Susy) discusses the Compass dependency in this screencast, at 3:00. I believe it's about typography, vertical rhythm and debugging the vertical baseline. BTW, Libsass doesn't work with a lot of 3.3 features either =(

Looks pretty cool, but I'm not about to switch to Mac just for this. Ubuntu needs this.

I've been using CodeKit 1 and it's very good, and I want to upgrade to 2 as well.

Bower looks interesting - is it possible to add additional 3rd party dependencies that aren't featured in the 7,000 list?

I just upgraded from Codekit 1 and I think this version is great. It works just as easily as version 1: just drop your site folder on the app, set your output paths and you are ready to go. It all looks and works very smoothly, love it. The preview and refresh of the site via Bonjour on my iPad works fantastically. Thanks!
Will definitely check it out, lots of cool features :)

I had submitted an issue on his GitHub, however - a simple question about turning off the constant notifications… I turned off Growl notifications in OS X System Preferences, but the Codekit 2 notifications still kept coming up for every Sass change, and the author was pretty rude and closed the issue right away :/ It was a pissy way to interact with potential customers, but I think in this case the product still shines.

Is anyone else having issues with inconsistent page refresh on save? I'm working on WordPress stuff, running MAMP, and compiling Sass files through CodeKit 2. I only have one site active at a time in CodeKit, and I have an external server configured. Regardless, when I save a file it's like a 1 in 4 chance that CodeKit will refresh my browser. I've been seeing this in both Chrome & Firefox. Aside from that, I upgraded to CodeKit 2 as soon as I heard about it and I'm loving it. I ran into one issue and Bryan was quick to respond on Twitter and super cool getting me sorted out. CodeKit remains an indispensable tool for me. Thanks for your good work, Bryan.

There is one BIG LIMITATION to be aware of: I have the problem that the auto-refresh feature for Less/CSS does not work.

Great time-saving bit of software, looking forward to Jekyll support!!

Codekit 1.x was nice. Codekit 2.0+ looks/feels terrible; the UI feels like a bad Windows application. Hello Grunt!

I've been using CodeKit 1 for a while now and found it very useful when developing WordPress themes and HTML sites. I don't use it for Rails or Sinatra; I use asset pack and Grunt etc. Anyway, I was happy to pay for CodeKit 2 and want to help support it, but after having a play with it I don't like it. Maybe if you're a designer then it's an improvement, but as a developer I think it's doing too much. I have my own workflow and CK2 seems to be making too many decisions for me.
I also have issues with working directly from the bower_components directory due to the extra unnecessary cruft in production. I'm sure for the most part people will love all the extra features and automation, but not me! Sorry.

After buying Codekit 2 to support the author, I still use Codekit 1 just because the interface is easier on the eyes. I had to update Sass and Bourbon by opening the package contents of Codekit (1), but the new version's green on grey was just too much.

I've been using CodeKit for about a month now. I like it. But Prepros is just as cool. I think it's easier to set up and navigate as well. Not to mention the double benefit of Mac + Windows, making it easier no matter which of my laptops I use. Seriously, if you're a Windows user or a Mac user familiar with CodeKit then you should check it out! It's free and limited unless you pay up, but you'll get the picture if you've used CodeKit.
1.0 What is LINQ?

LINQ stands for Language INtegrated Query: a query language integrated with the Microsoft .NET languages, i.e. C#.NET, VB.NET, J#.NET etc. With LINQ you need not write or use an explicit Data Access Layer. Writing a Data Access Layer requires much proficiency, as it should be capable of at least:

- Efficient extraction (Select) / add / update / delete of data.
- Support for multiple databases: Oracle / SQL Server / MySQL etc.
- Transaction management.
- Logging / tracing.
- And many more.

LINQ enables you to use all the above features in a very simple and efficient way, with very little code.

2.0 Why LINQ? / What are the benefits of LINQ?

A simple architecture of any software is like:

LINQ vs. Without LINQ

3.0 What is a LINQ entity class?

A .NET class (or classes) which maps to / from the database. This class gives you the flexibility to access the database in a very efficient way. Usually the LINQ entity class contains as many partial classes as there are tables in the database. Each partial class contains properties matching the columns of the corresponding database table. An instance of the entity class acts as a single row.

4.0 How to generate a LINQ entity class?

Using the Visual Studio IDE: If you are using the Visual Studio IDE then it is very simple to create LINQ entity classes. Follow the steps below to create LINQ entity classes in your .NET WinForms project:

- Go to Start –> Microsoft Visual Studio 2008
- Once the VS 2008 IDE is launched, go to File –> New –> Project
- A "New Project" dialog will open. Select the Windows Forms Application template from the templates listed on the right side and click 'OK'. (Make sure you have selected the right language in the left panel and that .NET Framework 3.5 is selected at the top right.)
- This action will create a new Windows Forms project named "WindowsFormApplication1" with a default form, Form1.
- Now, in order to generate the LINQ entity class, right-click on the project node, i.e.
the WindowsFormApplication1 node in the tree on the right. - Select Add –> New Item. - A new "Add New Item" dialog will open. Select "LINQ to SQL Classes" from the templates listed on the right and click the Add button. - The above action will bring up the Object Relational Designer. Click on the Server Explorer link; this will open a Server Explorer on the left. Right-click on "Data Connections" and select "Add Connection...". - You will now see a new "Add Connection" dialog. Provide your database information, i.e. server / username / password / database name, and hit the OK button. - The above action will add your desired database connection as a child node of the "Data Connections" tree on the left. - Select all the tables available and drag them to the middle area. - You might get a dialog about saving sensitive information; you may choose 'No'. - You will now see the database diagram in the centre panel. Save the .dbml file and build if required. - You are now done with your entity class creation. Without using the Visual Studio IDE: In case you do not have the Visual Studio IDE, Microsoft .NET provides a simple utility, SQLMetal.exe, to generate the LINQ entity class. By default, the SQLMetal file is located at drive:\Program Files\Microsoft SDKs\Windows\vn.nn\bin. Follow the steps below to generate the LINQ entity class: - Start –> Run. - Type cmd and click the "OK" button.
- Go to the location drive:\Program Files\Microsoft SDKs\Windows\vn.nn\bin. - Type sqlmetal /server:<SERVER NAME> /database:<DATABASE NAME> /namespace:<NAMESPACE> /code:<GENERATED CODE LOCATION> /language:csharp. Example: sqlmetal /server:myserver /database:northwind /namespace:nwind /code:nwind.cs /language:csharp [screenshot: SQLMetal generating the LINQ entity class] Suppose we have a simple database containing three tables, with structure/relations as follows: [relational database diagram for the sample database] - Generate the LINQ entity class for the above database (see section 4.0). - Add the newly created entity class to your project. For a better architecture, the LINQ entity class should be placed into a separate class library. - Create an instance of the LINQ entity class. There are various overloads of the LINQ entity class constructor. As explained earlier, in order to access the database we first need to create an instance of the entity class. Below is the line of code which creates the instance of the entity class: DataClasses1DataContext objEntityClass = new DataClasses1DataContext(); 1.1 How to select all columns and all the records? 1.2 How to use a where clause? The same LINQ SQL may also be written with the help of a lambda expression, in only one line of code: Employee employee = _dataContext.Employees.Single(emp => emp.FirstName == "First Name"); 1.3 How to select particular columns only? If we wanted to select the above record without using LINQ, we would have to join the Employee table with the Department and Designation tables, and the SQL would look like: 1.7 How to use joins? 2.0 How to update a record? 3.0 How to delete a record? 4.0 How to use transactions with LINQ? 5.0 How to iterate / loop through the records? 6.0 How to execute or use stored procedures? 6.1 Generate an entity class for stored procedures. In order to use stored procedures with LINQ you need to create entity classes for the stored procedures, in the same way we created the entity class for the tables.
Follow the steps below to create the LINQ entity class for stored procedures: - Start –> Run. - Type cmd and click the "OK" button. - Go to the location drive:\Program Files\Microsoft SDKs\Windows\vn.nn\bin. - Type sqlmetal /server:<SERVER NAME> /database:<DATABASE NAME> /sprocs /namespace:<NAMESPACE> /code:<GENERATED CODE LOCATION> /language:csharp. Notes: 1. If you have created a database diagram then the above command will fail to generate the entity class for the stored procedures. You need to create a new table in your database with the name dtproperties. This table will contain the following columns: 2. The above class will also contain the system stored procedures. So far it has not been possible to avoid including system stored procedures; perhaps a future release of SQLMetal.exe will give us this flexibility. 3. Using /sprocs will generate the complete entity class, which will include the stored procedures as well as the database tables. 6.2 Execute stored procedures. Your newly created entity class will now contain a method with the same name as the stored procedure. You simply need to call the method
http://asp-net-controls.blogspot.com/p/linq.html
Back in 2012 I wrote a blog post on using Tor on Android which has proved quite popular over the years. These days, there is the OrFox browser, which is from The Tor Project and is likely the current best way to browse the web through Tor on your Android device. If you're still using the custom setup Firefox, I'd recommend giving OrFox a try – it's been working quite well for me. (This was published in an internal technical journal last week, and is now being published here. If you already know what Docker is, feel free to skim the first half.) Docker seems to be the flavour of the month in IT. Most attention is focussed on using Docker for the deployment of production services. But that's not all Docker is good for. Let's explore Docker, and two ways I use it as a software developer.

Docker: what is it?

Docker is essentially a set of tools to deal with containers and images. To make up an artificial example, say you are developing a web app. You first build an image: a file system which contains the app, and some associated metadata. The app has to run on something, so you also install things like Python or Ruby and all the necessary libraries, usually by installing a minimal Ubuntu and any necessary packages.1 You then run the image inside an isolated environment called a container. You can have multiple containers running the same image (for example, your web app running across a fleet of servers), and the containers don't affect each other. Why? Because Docker is designed around the concept of immutability. Containers can write to the image they are running, but the changes are specific to that container, and aren't preserved beyond the life of the container.2 Indeed, once built, images can't be changed at all, only rebuilt from scratch.
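The build-then-run workflow described above looks roughly like this in practice. This is a minimal sketch, not the author's setup: the app, the image name "mywebapp" and all paths are invented for illustration.

```shell
# Create a trivial "web app" and a Dockerfile describing its image.
mkdir -p app
echo 'print("hello from the container")' > app/main.py

cat > Dockerfile <<'EOF'
FROM ubuntu:15.04
RUN apt-get update && apt-get install -y python3
COPY app/ /srv/app/
CMD ["python3", "/srv/app/main.py"]
EOF

# Build the image once, then launch any number of containers from it.
# Each container's writes are private and vanish with the container:
#   docker build -t mywebapp .
#   docker run -d --name web1 mywebapp
#   docker run -d --name web2 mywebapp
```

The `docker build`/`docker run` lines are left as comments so the sketch can be read without a Docker daemon on hand.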
However, as well as enabling you to easily run multiple copies, another upshot of immutability is that if your web app allows you to upload photos, and you restart the container, your photos will be gone. Your web app needs to be designed to store all of the data outside of the container, sending it to a dedicated database or object store of some sort. Making your application Docker friendly is significantly more work than just spinning up a virtual machine and installing stuff. So what does all this extra work get you? Three main things: isolation, control and, as mentioned, immutability. Isolation makes containers easy to migrate and deploy, and easy to update. Once an image is built, it can be copied to another system and launched. Isolation also makes it easy to update software your app depends on: you rebuild the image with software updates, and then just deploy it. You don’t have to worry about service A relying on version X of a library while service B depends on version Y; it’s all self contained. Immutability also helps with upgrades, especially when deploying them across multiple servers. Normally, you would upgrade your app on each server, and have to make sure that every server gets all the same sets of updates. With Docker, you don’t upgrade a running container. Instead, you rebuild your Docker image and re-deploy it, and you then know that the same version of everything is running everywhere. This immutability also guards against the situation where you have a number of different servers that are all special snowflakes with their own little tweaks, and you end up with a fractal of complexity. Finally, Docker offers a lot of control over containers, and for a low performance penalty. Docker containers can have their CPU, memory and network controlled easily, without the overhead of a full virtual machine. 
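The CPU, memory and network controls mentioned above are exposed as flags on `docker run`. A rough sketch of the era's flags follows; the image name is invented, and the commands are only printed here so the example does not need a running Docker daemon:

```shell
# Print the docker invocations that apply per-container resource limits.
{
  echo 'docker run -m 512m mywebapp'            # cap memory at 512 MB
  echo 'docker run --cpuset-cpus 0,1 mywebapp'  # pin to CPUs 0 and 1
  echo 'docker run --cpu-shares 512 mywebapp'   # halve the default CPU weight (1024)
} > docker-limits.txt
cat docker-limits.txt
```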
This makes it an attractive solution for running untrusted executables.3 As an aside: despite the hype, very little of this is actually particularly new. Isolation and control are not new problems. All Unixes, including Linux, support 'chroots'. The name comes from "change root": the system call changes the process's idea of what the file system root is, making it impossible for it to access things outside of the new designated root directory. FreeBSD has jails, which are more powerful, Solaris has Zones, and AIX has WPARs. Chroots are fast and low overhead. However, they offer a much lower ability to control the use of system resources. At the other end of the scale, virtual machines (which have been around since ancient IBM mainframes) offer much better isolation than Docker, but with a greater performance hit. Similarly, immutability isn't really new: Heroku and AWS Spot Instances are both built around the model that you get resources in a known, consistent state when you start, but in both cases your changes won't persist. In the development world, modern CI systems like Travis CI also have this immutable or disposable model – and this was originally built on VMs. Indeed, with a little bit of extra work, both chroots and VMs can give the same immutability properties that Docker gives. The control properties that Docker provides are largely a result of leveraging some Linux kernel concepts, most notably something called namespaces. What Docker does well is not something novel, but the engineering feat of bringing together fine-grained control, isolation and immutability, and – importantly – a tool-chain that is easier to use than any of the alternatives. Docker's tool-chain eases a lot of pain points with regards to building containers: it's vastly simpler than chroots, and easier to customise than most VM setups. Docker also has a number of engineering tricks to reduce the disk space overhead of isolation.
So, to summarise: Docker provides a toolkit for isolated, immutable, finely controlled containers to run executables and services.

Docker in development: why?

I don't run network services at work; I do performance work. So how do I use Docker? There are two things I do with Docker: I build PHP 5, and do performance regression testing on PHP 7. They're good case studies of how isolation and immutability provide real benefits in development and testing, and how the Docker tool chain makes life a lot nicer than previous solutions.

PHP 5 builds

I use the isolation that Docker provides to make building PHP 5 easier. PHP 5 depends on an old version of Bison, version 2. Ubuntu and Debian long since moved to version 3. There are a few ways I could have solved this: Docker makes it easy to have a self-contained environment that has Bison 2 built from source, and to build my latest source tree in that environment. Why is Docker so much easier? Firstly, Docker allows me to base my container on an existing container, and there's an online library of containers to build from.4 This means I don't have to roll a base image with debootstrap or the RHEL/CentOS/Fedora equivalent. Secondly, unlike a chroot build process, which ultimately is just copying files around, a Docker build process includes the ability to both copy files from the host and run commands in the context of the image. This is defined in a file called a Dockerfile, and is kicked off by a single command: docker build. So, my PHP 5 build container loads an Ubuntu Vivid base container, uses apt-get to install the compiler, tool-chain and headers required to build PHP 5, then installs old Bison from source, copies in the PHP source tree, and builds it. The vast majority of this process – the installation of the compiler, headers and Bison – can be cached, so they don't have to be downloaded each time. And once the container finishes building, I have a fully built PHP interpreter ready for me to interact with.
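A Dockerfile for a build container along the lines described might look roughly like this. It is a sketch, not the author's actual file: the package list, Bison version and paths are my guesses.

```shell
# Write a sketch of a PHP 5 build container's Dockerfile.
cat > Dockerfile.php5 <<'EOF'
FROM ubuntu:15.04
# Compiler, tool-chain and headers; this layer is cached between builds
RUN apt-get update && apt-get install -y \
    build-essential autoconf libxml2-dev libssl-dev wget
# Old Bison from source, since the distro only ships version 3
RUN wget http://ftp.gnu.org/gnu/bison/bison-2.7.1.tar.gz && \
    tar xf bison-2.7.1.tar.gz && \
    cd bison-2.7.1 && ./configure && make && make install
# Copy in the PHP source tree and build it
COPY php-src/ /build/php-src/
WORKDIR /build/php-src
RUN ./buildconf --force && ./configure && make
EOF
# Kicked off with a single command:
#   docker build -f Dockerfile.php5 -t php5-build .
```

Everything above the COPY line caches, which is why only the PHP build itself re-runs when the source tree changes.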
I do, at the moment, rebuild PHP 5 from scratch each time. This is a bit sub-optimal from a performance point of view. I could alleviate this with a Docker volume, which is a way of sharing data persistently between a host and a guest, but I haven't been sufficiently bothered by the speed yet. However, Docker volumes are also quite fiddly, leading to the development of tools like docker-compose to deal with them. They are also prone to subtle and difficult-to-debug permission issues.

PHP 7 performance regression testing

The second thing I use Docker for takes advantage of the throwaway nature of Docker environments to prevent cross-contamination. PHP 7 is the next big version of PHP, slated to be released quite soon. I care about how that runs on POWER, and I preferably want to know if it suddenly deteriorates (or improves!). I use Docker to build a container with a daily build of PHP 7, and then I run a benchmark in it. This doesn't give me a particularly meaningful absolute number, but it allows me to track progress over time. Building it inside Docker means that I can be sure that nothing from old runs persists into new runs, thus giving me more reliable data. However, because I do want the timing data I collect to persist, I send it out of the container over the network. I've now been collecting this data for almost 4 months, and it's plotted below, along with a 5-point moving average. The most notable feature of the graph is the drop in benchmark time at about the middle. Sure enough, if you look at the PHP repository, you will see that a set of changes to improve PHP performance were merged on July 29: changes submitted by our very own Anton Blanchard.5

Docker pain points

Docker provides a vastly improved experience over previous solutions, but there are still a few pain points. For example: Docker was apparently written by people who had no concept that platforms other than x86 exist. This leads to major issues for cross-architectural setups.
For instance, Docker identifies images by a name and a revision. For example, ubuntu is the name of an image, and 15.04 is a revision. There's no ability to specify an architecture. So how do you specify that you want, say, a 64-bit, little-endian PowerPC build of an image versus an x86 build? There have been a couple of approaches, both of which are pretty bad. You could name the image differently: say, ubuntu_ppc64le. You can also just cheat and override the ubuntu name with an architecture-specific version. Both of these break some assumptions in the Docker ecosystem and are a pain to work with. Image building is incredibly inflexible. If you have one system that requires a proxy, and one that does not, you need different Dockerfiles. As far as I can tell, there are no simple ways to hook per-system changes into a generic Dockerfile. This is largely by design, but it's still really annoying when you have one system behind a firewall and one system out on the public cloud (as I do in the PHP 7 setup). Visibility into a Docker server is poor. You end up with lots of different, anonymous images and dead containers, and you end up needing scripts to clean them up. It's not clear what Docker puts on your file system, or where, or how to interact with it. Docker is still using reasonably new technologies. This leads to occasional weird, obscure and difficult-to-debug issues.6 Docker provides me with a lot of useful tools in software development: both in terms of building and testing. Making use of it requires a certain amount of careful design thought, but when applied thoughtfully it can make life significantly easier. There's some debate about how much stuff from the OS installation you should be using. You need to have key dynamic libraries available, but I would argue that you shouldn't be running long-running processes other than your application. You shouldn't, for example, be running an SSH daemon in your container.
(The one exception is that you must handle orphaned child processes appropriately: see ….) Considerations like debugging and monitoring the health of docker containers mean that this point of view is not universally shared.

Why not simply make them read only? You may be surprised at how many things break when running on a read-only file system. Things like logs and temporary files are common issues.

It is, however, easier to escape a Docker container than a VM. In Docker, an untrusted executable only needs a kernel exploit to get to root on the host, whereas in a VM you need a guest-to-host vulnerability, which is much rarer.

Anyone can upload an image, so this does require running untrusted code from the Internet. Sadly, this is a distinctly retrograde step when compared to the process of installing binary packages in distros, which are all signed by a distro's private key.

See ….

I hit this last week: …, although maybe that's my fault for running systemd on my laptop.

So today I saw Freestanding "Hello World" for OpenPower on Hacker News. Sadly Andrei hadn't been able to test it on real hardware, so I set out to get it running on a real OpenPOWER box. Here's what I did. Firstly, clone the repo, and, as mentioned in the README, comment out mambo_write. Build it. Grab op-build, and build a Habanero defconfig. To save yourself a fair bit of time, first edit openpower/configs/habanero_defconfig to answer n about a custom kernel source. That'll save you hours of waiting for git. This will build you a PNOR that will boot a Linux kernel with Petitboot. This is almost what you want: you need Skiboot, Hostboot and a bunch of the POWER-specific bits and bobs, but you don't actually want the Linux boot kernel.
Then, based on op-build/openpower/package/openpower-pnor/openpower-pnor.mk, we look through the output of op-build for a create_pnor_image.pl command, something like this monstrosity: PATH="/scratch/dja/public/op-build/output/host/bin:/scratch/dja/public/op-build/output/host/sbin:/scratch/dja/public/op-build/output/host/usr/bin:/scratch/dja/public/op-build/output/host/usr/sbin:/home/dja/bin:/home/dja/bin:/home/dja/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/opt/openpower/common/x86_64/bin" /scratch/dja/public/op-build/output/build/openpower-pnor-ed1682e10526ebd85825427fbf397361bb0e34aa/create_pnor_image.pl -xml_layout_file /scratch/dja/public/op-build/output/build/openpower-pnor-ed1682e10526ebd85825427fbf397361bb0e34aa/"defaultPnorLayoutWithGoldenSide.xml" -pnor_filename /scratch/dja/public/op-build/output/host/usr/powerpc64-buildroot-linux-gnu/sysroot/pnor/"habanero.pnor" -hb_image_dir /scratch/dja/public/op-build/output/host/usr/powerpc64-buildroot-linux-gnu/sysroot/hostboot_build_images/ -scratch_dir /scratch/dja/public/op-build/output/host/usr/powerpc64-buildroot-linux-gnu/sysroot/openpower_pnor_scratch/ -outdir /scratch/dja/public/op-build/output/host/usr/powerpc64-buildroot-linux-gnu/sysroot/pnor/ -payload /scratch/dja/public/op-build/output/images/"skiboot.lid" -bootkernel /scratch/dja/public/op-build/output/images/zImage.epapr -sbe_binary_filename "venice_sbe.img.ecc" -sbec_binary_filename "centaur_sbec_pad.img.ecc" -wink_binary_filename "p8.ref_image.hdr.bin.ecc" -occ_binary_filename /scratch/dja/public/op-build/output/host/usr/powerpc64-buildroot-linux-gnu/sysroot/occ/"occ.bin" -targeting_binary_filename "HABANERO_HB.targeting.bin.ecc" -openpower_version_filename /scratch/dja/public/op-build/output/host/usr/powerpc64-buildroot-linux-gnu/sysroot/openpower_version/openpower-pnor.version.txt Replace the -bootkernel arguement with the path to ppc64le_hello, e.g.: -bootkernel /scratch/dja/public/ppc64le_hello/ppc64le_hello Don’t forget to 
move it into place:

    mv output/host/usr/powerpc64-buildroot-linux-gnu/sysroot/pnor/habanero.pnor output/images/habanero.pnor

Then we can use skiboot's boot test script (written by Cyril and me, coincidentally!) to flash it:

    ppc64le_hello/skiboot/external/boot-tests/boot_test.sh -vp -t hab2-bmc -P <path to>/habanero.pnor

It's not going to get into Petitboot, so just interrupt it after it powers up the box and connect with IPMI. It boots, kinda:

    [11012941323,5] INIT: Starting kernel at 0x20010000, fdt at 0x3044db68 (size 0x11cc3)
    CPU0 not found?
    Pick your …
    1.42486|ERRL|Dumping errors reported prior to registration

Yes, it does wrap horribly. However, the big issue here is the "CPU0 not found?". Fortunately, we can fix this with a little patch to cpu_init in main.c to test for a PowerPC POWER8:

    cpu0_node = fdt_path_offset(fdt, "/cpus/cpu@0");
    if (cpu0_node < 0) {
            cpu0_node = fdt_path_offset(fdt, "/cpus/PowerPC,POWER8@20");
    }
    if (cpu0_node < 0) {
            printk("CPU0 not found?\n");
            return;
    }

This is definitely the wrong way to do this, but it works for now.
Now, correcting for weird wrapping, we get:

    Assuming default SLB size
    SLB size = 0x20
    TB freq = 512000000
    [13205442015,3] OPAL: Trying a CPU re-init with flags: 0x2
    Unrecoverable exception stack top @ 0x20019EC8
    HTAB (2048 ptegs, mask 0x7FF, size 0x40000) @ 0x20040000
    SLB entries:
    1: E 0x8000000 V 0x4000000000000400
    EA 0x20040000 -> hash 0x20040 -> pteg 0x200 = RA 0x20040000
    EA 0x20041000 -> hash 0x20041 -> pteg 0x208 = RA 0x20041000
    EA 0x20042000 -> hash 0x20042 -> pteg 0x210 = RA 0x20042000
    EA 0x20043000 -> hash 0x20043 -> pteg 0x218 = RA 0x20043000
    EA 0x20044000 -> hash 0x20044 -> pteg 0x220 = RA 0x20044000
    EA 0x20045000 -> hash 0x20045 -> pteg 0x228 = RA 0x20045000
    EA 0x20046000 -> hash 0x20046 -> pteg 0x230 = RA 0x20046000
    EA 0x20047000 -> hash 0x20047 -> pteg 0x238 = RA 0x20047000
    EA 0x20048000 -> hash 0x20048 -> pteg 0x240 = RA 0x20048000
    ...

The weird wrapping seems to be caused by NULLs getting printed to OPAL, but I haven't traced what causes that. Anyway, now it largely works! Here's a transcript of some things it can do on real hardware:

    e> Testing exception handling...
       sc(feed) => 0xFEEDFACE
    t> EA 0xFFFFFFF000 -> hash 0xFFFFFFF -> pteg 0x3FF8 = RA 0x20010000
       mapped 0xFFFFFFF000 to 0x20010000 correctly
       EA 0xFFFFFFF000 -> hash 0xFFFFFFF -> pteg 0x3FF8 = unmap
       EA 0xFFFFFFF000 -> hash 0xFFFFFFF -> pteg 0x3FF8 = RA 0x20011000
       mapped 0xFFFFFFF000 to 0x20011000 incorrectly
       EA 0xFFFFFFF000 -> hash 0xFFFFFFF -> pteg 0x3FF8 = un
    u> EA 0xFFFFFFF000 -> hash 0xFFFFFFF -> pteg 0x3FF8 = RA 0x20080000
       returning to user code
       returning to kernel code
       EA 0xFFFFFFF000 -> hash 0xFFFFFFF -> pteg 0x3FF8 = unmap

I also tested the other functions and they all seem to work. Running non-privileged code with the MMU on works. Dumping the FDT and the 5s delay both worked, although they tend to stress IPMI a lot. The delay seems to correspond well with real time as well.
It does tend to error out and reboot quite often, usually on the menu screen, for reasons that are not clear to me. It usually starts with something entirely uninformative from Hostboot, like this:

    1.41801|ERRL|Dumping errors reported prior to registration
    2.89873|Ignoring boot flags, incorrect version 0x0

That may be easy to fix, but again I haven't had time to trace it. All in all, it's very exciting to see something come out of the simulator and into real hardware. Hopefully with the proliferation of OpenPOWER hardware, prices will fall and these sorts of systems will become increasingly accessible to people with cool low-level projects like this! The way autoboot behaves in Petitboot has undergone some significant changes recently, so in order to ward off any angry emails let's take a quick tour of how the new system works.

Old & Busted

For some context, here is the old (or current, depending on what you're running) section of the configuration screen. This gives you three main options: don't autoboot, autoboot from anything, or autoboot only from a specific device. For the majority of installations this is fine, such as when you have only one default option, or know exactly which device you'll be booting from. A side note about default options: it is important to note that not all boot options are valid autoboot options. A boot option is only considered for auto-booting if it is marked default, eg. 'set default' in GRUB and 'default' in PXE options.

New Hotness

Below is the new autoboot configuration. The new design allows you to specify an ordered list of autoboot options. The last two of the three buttons are self-explanatory - clear the list and autoboot any device, or clear the list completely (no autoboot). Selecting the first button, 'Add Device', brings up the following screen: From here you can select any device or class of device to add to the boot order.
Once added to the boot order, the order of boot options can be changed with the left and right arrow keys, and options can be removed from the list with the minus key ('-'). This allows you to create additional autoboot configurations such as "Try to boot from sda2, otherwise boot from the network", or "Give priority to PXE options from eth0, otherwise try any other netboot option". You can retain the original behaviour by only putting one option into the list (either 'Any Device' or a specific device). Presently you can add any option into the list and order them how you like - which means you can do silly things like this:

IPMI

Slightly prior to the boot order changes, Petitboot also received an update to its IPMI handling. IPMI 'bootdev' commands allow you to override the current autoboot configuration remotely, either by specifying a device type to boot (eg. PXE), or by forcing Petitboot to boot into the 'setup' or 'safe' modes. IPMI overrides are either persistent or non-persistent. A non-persistent override will disappear after a successful boot - that is, a successful boot of a boot option, not booting to Petitboot itself - whereas a persistent override will, well, persist! If there is an IPMI override currently active, it will appear in the configuration screen with an option to manually clear it: That sums up the recent changes to autoboot; a bit more flexibility in assigning priority, and options for more detailed autoboot order if you need it. New versions of Petitboot are backwards compatible and will recognise older saved settings, so updating your firmware won't cause your machines to start booting things at random. (I wrote this blog post a couple of months ago, but it's still quite relevant.) Hi, I'm Daniel! I work in OzLabs, part of IBM's Australian Development Labs. Recently, I've been assigned to the CAPI project, and I've been given the opportunity to give you an idea of what this is, and what I'll be up to in the future!

What even is CAPI?
To help you understand CAPI, think back to the time before computers. We had a variety of machines: machines to build things, to check things, to count things, but they were all specialised — good at one and only one thing. Specialised machines, while great at their intended task, are really expensive to develop. Not only that, it's often impossible to change how they operate, even in very small ways. Computer processors, on the other hand, are generalists. They are cheap. They can do a lot of things. If you can break a task down into simple steps, it's easy to get them to do it. The trade-off is that computer processors are incredibly inefficient at everything. Now imagine, if you will, that a specialised machine is a highly trained and experienced professional, and a computer processor is a hungover university student. Over the years, we've tried lots of things to make the student faster. Firstly, we gave the student lots of caffeine to make them go as fast as they can. That worked for a while, but you can only give someone so much caffeine before they become unreliable. Then we tried teaming the student up with another student, so they can do two things at once. That worked, so we added more and more students. Unfortunately, lots of tasks can only be done by one person at a time, and team-work is complicated to co-ordinate. We've also recently noticed that some tasks come up often, so we've given the students some tools for those specific tasks. Sadly, the tools are only useful for those specific situations. Sometimes, what you really need is a professional. However, there are a few difficulties in getting a professional to work with uni students. They don't speak the same way; they don't think the same way, and they don't work the same way. You need to teach the uni students how to work with the professional, and vice versa. Previously, developing this interface – this connection between a generalist processor and a specialist machine – has been particularly difficult.
The interface between processors and these specialised machines – known as accelerators – has also tended to suffer from bottlenecks and inefficiencies. This is the problem CAPI solves. CAPI provides a simpler and more optimised way to interface specialised hardware accelerators with IBM's most recent line of processors, POWER8. It's a common 'language' that the processor and the accelerator talk, which makes it much easier to build the hardware side and easier to program the software side. In our Canberra lab, we're working primarily on the operating system side of this. We are working with some external companies who are building CAPI devices and the optimised software products which use them. From a technical point of view, CAPI provides coherent access to system memory and processor caches, eliminating a major bottleneck in using external devices as accelerators. This is illustrated really well by the following graphic from an IBM promotional video. In the non-CAPI case, you can see there's a lot of data (the little boxes) stalled in the PCIe subsystem, whereas with CAPI, the accelerator has direct access to the memory subsystem, which makes everything go faster.

Uses of CAPI

CAPI technology is already powering a few really cool products. Firstly, we have an implementation of Redis that sits on top of flash storage connected over CAPI. Or, to take out the buzzwords, CAPI lets us do really, really fast NoSQL databases. There's a video online giving more details. Secondly, our partner Mellanox is using CAPI to make network cards that run at speeds of up to 100Gb/s. CAPI is also part of IBM's OpenPOWER initiative, where we're trying to grow a community of companies around our POWER system designs. So in many ways, CAPI is both a really cool technology, and a brand new ecosystem that we're growing here in the Canberra labs. It's very cool to be a part of! I wrote this blog post late last year; it is very relevant for this blog, though, so I'll repost it here.
With the launch of TYAN's OpenPOWER reference system, now is a good time to reflect on the team responsible for so much of the research, design and development behind this very first ground-breaking step of OpenPOWER, with their start-to-finish involvement in this new Power platform. ADL Canberra have been integral to the success of this launch, providing the Open Power Abstraction Layer (OPAL) firmware. OPAL breathes new life into Linux on Power, finally allowing Linux to run directly on the hardware. While OPAL harnesses the hardware, ADL Canberra significantly improved Linux to sit on top and take direct control of IBM's new POWER8 processor without needing to negotiate with a hypervisor. With all the Linux expertise present at ADL Canberra, it's no wonder that a Linux-based bootloader was developed to make this system work. Petitboot leverages all the resources of the Linux kernel to create a light, fast and yet extremely versatile bootloader. Petitboot provides a massive number of tools for debugging and system configuration without the need to load an operating system. TYAN have developed great and highly customisable hardware. ADL Canberra have been there since day 1, performing vital platform enablement (bringup) of this new hardware. ADL Canberra have put in work across the entire software stack: low-level work to get OPAL and Linux to talk to the new BMC chip, as well as higher-level enabling to run Linux in either endian, and Linux is even now capable of virtualising KVM guests in either endian irrespective of host endian. Furthermore, a subset of ADL Canberra have been key to getting the Coherent Accelerator Processor Interface (CAPI) off the ground, enabling almost endless customisation and greater diversity within the OpenPOWER ecosystem. ADL Canberra is the home of Linux on Power, and the beginning of the OpenPOWER hardware sees much of the hard work by ADL Canberra come to fruition.
Let’s say your child is currently a classic consumer – they love watching TV and reading books, but they don’t really enjoy making things themselves. Or maybe they are making some things, but it’s not really technological. We think any kind of making is awesome, but one of our favourite kinds is the kind where kids realise that they can build and influence the world around them. There’s an awesome Steve Jobs quote that I love about exactly this. Imagine if you can figure this out as a child.

Recently, there have been (actually two) ports of TianoCore (the reference implementation of UEFI firmware) to run on POWER on top of OPAL (provided by skiboot) – and it can be run in the Qemu PowerNV model. More details:

The upper section recovers the 1s and 0s from the input audio. The lower section deals with framing and plucking out valid telemetry packets. A couple of interesting features:
https://luv.asn.au/aggregator?page=5
Great summary. Two comments: 1) I think we (anyone needing to move drives between Mac and PC) should write to Apple and Microsoft and ask for better support for this. Specifically, Apple needs to (license the tech if necessary) allow formatting and writing NTFS. 2) I have a few external drives that I swap on a regular basis between my Mac and PC. They are large drives (120GB, 250GB, 400GB). I format those as FAT32, but the key is to format them as FAT32 *on the Mac*, if you want a single full-sized partition. In other words, my iTunes collection is about 250GB. Windows XP limits the maximum partition size to 32GB when formatting. However, I can format the drive as a single 250GB FAT32 partition in OS X, copy my entire library, and then Windows XP will handle it just fine.

Writing to NTFS has plagued Linux users for some time now, as the file system is almost entirely proprietary. There are a few programs for Linux that can write to NTFS using the Windows DLL driver, but this would not work on Mac (unless it could be recompiled for PPC, or support could be added for Intel Macs with some kind of Wine support installed). I would say that Mac doesn't have much of a choice until MS decides to license their file system for other OSes to use, and knowing MS's giving nature with their proprietary code... well, let's just say we may have to wait until someone hacks their code first...
--- Jayson -- When Microsoft asks you, "Where do you want to go today?" tell them "Apple."

Windows does not have a max partition size on FAT32 of 32GB! FAT16 has got a max limit, but FAT32 does NOT! But FAT32 has a file size limit of 4GB.

The lack of write capability for NTFS volumes is a real thorn in the side of OS X. Currently Windows and OS X don't share write capability on any sort of "professional" disk format. If you use FAT32 you will quickly run into the max file size limit (especially if you do video work). This is really a problem with most any non-MS OS.
it's just hideously dangerous to do writes. "The transfer would always choke somewhere in between." I've never understood why OS X "prechecks" file transfers for things like available space, but doesn't check the file names.

“Substantial effort” my ahhhh... foot. A simple regular expression is all you need. It's the sort of thing a first-semester programming student can handle. Cheers, b&

Please supply this "simple regular expression" for all likely file system candidates, current, past and future. Not only will remote systems have differing filename conventions (lengths, allowable characters, etc.), but then you'd have to check every single file before copying to see if it will work. Keeping track of space needed is simple: just keep a long long of the total bytes needed. Checking the file names would either require creating a buffer of all of the names (fast, but memory-intensive), or checking each one as the algorithm goes (slow, but easier on memory). Either way, it will hurt performance *significantly*. Why yes, I *do* have a BA in Computer Science.

MOUNT_NTFS(8)            BSD System Manager's Manual            MOUNT_NTFS(8)

NAME
     mount_ntfs -- mount an NTFS file system

SYNOPSIS
     mount_ntfs [-a] [-s] [-u uid] [-g gid] [-m mask] special node

DESCRIPTION
     The mount_ntfs utility attaches the NTFS file system residing on the
     device special to the global file system namespace at the location
     indicated by node. This command is normally executed by mount(8) at
     boot time, but can be used by any user to mount an NTFS file system on
     any directory that they own (provided, of course, that they have
     appropriate access to the device that contains the file system).
     The options are as follows:

     -a      Force behaviour to return MS-DOS 8.3 names also on readdir().
     -s      Make name lookup case sensitive.

WRITING
     There is limited writing ability. Limitations: file must be nonresident
     and must not contain any sparces (uninitialized areas); compressed
     files are also not supported.

SEE ALSO
     mount(2), unmount(2), fstab(5), mount(8), mount_msdosfs(8)

CAVEATS
     This utility is primarily used for read access to an NTFS volume. See
     the WRITING section for details about writing to an NTFS volume.

HISTORY
     The mount_ntfs utility first appeared in FreeBSD 3.0.

AUTHORS
     The NTFS kernel implementation, mount_ntfs utility, and manual were
     written by Semen Ustimenko <semenu@FreeBSD.org>.

BSD                           November 11, 2004                           BSD

--- k

... WRITING There is limited writing ability. Limitations: file must be nonresident and must not contain any sparces (uninitialized areas); compressed files are also not supported.

I submitted the hint more than two months ago [but it was published only now because there were apparently some problems with the hint queue on the site - Rob is aware of this and wants to publish soon any other hints that have experienced similar problems as mine]. Thus, and because I don't have 10.4.1 anymore, I am not totally sure, but I believe I saw the same in the manual at that time, though I decided not to try it. I had done some research on the net about NTFS etc. and saw that Linux has been struggling with writing support for a while and that NTFS is a rather advanced file system (many features). My guess was that Apple probably wouldn't be ahead of all those Linux hackers, so I decided that writing support would probably not work reliably on OS X. Personally, I won't try it out because it seems too risky to me (and I don't recommend anyone trying it with important data). Interesting.
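To make the "simple regular expression" exchange earlier in this thread concrete: the per-file-system check really is a short pattern once you fix a single target file system, but the posters are right that no one pattern covers every file system, past or future. A hedged Python sketch for one common target, names that Windows file systems (FAT32/NTFS) reject; the function name and the choice of target are my own:

```python
import re

# Characters Windows file systems refuse in a file name, plus ASCII control
# characters. Other file systems (classic Mac OS, ISO 9660, ...) have
# different rules, which is the objection raised in the thread above.
WINDOWS_BAD_CHARS = re.compile(r'[\\/:*?"<>|\x00-\x1f]')

def ok_on_windows_volume(name):
    """Cheap per-name check: True if `name` is plausibly legal on FAT32/NTFS."""
    if not name or name in (".", ".."):
        return False
    if len(name) > 255:  # common per-name length ceiling
        return False
    return not WINDOWS_BAD_CHARS.search(name)
```

This is the "check each one as the algorithm goes" option described above; a regex scan per name is trivial next to the disk I/O of the copy itself, so the predicted performance hit is debatable.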
Hello All. Well, I'm running 10.4.3 and NTFS support is somewhat lacking :( While System Profiler shows the external FireWire hard drive as an NTFS volume, there is no way for me to access it. Thanks for all the hints/tips, got any more?
http://hints.macworld.com/article.php?story=20050521110452194&lsrc=osxh
The QStyle class is an abstract base class that encapsulates the look and feel of a GUI. More...

#include <QStyle>

Inherits QObject. Inherited by QCommonStyle.

The QStyle class is an abstract base class that encapsulates the look and feel of a GUI. Qt contains a set of QStyle subclasses that emulate the styles of the different platforms supported by Qt (QWindowsStyle, QMacStyle, QMotifStyle, etc.). By default, these styles are built into the QtGui library. Styles can also be made available as plugins. Qt's built-in widgets use QStyle to perform nearly all of their drawing, ensuring that they look exactly like the equivalent native widgets. The diagram below shows a QComboBox in eight different styles. Topics:

The style of the entire application can be set using QApplication::setStyle(). It can also be specified by the user of the application, using the -style command-line option:

./myapplication -style motif

If no style is specified, Qt will choose the most appropriate style for the user's platform or desktop environment. A style can also be set on an individual widget using QWidget::setStyle(). If you are developing custom widgets and want them to look good on all platforms, you can use QStyle functions to perform parts of the widget drawing, such as drawItem(), drawPrimitive(), drawControl(), and drawComplexControl(). Most QStyle draw functions take four arguments: For example, if you want to draw a focus rectangle on your widget, you can write:

void MyWidget::paintEvent(QPaintEvent * /* event */)
{
    QPainter painter(this);
    QStyleOptionFocusRect option;
    option.init(this);
    style()->drawPrimitive(QStyle::PE_FrameFocusRect, &option, &painter, this);
}

If you want to design a custom look and feel for your application, the first step is to pick one of the base styles provided with Qt to build your custom style from.
The choice will depend on which existing style resembles your style the most. For example, to build on the Windows look, derive from QWindowsStyle:

class CustomStyle : public QWindowsStyle
{
    Q_OBJECT

public:
    CustomStyle() {}
    ~CustomStyle() {}

    void drawPrimitive(PrimitiveElement element, const QStyleOption *option,
                       QPainter *painter, const QWidget *widget) const;
};

The PE_IndicatorSpinUp and PE_IndicatorSpinDown primitive elements are used by QSpinBox to draw its up and down arrows. Here's how to reimplement drawPrimitive() to draw them differently:

void CustomStyle::drawPrimitive(PrimitiveElement element, const QStyleOption *option,
                                QPainter *painter, const QWidget *widget) const
{
    if (element == PE_IndicatorSpinUp || element == PE_IndicatorSpinDown) {
        QPolygon points(3);
        // ... compute the three corners of the arrow from option->rect ...
        if (option->state & State_Enabled) {
            painter->setPen(option->palette.mid().color());
            painter->setBrush(option->palette.buttonText());
        } else {
            painter->setPen(option->palette.buttonText().color());
            painter->setBrush(option->palette.mid());
        }
        painter->drawPolygon(points);
    } else {
        QWindowsStyle::drawPrimitive(element, option, painter, widget);
    }
}

Notice that we don't use the widget argument, except to pass it on to QWindowsStyle::drawPrimitive(). If you do need the widget, cast it with qobject_cast:

QSpinBox *spinBox = qobject_cast<QSpinBox *>(widget);
if (spinBox) {
    ...
}

When implementing a custom style, you cannot assume that the widget is a QSpinBox just because the enum value is called PE_IndicatorSpinUp or PE_IndicatorSpinDown. The documentation for the Styles example covers this topic in more detail. There are several ways of using a custom style in a Qt application. The simplest way is to call the QApplication::setStyle() static function before creating the QApplication object:

#include <QtGui>

To make your style available for use in other applications, some of which may not be yours, you can compile it as a plugin and install it in $QTDIR/plugins/styles. We now have a pluggable style that Qt can load automatically. To use your new style with existing applications, simply start the application with the following argument:

./myapplication -style custom

See also QStyleOption and QStylePainter. This enum represents a ComplexControl.
ComplexControls have different behavior depending upon where the user clicks on them or which keys are pressed. See also SubControl and drawComplexControl(). This enum represents a ContentsType. It is used to calculate sizes for the contents of various widgets. See also sizeFromContents(). This enum represents a ControlElement. A ControlElement is part of a widget that performs some action or displays information to the user. See also drawControl(). This enum represents a PixelMetric. A PixelMetric is a style dependent size represented as a single pixel value. See also pixelMetric(). This enum represents a style's PrimitiveElements. A PrimitiveElement is a common GUI element, such as a checkbox indicator or button bevel. See also drawPrimitive(). This enum represents a StandardPixmap. A StandardPixmap is a pixmap that can follow some existing GUI style or guideline. See also standardPixmap(). This enum represents flags for drawing PrimitiveElements. Not all primitives use all of these flags. Note that these flags may mean different things to different primitives. The State type is a typedef for QFlags<StateFlag>. It stores an OR combination of StateFlag values. See also drawPrimitive(). This enum represents a StyleHint. A StyleHint is a general look and/or feel hint. See also styleHint(). This enum represents a SubControl within a ComplexControl. The SubControls type is a typedef for QFlags<SubControl>. It stores an OR combination of SubControl values. See also ComplexControl. This enum represents a sub-area of a widget. Style implementations use these areas to draw the different parts of a widget. See also subElementRect(). Constructs a style object. Destroys the style object. Returns a new rectangle of size that is aligned to rectangle according to alignment and based on direction. Draws the ComplexControl control using painter with the style options specified by option. The widget argument is optional and may contain a widget to aid in drawing control. 
The option parameter is a pointer to a QStyleOptionComplex structure that can be cast to the correct structure. Note that the rect member of option must be in logical coordinates. Reimplementations of this function should use visualRect() to change the logical coordinates into screen coordinates before calling drawPrimitive() or drawControl(). Here is a table listing the elements and what they can be cast to, along with an explanation of the flags. See also ComplexControl, SubControl, and QStyleOptionComplex. Draws the ControlElement element with painter with the style options specified by option. The widget argument is optional and may contain a widget that may aid in drawing the control. What follows is a table of the elements and the QStyleOption structure the option parameter can cast to. The flags stored in the QStyleOption state variable are also listed. If a ControlElement is not listed here, it uses a plain QStyleOption. See also ControlElement, State, and QStyleOption. Draws the pixmap in rectangle rect with alignment alignment using painter. Draws the text in rectangle rect using painter and palette pal. Text is drawn using the painter's pen. If an explicit textRole is specified, then the text is drawn using the color specified in pal for the specified role. The enabled bool indicates whether or not the item is enabled; when reimplementing this bool should influence how the item is drawn. The text is aligned and wrapped according to alignment. See also Qt::Alignment. Draw the primitive option elem with painter using the style options specified by option. The widget argument is optional and may contain a widget that may aid in drawing the primitive. What follows is a table of the elements and the QStyleOption structure the option parameter can be cast to. The flags stored in the QStyleOption state variable are also listed. If a PrimitiveElement is not listed here, it uses a plain QStyleOption.
The QStyleOption is the following for the following types of PrimitiveElements. See also PrimitiveElement, State, and QStyleOption. Returns a pixmap styled to conform to iconMode description out of pixmap, and taking into account the palette specified by option. The option can pass extra information, but it must contain a palette. Not all types of pixmaps will change from their input in which case the result will simply be the pixmap passed in. Returns the SubControl in the ComplexControl control with the style options specified by option at the point pos. The option argument is a pointer to a QStyleOptionComplex structure or one of its subclasses. The structure can be cast to the appropriate type based on the value of control. See drawComplexControl() for details. The widget argument is optional and can contain additional information for the functions. Note that pos is expressed in screen coordinates. See also drawComplexControl(), ComplexControl, SubControl, subControlRect(), and QStyleOptionComplex. Returns the appropriate area within rectangle rect in which to draw the pixmap with alignment defined in alignment. Returns the appropriate area (see below) within rectangle rect in which to draw text using the font metrics metrics. The text is aligned in accordance with alignment. The enabled bool indicates whether or not the item is enabled. If rect is larger than the area needed to render the text the rectangle that is returned will be offset within rect in accordance with the alignment alignment. For example, if alignment is Qt::AlignCenter, the returned rectangle will be centered within rect. If rect is smaller than the area needed, the rectangle that is returned will be larger than rect (the smallest rectangle large enough to render the text). See also Qt::Alignment. Returns the pixel metric for the given metric. The option and widget can be used for calculating the metric. The option can be cast to the appropriate type based on the value of metric.
Note that option may be zero even for PixelMetrics that can make use of option. See the table below for the appropriate option casts: In general, the widget argument is not used. Initializes the appearance of widget. This function is called for every widget at some point after it has been fully created but just before it is shown for the very first time. Reasonable actions in this function might be to call QWidget::setBackgroundMode() for the widget. An example of highly unreasonable use would be setting the geometry! Reimplementing this function gives you a back-door through which you can change the appearance of a widget. With Qt 4.0's style engine you will rarely need to write your own polish(); instead reimplement drawItem(), drawPrimitive(), etc.
http://doc.trolltech.com/4.0/qstyle.html
#include <hallo.h>
peter karlsson wrote on Sun, Nov 26, 2000 at 02:10:44PM:

> I have a program that is building against Qt 2.1 that I would like to
> upload, but I notice that only Qt 2.2 is available in woody, and I
> would prefer not to install Qt 2.2, since it requires XFree86 4.0.1,
> which removes a number of programs that I use regularly.
>
> Should I
>
> 1) upload the package linked against Qt 2.1?
> 2) find a "bleeding-edge" woody i386 debian.org machine that I can build on?
> 3) only upload the sources?

4) get the source of libqt2.2 and recompile it in your environment. But don't forget, this is not the best solution. You should develop your packages in the same environment as the package will later run in.
https://lists.debian.org/debian-mentors/2000/11/msg00193.html
Gateway MQTT with RFM69

Hello, I'm trying to make an MQTT gateway with an Arduino Uno, a HanRun HR911105A shield and an RFM69. I am on version 2.3.1 of MySensors. For several years I have built this type of gateway with the NRF24+, and it works without problems. With the RFM69, if I wire MISO to pin 12, the MQTT connection waits on incomprehensible IP addresses, but the RFM69 starts up fine. If I remove the MISO wire, the MQTT connection uses the correct address and works, but of course the RFM69 does not start. I tried wiring the RST to the GND of the shield, but the result is always the same. I have looked at a lot of posts but found nothing that deals with this problem. Any ideas? Any advice? An experiment? Thanks in advance.

@miclane The HR911105A is known to have trouble sharing the SPI bus. The general recommendation is to use software SPI. See the linked page for the wiring and defines.

Thank you for your reply. But I do not see anything on the page indicated which corresponds to the pinning for an RFM69. I tried to compose with this page and the RFM69 pinouts, but I get the same results. Either the Ethernet part works but not the radio part, or the radio part works but not the Ethernet part. Is this configuration really operational? Because a lot of posts suggest people have not managed to make it work. Cordially.

@miclane yes, you are right, there are no instructions for RFM69. Sorry. I have very little experience with RFM69, no experience with the ethernet shield, no experience with softspi and no experience with mqtt. Hopefully someone else can help.

@miclane said in Gateway MQTT with RFM69:

HanRun HR911105A

This only tells what Ethernet jack is on the board. Do you have more information on your Ethernet shield? (specs, schematics, or even a picture?)

For several years I have built this type of gateway

What changed (apart from the MySensors library version) since then? Please share your sketch and a startup debug log.
- Yes, there is of course the RJ45 plug on the shield, which plugs directly onto the Arduino Uno. The only indications are HanRun HR911105A 15/10.
- What has changed is that I have always used the NRF24+. The gateway that I use in operation at home is also on MySensors version 2.3.1 with the NRF24+.
- I use the basic sketch from the site:

#define MY_DEBUG
#define MY_TRANSPORT_WAIT_READY_MS 10000
#define MY_RADIO_RFM69
#define MY_RF69_SPI_CS 9
#define MY_GATEWAY_W5100
#define MY_GATEWAY_MQTT_CLIENT
#define MY_MQTT_PUBLISH_TOPIC_PREFIX "mhs-pub-1"
#define MY_MQTT_SUBSCRIBE_TOPIC_PREFIX "mhs-sub-1"
#define MY_MQTT_CLIENT_ID "MHS_GW_02"
#define MY_MQTT_USER "xxxxxx"
#define MY_MQTT_PASSWORD "yyyyyyyy"
#define MY_IP_ADDRESS 12,9,0,101
#define MY_IP_GATEWAY_ADDRESS 12,9,0,1
#define MY_IP_SUBNET_ADDRESS 255,255,255,0
#define MY_CONTROLLER_IP_ADDRESS 12,9,0,8
#define MY_PORT 1883
#define MY_MAC_ADDRESS 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED
#define MY_DEFAULT_LED_BLINK_PERIOD 300

#include <SPI.h>
#include <Ethernet.h>
#include <MySensors.h>

void setup() {}
void presentation() {}
void loop() {}

Thank you.

Hello @miclane, I'm currently building an MQTT gateway with RFM69 and a Wiznet W5100 module. Are you using the same Ethernet module? Please read about the chip select/SEN issue: I've modified my module and both chips (RFM69, W5100) are starting up.

Mapping:
W5100-MO = RFM69-MOSI = D11
W5100-MI = RFM69-MISO = D12
W5100-SCK = RFM69-SCK = D13
W5100-NSS = D10
RFM69-NSS = A0
RFM69-DI00 = D02

Defines:
#define MY_RADIO_RFM69
#define MY_IS_RFM69HW // Mandatory if your radio module is the high power version (RFM69HW and RFM69HCW); comment it out if that's not the case
#define MY_RF69_SPI_CS 14
// W5100 Ethernet module SPI enable (optional if using a shield/module that manages the SPI_EN signal)
#define MY_W5100_SPI_EN 10
#define MY_GATEWAY_MQTT_CLIENT

Thank you for your very clear answer. I will test by getting a MOSFET. Have a good day.
just wanted to add that my W5100/RFM69 ethernet gateway works - but I had to patch two lines in the Ethernet library (w5100.h), see Now I'll see if the MQTT gateway works, too...
https://forum.mysensors.org/topic/10071/gateway-mqtt-with-rfm69/3?lang=en-US
Type: Posts; User: raj1986

This is part of a program spread over multiple files. An object is returned from a function declared extern "C". A .so (shared object) file is created from this to be used by client programs...

Hi, I have a class with a string variable like

#ifdef __GNUC__
#define DLL_PUBLIC __attribute__ ((visibility ("default")))
#elif DLL_PUBLIC_IMPORT
#define DLL_PUBLIC __declspec(dllimport)...

Hi, I am using Regex to evaluate a match. I see

var lines = System.IO.File.ReadAllLines(fileName);
foreach (var line in lines)
{
    MatchCollection...

Thanks, you explained it so nicely.

Hi, I am very new to C#. While I was trying to declare a variable variab as private and use the getters and setters to access it, I see syntax like

namespace MyTest{
class Person

Thanks a lot. It is clear now.

I have got some doubt on destructor calls. When the object is deleted, the destructor is called. When is the object deleted? A stack object when it goes out of scope, and a heap object when we.
https://forums.codeguru.com/search.php?s=44fccc4bb0a109d68ee73c6bda543eb9&searchid=22106014
04 October 2013 17:16 [Source: ICIS news] LONDON (ICIS)--European methyl tertiary butyl ether (MTBE) prices edged down slightly this week, in line with a generally weak energy complex, sources said on Friday. Trading activity in the spot market was again at low levels, and the factor against Eurobob gasoline cash barges remained steady throughout the week at 1.15-1.16. “Gasoline demand is weak. [There’s] not a lot of blending going on,” one trader said. The trader also noted that the switch to winter grade gasoline has led to a lot of blenders switching to the more competitively priced butane as their octane-booster of choice. With other consumers sourcing their requirements on term contracts, it is thought there is currently limited demand for further volumes in the spot market. Sources have seen a narrowing of the naphtha and gasoline spread, leading to expectations that MTBE premiums over gasoline will also decline as a result. MTBE prices were at $1,070/tonne (€781/tonne) FOB (free on board) AR (Amsterdam-Rotterdam) at the close of business on Friday, a fall of $30-31/tonne from last week. Prices have steadily declined over the past month losing $122/tonne since 6 September, pressured by ailing demand from consumers in Euro
http://www.icis.com/Articles/2013/10/04/9712608/Europe-MTBE-trades-lower-on-weak-energy-limited-demand.html
Re: Python-list Digest, Vol 81, Issue 63
- From: Chris Rebert <clp2@xxxxxxxxxxxx>
- Date: Wed, 9 Jun 2010 00:09:09 -0700

On Tue, Jun 8, 2010 at 11:57 PM, madhuri vio <madhuri.vio@xxxxxxxxx> wrote:

import tkinter
root = tkinter.Tk() # initialize tkinter and get a top level instance
root.title("madhuri is a python")
canvas = tkinter.Canvas(root) # creating the canvas under the root
canvas.pack() # to call the packer geometry
canvas.create_rectangle(20, 10, 120, 80, fill=colors[0])
root.close()
tk.destroy()

this is the program i have written and i am unable to execute it as i get an attribute error like this...

$ python tkinter.py
Traceback (most recent call last):
  File "tkinter.py", line 4, in <module>
    import tkinter
  File "/home/manoj/tkinter.py", line 6, in <module>
    root = tkinter.tk() # initialize tkinter and get a top level instance
AttributeError: 'module' object has no attribute 'tk'

where is the mistake and what do i do??? it's all urgent

*Don't name your module the same name as a built-in module.* Rename your /home/manoj/tkinter.py file to something else. Also, it seems that line should be "root = tkinter.Tk()" with a capital T; your actual code doesn't match the code snippet you posted. Finally, to start a new topic/thread on the mailing list, *please don't reply to a random digest*. Follow the instructions in the digest message itself:

On Tue, Jun 8, 2010 at 3:30 PM, <python-list-request@xxxxxxxxxx> wrote: <actual unnecessary digest snipped> Send Python-list mailing list submissions to python-list@xxxxxxxxxx

Regards, Chris
--
Netiquette; sigh.
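Chris's diagnosis (a local tkinter.py shadowing the real module) is easy to demonstrate without a GUI at all. A minimal sketch; the temporary directory, the file's contents, and the attribute name are mine, purely for illustration:

```python
import importlib
import os
import sys
import tempfile

# Create a directory containing a file named tkinter.py -- the same
# situation as /home/manoj/tkinter.py in the thread above.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "tkinter.py"), "w") as f:
    f.write("note = 'this is the local file, not the standard library'\n")

# Running a script from that directory effectively does this:
sys.path.insert(0, workdir)
sys.modules.pop("tkinter", None)  # make sure no cached copy wins
tkinter = importlib.import_module("tkinter")

# The local file shadows the real module, so Tk(), Canvas(), etc. are gone:
print(hasattr(tkinter, "Tk"))  # False
print(tkinter.note)
```

Renaming the local file (and deleting any stale tkinter.pyc next to it) restores the real module, which is exactly the advice given above.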
http://coding.derkeiler.com/Archive/Python/comp.lang.python/2010-06/msg00708.html
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.

On Thu, Nov 29, 2001 at 04:20:08PM -0500, Phil Edwards wrote:
> >.

Right. I guess what I'm saying is that in most cases I stay strictly standard as much as possible, but I'd like to have a nice memory pool allocator. The library has one that I'm willing to use as an extension. My alternative is to write/find another one, which also wouldn't be standard, so I'm no worse off.

> Even if the off-the-cuff idea above were implemented -- and it /is/ ugly
> as sin :-) -- we couldn't recommend that users make use of names in the
> implementation namespace. They'd have to be wrapped and exposed somehow.

Perhaps in ext?

> >..

In summary, what I'm asking is that since the memory pool logic in the library is nice, well-optimized, well-maintained, and bug-free (?), it'd be nice to expose it as an extension, since I'd have to go make one or find one anyway. I would be forever grateful :)

--
-----------------------------------------------------------------
Brad Spencer - spencer@infointeractive.com - "It's quite nice..."
Systems Architect | InfoInterActive Corp. | An AOL Company
http://gcc.gnu.org/ml/libstdc++/2001-11/msg00334.html
14 Downloads. Updated 18 Oct 2020.

### G1 and G2 fitting with clothoids, splines of clothoids, circle arcs and biarcs

**by Enrico Bertolazzi and Marco Frego**

For the documentation see `manual.md`. This is the NEW Object Oriented (OO) version of the Clothoids library. For the old non-OO interface, look at the branch `old_interface`.

**Authors:** Enrico Bertolazzi and Marco Frego, Department of Industrial Engineering, University of Trento. enrico.bertolazzi@unitn.it, m.fregox@gmail.com

Enrico Bertolazzi (2020). ebertolazzi/Clothoids (), GitHub. Retrieved .

Inspired: Polynomialspirals

The software is developed on OSX. To compile the MEX library, execute CompileLib.m in the directory matlab. To compile the C++ lib (to be used in C++) there is a makefile (that works on Linux and OSX) and a CMakeLists.txt to be used with cmake on all platforms.

Has anyone compiled this with macOS on a MacBook Air? Can the instructions be posted here please? Thanks.

Thank you for sharing and following people's requests. Definitely a good practice! I hope to benefit from this work for my research.

I found a solution by changing line 398 in Triangle2D.cc to include parentheses: return (std::max)(d1,(std::max)(d2,d3)); (solution found here:)

If you are compiling on Windows with the Visual Studio compiler, the error is probably due to some version of the compiler defining max and min as macros. I added a workaround for that in the last version of the library.

It is correct; in the latest version I added the header inclusion. The error is due to the fact that I develop mainly on OSX, which includes headers for the STL in a slightly different way.

Hi, I tried to compile your lib. There is an error in Triangle2D; I think you forgot to include the header <algorithm>. After I fixed that, I was able to compile it in Matlab.
https://www.mathworks.com/matlabcentral/fileexchange/64849-ebertolazzi-clothoids
An Extensive Examination of Data Structures
Scott Mitchell
4GuysFromRolla.com
November 2003

Summary: This article, the second in a six-part series on data structures in the .NET Framework, examines three of the most commonly studied data structures: the Queue, the Stack, and the Hashtable. As we'll see, the Queue and Stack are specialized ArrayLists that restrict how their contents may be accessed. (17 printed pages)

Contents
Introduction
Providing First Come, First Served Job Processing
A Look at the Stack Data Structure: First Come, Last Served
The Limitations of Ordinal Indexing
The System.Collections.Hashtable Class
Conclusion

Introduction

In Part 1, we looked at arrays, whose elements are stored in memory as a contiguous block, thereby making reading from or writing to a specific array element very fast. Two downsides of arrays are their homogeneity and the fact that their size must be specified explicitly. To combat this, the .NET Framework Base Class Library offers an ArrayList data structure, which provides a heterogeneous collection of elements that need not be explicitly resized. As discussed in the previous article, the ArrayList works by using an array of type object. Each call to the ArrayList's Add() method checks to see if the internal object array can hold any more elements, and if not, the array is automatically doubled in size. In this second installment of the article series, we'll continue our examination of array-like data structures by first examining the Queue and Stack. These two data structures, like the ArrayList, hold a contiguous block of heterogeneous elements. However, both the Queue and Stack place limitations on how their data can be accessed. Following our look at the Queue and Stack, we'll spend the rest of this article digging into the Hashtable data structure. A Hashtable, which is sometimes referred to as an associative array, stores a collection of heterogeneous key/value pairs.

When building a system that processes incoming job requests, you must decide the order with which the incoming requests will be handled.
The two most common approaches used are: - First come, first served - Priority-based processing First come, first served is the job-scheduling task you'll find at your grocery store, the bank, and the DMV. Those waiting for service stand in a line, and each person is served in the order in which he or she arrived. Since incoming requests might arrive quicker than you can process them, you'll need to place the requests in some sort of buffer that can preserve the order in which they arrived. One option is to use an ArrayList and an integer variable called nextJobPos to indicate the array position of the next job to be completed. When each new job request comes in, simply use the ArrayList's Add() method to add it to the end of the ArrayList. Whenever you are ready to process a job in the buffer, grab the job at the nextJobPos position in the ArrayList and increment nextJobPos. The following program illustrates this algorithm: using System; using System.Collections; public class JobProcessing { private static ArrayList jobs = new ArrayList(); private static int nextJobPos = 0; public static void AddJob(string jobName) { jobs.Add(jobName); } public static string GetNextJob() { if (nextJobPos > jobs.Count - 1) return "NO JOBS IN BUFFER"; else { string jobName = (string) jobs[nextJobPos]; nextJobPos++; return jobName; } } } While this approach is fairly simple and straightforward, it is horribly inefficient. For starters, the ArrayList continues to grow with every call to the ArrayList's Add() method. As the Add() method is continually called, the ArrayList's internal array's size is continually redoubled as needed. Imagine one job arriving each second and being processed immediately: after five minutes (300 seconds) the ArrayList's internal array is dimensioned for 512 elements, even though there has never been more than one job in the buffer at a time! This trend, of course, continues so long as the program continues to run and the jobs continue to come in. The reason the ArrayList grows to such ridiculous proportions is that the buffer locations used for old jobs are not reclaimed.
That is, when the first job is added to the buffer, and then processed, the first spot in the ArrayList is ready to be reused again. Consider the job schedule presented in the previous code sample. After the first two lines— AddJob("1") and AddJob("2")—the ArrayList will look like Figure 1. Figure 1. The ArrayList after the first two lines of code Note that there are 16 elements in the ArrayList at this point because, by default, the ArrayList, when initialized, creates its internal object array with 16 elements. Next, the GetNextJob() method is invoked, which removes the first job, resulting in something like Figure 2. Figure 2. Program after the GetNextJob() method is invoked When AddJob("3") executes, we need to add another job to the buffer. Clearly the first ArrayList element (index 0) is available for reuse. Initially it might make sense to put the third job in the 0 index. However, this approach can be ruled out by considering what would happen if after AddJob("3") we did AddJob("4"), followed by two calls to GetNextJob(). If we placed the third job in the 0 index and then the fourth job in the 2 index, the jobs would be processed out of order. The problem arises because the ArrayList represents the list of jobs in a linear ordering. That is, we need to keep adding the new jobs to the right of the old jobs to guarantee that the correct processing order is maintained. Whenever we hit the end of the ArrayList, the ArrayList is doubled, even if there are unused ArrayList elements due to calls to GetNextJob(). To fix this problem, we need to make our ArrayList circular. A circular array is one that has no definite start or end. Rather, we have to use variables to maintain the beginning and end of the used region of the array; when a new job arrives it is placed at the end marker, and the marker is then advanced with the modulus operator, so that it wraps back to index 0 when it passes the last slot. Note The modulus operator, %, when used like x % y, calculates the remainder of x divided by y. The remainder is always between 0 and y – 1.
This approach works well if our buffer never has more than 16 elements, but what happens if we want to add a new job to the buffer when there are already 16 jobs present? Like with the ArrayList's Add() method, we'll need to redimension the circular array appropriately, by, say, doubling the size of the array. The System.Collections.Queue Class The functionality we have just described—adding and removing items to a buffer in first come, first served order while maximizing space utilization—is provided in a standard data structure, the Queue. The .NET Framework Base Class Library includes such a built-in class, the System.Collections.Queue class. Whereas our earlier code provided AddJob() and GetNextJob() methods, the Queue class provides identical functionality with its Enqueue() and Dequeue() methods, respectively. Behind the scenes, the Queue class maintains an internal circular object array and two variables that serve as markers for the beginning and ending of the circular array: head and tail. By default, the Queue's initial capacity is 32 elements, although this is customizable when calling the Queue's constructor. Since the Queue is maintained with an object array, variables of any type can be queued up. The Enqueue() method determines if there is sufficient capacity for adding the new item to the queue; if there is not, the internal array is enlarged first. The new item is then placed at the tail index, after which tail is "incremented" using the modulus operator to ensure that tail does not exceed the internal array's length. If you want to just look at the head element, but not actually dequeue it, the Queue class also provides a Peek() method. What is important to realize is that the Queue, unlike the ArrayList, does not allow random access. That is, you cannot look at the third item in the queue without dequeuing the first two items. (However, the Queue class does have a Contains() method, so you can determine whether or not a specific item exists in the Queue.)
If you know you will need random access, the Queue is not the data structure to use—the ArrayList is. The Queue is, however, ideal for situations where you are only interested in processing items in the precise order in which they were received. A Look at the Stack Data Structure: First Come, Last Served The .NET Framework Base Class Library includes the System.Collections.Stack class, which, like the Queue class, maintains its elements internally using a circular array of type object. Note that the Stack class's default capacity is 10 elements, as opposed to the Queue's 32. Like the Queue and ArrayList, this default capacity can optionally be specified by the constructor. While queues have everyday analogues—lines at the DMV, for instance—the stack most familiar to programmers is the call stack, which records the functions currently executing. (To see the call stack in action, create a project in Visual Studio® .NET, set a breakpoint, and go to Debug/Start. When the breakpoint hits, display the Call Stack window from Debug/Windows/Call Stack.) Stacks are also commonly used in parsing grammars. The Limitations of Ordinal Indexing Recall that accessing an array element by its ordinal index is a constant-time operation. (Bear in mind that constant-time was denoted as O(1).) Rarely do we know the ordinal position of the data we are interested in, though. For example, consider an employee database in which each employee is identified by a social security number. One option is to sort the employees by their social security numbers, which reduces the asymptotic search time down to O(log n). Ideally, what we would like to be able to do is access an employee's records in O(1) time. One way to accomplish this is to build a huge array, with an entry for each possible social security number value. That is, our array would start at element 000-00-0000 and go to element 999-99-9999, as shown in the figure below: Such an array would span 10^9—which is one billion (1,000,000,000)—different social security numbers. For a company with 1,000 employees, only 0.0001% of this array would be utilized. (To put things in perspective, your company would have to employ roughly one-sixth of the world's population in order to make this array near fully utilized.)
Compressing Ordinal Indexing with a Hash Function Clearly creating a one billion element array to store information about 1,000 employees is unacceptable. The alternative is to compress the range of ordinal indexes with a hash function H. The inputs to H can be any nine-digit social security number, whereas the result of H is a four-digit number—merely the last four digits of the nine-digit social security number. In mathematical terms, H maps elements from the set of nine-digit social security numbers to elements from the set of four-digit social security numbers, as shown graphically in Figure 9. Figure 9. Graphical representation of a hash function Figure 9 illustrates a behavior exhibited by all hashing functions—collisions. That is, with any function that maps a larger set onto a smaller set, two distinct inputs can map to the same output. To put it back into the context of our earlier example, consider what would happen if a new employee was added with social security number 123-00-0191. Attempting to add this employee to the array would cause a problem since there already exists an employee at array location 0191 (Jisun Lee). Mathematical Note A hashing function can be described in more mathematically precise terms as a function f : A -> B. Since the set A is larger than the set B, such a function cannot be one-to-one, so collisions are unavoidable. Collision Avoidance and Resolution When adding data to a hash table, a collision throws a monkey wrench into the entire operation. Without a collision, we can simply store the item at its hashed location. (If this makes no sense, don't worry!) Choosing an appropriate hash function—one that makes collisions rare—is referred to as collision avoidance. Much study has gone into this field, as the hash function used can greatly impact the overall performance of the hash table. In the next section we'll look at the hash function used by the Hashtable class in the .NET Framework. In the case of a collision, there are a number of strategies that can be employed. The task at hand, referred to as collision resolution, is to find some other place to put the colliding item; one of the simplest approaches, linear probing, checks the next slot, and the next, until a free one is found, as illustrated in Figure 10. Figure 10. Suppose Bob is inserted first, his social security number hashing to 1235. After Bob, Cal is inserted, his value hashing to 1237. Since no one is currently occupying 1237, Cal is inserted there. Danny is next, and his social security number is hashed to 1235. Since 1235 is taken, 1236 is checked.
Since 1236 is open, Danny is placed there. Finally, Edward is inserted, his social security number also hashing to 1235. Since 1235 is taken, 1236 is checked. That's taken too, so 1237 is checked. That's occupied by Cal, so 1238 is checked, which is open, so Edward is placed there. Collisions also present a problem when searching a hash table. For example, given the hash table above, to search for an employee we hash the social security number and probe forward from that slot, comparing as we go, until we either find the employee or reach an empty slot, in which case we can conclude that there does not exist an employee with social security number 111-00-1235. The System.Collections.Hashtable Class The .NET Framework Base Class Library includes an implementation of a hash table in the Hashtable class. When adding an item to the Hashtable, you must provide not only the item, but also the unique key by which the item is accessed, as in this C# fragment: using System; using System.Collections; ... static Hashtable ages = new Hashtable(); public static void Main() { // Add some values to the Hashtable, indexed by a string key ages.Add("Scott", 25); ages.Add("Sam", 6); ages.Add("Jisun", 25); // Access a particular key if (ages.ContainsKey("Scott")) { int scottsAge = (int) ages["Scott"]; Console.WriteLine("Scott is " + scottsAge.ToString()); } else Console.WriteLine("Scott is not in the Hashtable"); // List the keys foreach(string key in ages.Keys) Console.WriteLine(key); } Realize that the order with which the items are inserted and the order of the keys in the Keys collection are not necessarily the same. The ordering of the Keys collection is based on the slot in which each key's item was stored, so running the above code may list the keys in a different order, even though the data was inserted into the Hashtable in the order "Scott," "Sam," "Jisun." The Hashtable Class's Hash Function The Hashtable class's hash function is a bit more complex than the social security number hash code we examined earlier. First, keep in mind that the hash function must return an ordinal value. This was easy to do with the social security number example because the social security number is already a number. For arbitrary keys, the .NET Framework obtains an ordinal through the GetHashCode() method defined in the System.Object class. Since every type is derived, either directly or indirectly, from Object, all objects have access to this method. Therefore, a string, or any other type, can be represented as a unique number.
The Hashtable class's hash function is based on the key's GetHashCode() value, reduced to a slot number in the internal table; when adding an item, a collision may mean the item is stored somewhere other than the slot its hash suggests, and when retrieving an item, the actual item must be found if it is not in the expected location. Earlier we briefly examined two collision resolution strategies—linear and quadratic probing. The Hashtable class uses a different technique referred to as rehashing. (Some sources refer to rehashing as double hashing.) Rehashing works as follows: there is a set of different hash functions, H1 through Hn. When inserting or retrieving an item from the hash table, initially the H1 hash function is used. If this leads to a collision, H2 is tried instead, and onwards up to Hn if needed. In the previous section I showed only one hash function—this one hash function is the initial hash function (H1). The other hash functions are very similar to this function, differing only by a multiplicative factor applied to the probe number k. The Hashtable also exposes a load factor, the maximum ratio of occupied slots to total slots; a load factor of 0.5, for example, means a hash table may fill at most half of its slots. That is, the other half must remain empty. In an overloaded form of the Hashtable's constructor, you can specify a loadFactor value between 0.1 and 1.0. Realize, however, that whatever value you provide, it is scaled down to 72 percent of that value (confusing, I know). Note I spent a few days asking various listservs and folks at Microsoft why this automatic scaling was applied. I wondered why, if they wanted an effective ceiling of 0.72, they did not simply expose that range directly. Since with rehashing the hash value of each item in the hash table is dependent on the number of total slots in the hash table, all of the values in the hash table need to be rehashed when the table is expanded (the hash of every item changes with the slot count). With the default load factor, you can expect on average 3.5 probes per collision. Conclusion In this article we examined three data structures with inherent class support in the .NET Framework Base Class Library: - The Queue - The Stack - The Hashtable The Queue and Stack provide ArrayList-like capabilities in that they can store an arbitrary number of heterogeneous objects. The Queue and Stack differ from the ArrayList in the sense that the ArrayList permits random access while the Queue and Stack restrict how their items may be read and removed. The final data structure this article examined was the Hashtable.
http://msdn.microsoft.com/en-us/library/aa289149(v=vs.71).aspx
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. Automated Action Birthday Email Can anyone guide me on how to auto-send a birthday email to every customer on his/her birthday? My idea is to use a scheduler which checks if today is someone's birthdate and sends an email. I am a novice in this and detailed help would be much appreciated.... You can create a scheduler and, in its method, search for records with a domain condition matching the birthdate's month and day against the current date. If records are found, send mail using the mail.mail object. Example, to create the method in a custom module:- from openerp.osv import fields,osv from openerp.tools.translate import _ import datetime class res_partner(osv.osv): _inherit = 'res.partner' def send_birthday_email(self, cr, uid, ids=None, context=None): partner_obj = self.pool.get('res.partner') mail_mail = self.pool.get('mail.mail') mail_ids = [] today = datetime.datetime.now() today_month_day = '%-' + today.strftime('%m') + '-' + today.strftime('%d') par_id = partner_obj.search(cr, uid, [ ('customer','=',True),('birthdate','like',today_month_day)]) if par_id: try: for val in partner_obj.browse(cr, uid, par_id): name = val.name subject = "Birthday Wishes" body = _("Hello %s,\n" %(name)) body += _("\tWish you Happy Birthday\n") footer = _("Kind regards.\n") footer += _("%s\n\n"%val.company_id.name) mail_ids.append(mail_mail.create(cr, uid, { 'email_to': val.email, 'subject': subject, 'body_html': '<pre><span class="inner-pre" style="font-size: 15px">%s<br>%s</span></pre>' %(body, footer) }, context=context)) mail_mail.send(cr, uid, mail_ids, context=context) except Exception, e: print "Exception", e return None In the xml File Create Scheduler <record forcecreate="True" id="ir_cron_employee_birth" model="ir.cron"> <field name="name">Employee Birthday scheduler</field> <field eval="True" name="active"/> <field name="interval_number">1</field> <field
name="interval_type">days</field> <field name="numbercall">-1</field> <field eval="False" name="doall"/> <field eval="'res.partner'" name="model"/> <field eval="'send_birthday_email'" name="function"/> <field eval="'()'" name="args"/> </record> Or in OpenERP use the menu Settings -> Configuration -> Scheduler -> Scheduled Actions to create the scheduler. Just create a scheduler to check the dob of customers; if the date is today, create an email. You do not need to send the email yourself: another scheduler sends created emails automatically at its scheduled interval, though you can also send it from the same scheduler if you prefer. It all depends on you; do it how you want. Let me know if any help is needed.
https://www.odoo.com/forum/help-1/question/automated-action-birthday-email-55646
This chapter is designed to explore the mechanisms used by C and C++ to access external files residing on disk. C programs have access to two completely different sets of file input-output functions. High-level file functions are recognized by the ANSI C standards committee. Low-level file access functions are what the high-level functions are built upon. The low-level routines reside as intrinsics built into the operating system in most multi-user, multi-tasking operating systems and as such provide improvements in speed and size over functions residing within a library. In MS-DOS, the low-level routines are functions that reside in the standard C library and provide no benefit over the high-level file functions. High-level file access structures the file access through memory buffers. These buffers hold data going to or coming from the file. On a file being accessed for write, the data being written to the file is actually written to the memory buffer for the file. When the buffer is full, the data is transferred to the disk with one physical write. Therefore, it is proper to say that high-level file access has two distinct views of the file, the logical view and the physical view. The logical view is where the programmer assumes that each time a write is performed the data is written to the file, where in reality the data is placed into a memory buffer for later transfer to the file. The physical view is where the input-output subsystem of the operating system is used to transfer the memory buffer contents to the disk with one input-output operation. High-level file access makes life easier for the programmer because there is no need to know about sector sizes, buffer lengths or other operating system dependent considerations. The only need is to have a pointer to FILE type data and through that pointer access to the caching buffer that holds the data.
FILE *fp; High-level file access has the following functions available: **FILE *fopen(char *filename, char *mode);** The argument filename is the physical name of the file on disk, including the drive designator, path, and extension. This argument can be a string literal, or variable. The mode is a string literal or variable holding the mode that the file is to be opened with. Allowable modes for IBM PC are: "r" => read; file must exist "w" => write; if file does not exist it is created; if it does exist, it is overwritten "a" => append; data added to the end of the existing data, or new file created "r+" => open for both read and write; the file must exist "w+" => open for both read and write; if file exists contents over-written; if doesn't exist, created "a+" => open for both read and appending; if doesn't exist, create The function returns an address if the file is opened. If the file could not be opened, the function returns a NULL value for the address. #include <stdio.h> ... FILE *fp; ... if((fp=fopen("test.dat","w")) == NULL) { puts("Cannot Open File"); exit(1); } **int fclose( FILE *stream);** This function closes an open stream in an orderly fashion, which includes the flushing of any data held in internal data buffers to the disk. The function returns an EOF if an error is detected, otherwise it returns zero. **int fputc(char c, FILE *fp);** The argument c is a character to be written to the disk and fp is the FILE pointer to the data stream where the data is to be written. The function returns an EOF, usually a -1, to indicate an end-of-file or error. If the function succeeded, the function returns the character that was written. #include <stdio.h> /* * The following statements send a line of output to a stream */ ... for(i = 0; (i < 80) && (fputc( str[i], stream) != EOF) && (str[i] != '\n'); i++); **int fgetc( FILE *fp );** The argument fp is a pointer to a FILE type object that has been opened and is capable of handling input.
On success the function returns the integer ASCII value of the character read. If the function fails, meaning it encountered an end-of-file condition or some other error, it returns an EOF, usually -1. #include <stdio.h> /* * The following statements gather a line of input from a * stream */ ... for(i = 0; (i < 80) && ((ch = fgetc(stream)) != EOF) && (ch != '\n'); i++) buffer[i] = ch; buffer[i] = '\0'; **char *fgets(char *str, int num, FILE *fp);** The argument str is a character array or pointer to a character array and num is the maximum number of characters to be read into the string str. The argument fp is the FILE pointer into the data stream. Characters are read from the input stream into str until a newline is read, end-of-file is reached, or num - 1 characters have been read. On success the function returns the address of the buffer that was filled, in the example, that would be the address of str. If an error or end-of-file is encountered, the function returns a NULL value. #include <stdio.h> /* * The following statement gets a line of input from a stream. * No more than 99 characters, or up to \n, are read from the * stream */ ... result = fgets(line,100,stream); int fputs(char *str, FILE *fp); The argument str is the character array or pointer to an array of null-terminated characters that are to be written to the stream fp. The function returns the last character output, if successful. If the string is empty, the return value is 0 on most systems, but some UNIX implementations return an indeterminate value. A return of EOF, usually -1, indicates an error. #include <stdio.h> /* * The following statement writes a string to a stream */ ... result = fputs(buffer,stream); int fprintf(FILE *stream,char *format-string[,arguments...]); The argument stream is the data stream where the data is to be written. format-string contains escape-sequences and format-specifiers exactly like those used in printf(), and the arguments are data items that correspond to the format-specifiers.
On success the function returns the number of characters printed. If the function cannot write to the data stream, the return value is EOF. #include <stdio.h> ... FILE *stream; int i = 10; double fp = 1.5; char *s = "this is a string"; char c = '\n'; ... stream = fopen("results","w"); ... fprintf(stream,"%s%c",s,c); fprintf(stream,"%d\n",i); fprintf(stream,"%f",fp); int fscanf(FILE *stream,char *format-string[,arguments...]); The argument stream is the data stream to be read. format-string contains the format-specifiers for data conversion, arguments are those variables that data is to be stored into. On success the function returns the number of fields that were successfully converted and assigned. The return value does not include fields that were read but not assigned. The EOF value is returned on attempt to read end-of-file. The value 0 is returned, if no fields were assigned. #include <stdio.h> ... FILE *stream; long l; float fp; char s[81]; char c; ... stream = fopen("data","r"); ... fscanf(stream,"%s",s); fscanf(stream,"%c",&c); fscanf(stream,"%ld",&l); fscanf(stream,"%f",&fp); int fread(char *buffer, int size, int count, FILE *stream); This function reads up to count items of length size from the input stream and stores them in the given buffer. The file pointer is incremented by the number of bytes actually read. If the given stream was opened in text mode, carriage-return/linefeed pairs are replaced with single linefeed characters. The replacement has no effect on the file pointer or the return value. On success the function returns the number of full items actually read, which may be less than count if an error occurs or if the end-of-file is encountered before reaching count. #include <stdio.h> ... FILE *stream; long list[100]; int numread; ... stream = fopen("data","r+b"); . . .
numread = fread((char *)list,sizeof(long),100,stream); int fwrite(char *buffer,int size,int count,FILE *stream); The function writes up to count items of length size from buffer to the output stream. The file pointer associated with stream is incremented by the number of bytes actually written. If the given stream was opened in text mode, each carriage-return is replaced with a carriage-return/linefeed pair. The replacement has no effect on the return value. On success the function returns the number of full items actually written, which may be less than count if an error occurs. #include <stdio.h> ... FILE *stream; long list[100]; int numwritten; ... stream = fopen("data","w+b"); ... numwritten = fwrite((char *)list,sizeof(long),100,stream); int fseek(FILE *stream,long offset,int origin); This function moves the file pointer associated with stream to a new location that is offset bytes from the origin. The next operation on the stream takes place at the new location. On a stream open for update, the next operation can be either a read or a write. The origin can be one of the following defined constants that appear in stdio.h. constant origin meaning SEEK_SET 0 beginning of file SEEK_CUR 1 current position of file pointer SEEK_END 2 end of file On success the function returns the value 0 if the pointer was successfully moved. A nonzero return value indicates an error. On devices incapable of seeking, the value returned is undefined. #include <stdio.h> ... FILE *stream; int result; ... stream = fopen("data","r"); . . . result = fseek(stream,0L,SEEK_SET); /* beginning of file */ int feof(FILE *stream); This function determines whether the end of the given stream has been reached. Once end-of-file is reached, read operations return an end-of-file indicator until the stream is closed or rewind() is called. On success the function returns a nonzero value when the current position is end-of-file. The value 0 is returned if the current position is not end-of-file. There is no error return. #include <stdio.h> ...
char string[100]; FILE *stream; ... while(!feof(stream)) { if(fscanf(stream,"%s",string)) process(string); } int ferror(FILE *stream); This function tests for a reading or writing error on the given stream. If an error has occurred, the error indicator for the stream remains set until the stream is closed, rewound or until clearerr() is called. On success this function returns a nonzero value to indicate an error on the given stream. The return value 0 means no error has occurred. #include <stdio.h> ... FILE *stream; char *string; ... fprintf(stream,"%s\n",string); if(ferror(stream)) { perror("write error"); clearerr(stream); } void perror(char *string); This function prints an error message to stderr. The string argument is printed first, followed by a colon, the system error message for the last library call that produced an error, and a newline. The actual error number is in the variable errno, which should be declared at the external level. The system error messages are accessed through the variable sys_errlist, which is an array of messages ordered by error number. This function has no return values. Although the following low-level functions are not defined within the ANSI C standard, they are available on most C compilers. The true use of these functions comes when a multi-user, multi-tasking operating system such as UNIX is used. On such operating systems these routines are built into the operating system kernel and give low-level access to all devices available on the system. All device drivers for such operating systems are written using these system routines to open, close, read and write to the specified device. In the MS-DOS operating system and Microsoft Windows operating environment, these are functions that reside within the standard C library and do not necessarily provide any low-level access to devices. When these routines are used there is no buffering within the operating system between the application and the disk file.
The programmer is responsible for setting up a buffer within the application that holds the data to be written or filled on a read. Since there is no operating system buffer to contend with, these routines are more efficient than standard high-level I/O functions. Use of the low-level routines tends to produce smaller programs and faster execution speeds. The functions available: int open(char *pathname, int oflag[, int pmode]); This function opens the file specified by pathname and prepares the file for subsequent reading or writing as defined by oflag. The argument oflag is an integer expression formed by combining one or more of the following manifest constants, defined in fcntl.h. When more than one manifest constant is given, the constants are joined with the bitwise OR operator. O_RDONLY Open for reading only. O_WRONLY Open for writing only. O_RDWR Open for reading and writing. O_APPEND Each write to the file will be at the end of the file. O_CREAT If the file exists, O_CREAT is ignored. However, if the file does not exist, it is created with mode **pmode**. O_TRUNC If the file exists, its contents will be discarded. O_EXCL If O_CREAT and O_EXCL are set, then **open()** fails if the file exists. O_NDELAY When opening pipes, FIFOs, and communication-line special files, this flag determines whether **open()** waits or returns immediately. Subsequent reads and writes are also affected. This has no effect on ordinary files and directories. O_BINARY Can be given to explicitly open the file in binary mode. O_TEXT Can be given to explicitly open the file in text mode. The argument pmode is used only if the O_CREAT flag is in effect. This argument is used in constructing the access permissions to the file. The permissions are found as manifest constants in the sys/stat.h header file. Those constants are defined as follows: S_IWRITE Permission to write for the user. S_IREAD Permission to read for the user. The above two constants are the only ones available on a PC compiler.
With UNIX the above constants are defined as S_IRUSR and S_IWUSR and the following constants are also available: S_ISUID set user ID on execution S_ISGID set group ID on execution S_IRWXU read, write, execute permission (owner) S_IRUSR read permission (owner) S_IWUSR write permission (owner) S_IXUSR execute permission (owner) S_IRWXG read, write, execute permission (group) S_IRGRP read permission (group) S_IWGRP write permission (group) S_IXGRP execute permission (group) S_IRWXO read, write, execute permission (other) S_IROTH read permission (other) S_IWOTH write permission (other) S_IXOTH execute permission (other) On success the function returns a file handle or descriptor for the opened file. A return value of -1 indicates an error, and errno is set to an error code value. #include <fcntl.h> #include <sys/types.h> #include <sys/stat.h> #include <io.h> #include <stdlib.h> ... int fh1, fh2; ... fh1 = open("data1",O_RDONLY); if(fh1 == -1 ) perror("Open Failed"); ... fh2 = open("data2",O_WRONLY|O_TRUNC); if(fh2 == -1) perror("Couldn't Open Output"); int creat(char *pathname, int pmode); This function creates a new file or opens and truncates an existing file. The permission setting, pmode, applies to newly created files only. The new file receives the specified permission setting after it is closed for the first time. The pmode is an integer expression containing one or both of the manifest constants defined in sys/stat.h. On success the function returns a handle for the created file if the call is successful. A return value of -1 indicates an error, and errno is set to one of the manifest constant error codes. int fh; ... fh = creat("data",S_IREAD|S_IWRITE); if(fh == -1) perror("Couldn't Create File"); int read(int handle, char *buffer, int count); This function attempts to read count bytes from the file associated with handle into buffer. After the read, the file pointer points to the next unread character in the file.
On success this function returns the number of bytes actually read, which may be less than count if there are fewer than count bytes left in the file or if the file was opened in text mode. A value of 0 indicates end-of-file. A value of -1 indicates an error. int fh, bytesread; unsigned int nbytes = BUFSIZ; char buffer[BUFSIZ]; ... bytesread = read(fh,buffer,nbytes); int write(int handle,char *buffer,int count); This function attempts to write count bytes from buffer into the file associated with handle. On success this function returns the number of bytes actually written. The value of -1 is returned to indicate an error. int fh, byteswritten; unsigned int nbytes = BUFSIZ; char buffer[BUFSIZ]; ... byteswritten = write(fh,buffer,nbytes); long lseek(int handle,long offset,int origin); This function moves the file pointer associated with handle to a new location that is offset bytes from the origin. The origin can be one of the following defined constants that appear in io.h. constant origin meaning SEEK_SET 0 beginning of file SEEK_CUR 1 current position of file pointer SEEK_END 2 end of file On success this function returns the offset, in bytes, of the new position relative to the beginning of the file. A return value of -1L indicates an error. #include <io.h> #include <fcntl.h> #include <stdlib.h> ... int fh; long position; ... fh = open("data",O_RDONLY); ... /* 0 offset from beginning */ position = lseek(fh, 0L, SEEK_SET); if(position == -1L) perror("lseek failed"); int unlink(char *pathname); This function deletes the file specified by pathname. On success this function returns the value of 0 if the file is successfully deleted. A return value of -1 indicates an error and errno is set to hold the system error number. #include <io.h> #include <stdlib.h> ... int result; ...
    result = unlink("tmpfile");
    if(result == -1)
        perror("Couldn't Delete File");

/***************************************************************
 * Program Name : testio
 * Source Name  : testio.c
 * Description  : Demonstration program to show different
 *              : techniques for writing to and reading
 *              : from the disk.
 ****************************************************************/
#include <stdio.h>      /* all I/O functions */
#include <fcntl.h>      /* all UNIX low level functions */
#include <string.h>     /* all string manipulation functions */
#include <stdlib.h>
#include <io.h>
#include <sys/types.h>
#include <sys/stat.h>   /* permission modes for UNIX low level */

extern int errno;       /* needed to go with perror() */

/*
 * describe a structure template, no variable declared
 */
typedef struct tagPERSON {
    char name[30];
    char street[20];
    char city[20];
    char state[3];
    char zip[6];
    char ssn[13];
    int  age;
    int  height;
    int  weight;
} PERSON;

/*
 * function prototypes
 */
int getdata( PERSON * );
int showdata( PERSON * );
int puts_gets( void );
int fprnt_fscan( void );
int fread_fwrite( void );
int read_write( void );
int err_handler( FILE *, char *, int );

/*
 * S T A R T   O F   P R O G R A M
 */
int main()
{
    char ans[2];
    int which;

    /*
     * which functions are to be executed
     */
    do {
        printf("\nWhich set of I/O functions are to be tested?");
        printf("\n 1. fputs and fgets");
        printf("\n 2. fprintf and fscanf");
        printf("\n 3. fread and fwrite");
        printf("\n 4. read and write");
        printf("\n 5. quit this program");
        printf("\nEnter your selection: ");
        gets(ans);
        which = atoi(ans);
        switch(which) {
        case 1: puts_gets();    break;
        case 2: fprnt_fscan();  break;
        case 3: fread_fwrite(); break;
        case 4: read_write();   break;
        case 5: return(0);
        default: printf("\n\nInvalid selection . . . 
try again!");
                 break;
        }
    } while(1);
}

/*
 * read data from screen into structure elements
 */
int getdata( PERSON *ptr )
{
    int result;

    printf("\nEnter your name: ");     gets(ptr->name);
    printf("\nEnter your street: ");   gets(ptr->street);
    printf("\nEnter your city: ");     gets(ptr->city);
    printf("\nEnter your state: ");    gets(ptr->state);
    printf("\nEnter your zip code: "); gets(ptr->zip);
    printf("\nEnter your ssn: ");      gets(ptr->ssn);
    printf("\nEnter your age: ");      scanf("%d",&ptr->age);
    printf("\nEnter your height: ");   scanf("%d",&ptr->height);
    printf("\nEnter your weight: ");   scanf("%d",&ptr->weight);

    /*
     * flush the input data stream so no newlines are left
     */
    if((result = fflush(stdin)) == EOF)
        err_handler(stdin,"stdin",1);
    return 0;
}

/*
 * display the data held in structure elements on the screen
 */
int showdata( PERSON *ptr )
{
    printf("\nPERSON: %s",ptr->name);
    printf("\n      : %s",ptr->street);
    printf("\n      : %s",ptr->city);
    printf("\n      : %s",ptr->state);
    printf("\n      : %s",ptr->zip);
    printf("\n      : %s",ptr->ssn);
    printf("\n      : %d",ptr->age);
    printf("\n      : %d",ptr->height);
    printf("\n      : %d",ptr->weight);
    return 0;
}

/*
 * Using fputs() and fgets() write data to and read data back
 * from the disk. These functions only work with string data.
 */
int puts_gets()
{
    FILE *fp;
    PERSON my;
    char *val;
    char ans[2], filename[16], text[80];
    int rtnval, linecnt, lgth;

    /*
     * load file name
     */
    strcpy(filename,"testfil1.dat");

    /*
     * open test data set
     */
    if((fp = fopen(filename,"w")) == NULL)
        err_handler(fp,filename,1);
    do {
        /*
         * acquire data
         */
        printf("\nEnter Text:");
        gets(text);
        /*
         * write to disk
         */
        lgth = strlen(text);
        text[lgth] = '\n';      /* replace NULL terminator */
        text[lgth + 1] = '\0';  /* place NULL terminator */
        if((rtnval = fputs(text,fp)) == EOF)
            err_handler(fp,filename,2);
        /*
         * keep going?
         */
        strcpy(ans," ");
        printf("\nContinue(Y/N)? 
");
        gets(ans);
    } while(!strcmp(ans,"y"));

    if((rtnval = fclose(fp)) == EOF) {
        err_handler(fp,filename,3);
    }

    /*
     * open test data set
     */
    if((fp = fopen(filename,"r")) == NULL)
        err_handler(fp,filename,3);

    /*
     * print the data back on the screen
     */
    linecnt = 0;
    do {
        /*
         * read data from disk, newline is only way to
         * distinguish records
         */
        if((val = fgets(text,sizeof(text),fp)) == NULL)
            if(err_handler(fp,filename,3))
                break;
        /*
         * display data on screen
         */
        printf("\nLine %d:%s",linecnt,text);
        ++linecnt;
        strcpy(ans," ");
        printf("\nContinue(Y/N)? ");
        gets(ans);
    } while(!strcmp(ans,"y"));

    if((rtnval = fclose(fp)) == EOF) {
        err_handler(fp,filename,3);
    }
    return 0;
}

/*
 * Using fprintf() and fscanf() functions write data to and
 * read data from the disk. Notice that fscanf() has the same
 * limitations as scanf().
 */
int fprnt_fscan()
{
    FILE *fp;
    PERSON my;
    char filename[16], lname[20], tstreet[20], ans[2];
    int rtnval;

    /*
     * load filename
     */
    strcpy(filename,"testfil2.dat");

    /*
     * open test data set
     */
    if((fp = fopen(filename,"w")) == NULL) {
        perror("FPRNT_FSCAN(): cannot open file for write");
        exit(3);
    }
    do {
        /*
         * inform user of limitations
         */
        printf("\nEnter only a single string ");
        printf("for name, street and city");
        /*
         * acquire data
         */
        getdata(&my);
        /*
         * write to disk
         */
        if((rtnval = fprintf(fp,"%s %s %s %s %s %s %d %d %d\n",
                my.name, my.street, my.city, my.state, my.zip,
                my.ssn, my.age, my.height, my.weight)) == EOF) {
            perror("FPRNT_FSCAN(): cannot fprintf to file");
            exit(4);
        }
        /*
         * keep going?
         */
        strcpy(ans," ");
        printf("\nContinue(Y/N)? 
");
        gets(ans);
    } while(!strcmp(ans,"y"));
    fclose(fp);

    /*
     * open test data set
     */
    if((fp = fopen(filename,"r")) == NULL) {
        perror("FPRNT_FSCAN(): cannot open file for read");
        exit(5);
    }

    /*
     * print the data back on the screen
     */
    do {
        /*
         * read data from disk, notice fscanf has same
         * limitation in scanning disk data as scanf has
         * in screen data; it delimits values by whitespace
         */
        if((rtnval = fscanf(fp,
                "%s %s %s %s %s %s %s %s %d %d %d",
                my.name, lname, my.street, tstreet, my.city,
                my.state, my.zip, my.ssn,
                &my.age, &my.height, &my.weight)) == EOF) {
            perror("FPRNT_FSCAN(): cannot fscanf from file");
            exit(3);
        }
        /*
         * display data on screen
         */
        showdata(&my);
        strcpy(ans," ");
        printf("\nContinue(Y/N)? ");
        gets(ans);
    } while(!strcmp(ans,"y"));
    fclose(fp);
    return 0;
}

/*
 * Using the fwrite() and fread() functions write data to and
 * read data from the disk. These are block oriented,
 * high-level buffered I/O functions.
 */
int fread_fwrite()
{
    FILE *fp;
    PERSON my;
    char filename[16], ans[2];
    int rtnval;

    /*
     * load filename
     */
    strcpy(filename,"testfil3.dat");

    /*
     * open test data set, binary mode because of integer
     * values to be written
     */
    if((fp = fopen(filename,"w+b")) == NULL) {
        perror("FREAD_FWRITE(): cannot open file for write");
        exit(6);
    }
    do {
        /*
         * acquire data
         */
        getdata(&my);
        /*
         * write to disk;
         * &my        = the address of the buffer to be written
         * sizeof(my) = the number of bytes to be written
         * 1          = the number of items of the above size
         *              to be written
         * fp         = the stream pointer
         */
        if((rtnval = fwrite(&my,sizeof(my),1,fp)) == EOF) {
            perror("FREAD_FWRITE(): cannot fwrite to file");
            exit(7);
        }
        /*
         * keep going?
         */
        strcpy(ans," ");
        printf("\nContinue(Y/N)? 
");
        gets(ans);
    } while(!strcmp(ans,"y"));
    fclose(fp);

    /*
     * open test data set, allow for binary data because of
     * integer type values
     */
    if((fp = fopen(filename,"r+b")) == NULL) {
        perror("FREAD_FWRITE(): cannot open file for read");
        exit(8);
    }

    /*
     * print the data back on the screen
     */
    do {
        /*
         * read data from disk; must be tested for less than
         * the count of items (1) to detect EOF
         */
        if((rtnval = fread(&my,sizeof(my),1,fp)) < 1)
            if(err_handler(fp,filename,9))
                break;
        /*
         * display data on screen
         */
        showdata(&my);
        strcpy(ans," ");
        printf("\nContinue(Y/N)? ");
        gets(ans);
    } while(!strcmp(ans,"y"));
    fclose(fp);
    return 0;
}

/*
 * Using the write() and read() functions, write data to and
 * read data from the disk. These are low-level, unbuffered,
 * UNIX like I/O functions.
 */
int read_write()
{
    char buf[512];
    int fp;
    PERSON my;
    char filename[16], ans[2];
    int rtnval;

    /*
     * load filename
     */
    strcpy(filename,"testfil4.dat");

    /*
     * open test data set, binary mode because of integer
     * values to be written; create dataset if not there.
     * O_WRONLY|O_CREAT|O_BINARY = open the file for writing
     * only; if the file is not there then create it and
     * write to the file in binary mode.
     * S_IWRITE = if the file has to be created then create
     * it as a read/write file, which implies read and
     * write capability allowed.
 */
#ifdef PC
    if((fp = open(filename,O_WRONLY|O_CREAT|O_BINARY,
            S_IWRITE)) == EOF) {
        perror("READ_WRITE(): cannot open file for write");
        exit(10);
    }
#else
    if((fp = open(filename,O_RDWR|O_CREAT,
            S_IREAD|S_IWRITE)) == EOF) {
        perror("READ_WRITE(): cannot open file for write");
        exit(10);
    }
#endif
    do {
#ifndef PC
        /*
         * inform user of limitations
         */
        printf("\nEnter only a single string ");
        printf("for name, street and city");
#endif
        /*
         * acquire data
         */
        getdata(&my);
        /*
         * write to disk
         */
#ifdef PC
        if((rtnval = write(fp,&my,sizeof(my))) == EOF) {
            perror("READ_WRITE(): cannot write to file");
            exit(7);
        }
#else
        sprintf(buf,"%s %s %s %s %s %s %d %d %d",
            my.name, my.street, my.city, my.state, my.zip,
            my.ssn, my.age, my.height, my.weight);
        if((rtnval = write(fp,buf,sizeof(my))) == EOF) {
            perror("READ_WRITE(): cannot write to file");
            exit(7);
        }
#endif
        /*
         * keep going?
         */
        strcpy(ans," ");
        printf("\nContinue(Y/N)? ");
        gets(ans);
    } while(!strcmp(ans,"y"));
    close(fp);

    /*
     * open test data set, allow for binary data because of
     * integer type values
     */
#ifdef PC
    if((fp = open(filename,O_RDONLY|O_BINARY)) == EOF) {
        perror("READ_WRITE(): cannot open file for read");
        exit(11);
    }
#else
    if((fp = open(filename,O_RDONLY)) == EOF) {
        perror("READ_WRITE(): cannot open file for read");
        exit(11);
    }
#endif

    /*
     * print the data back on the screen
     */
    do {
        /*
         * read data from disk
         */
#ifdef PC
        if((rtnval = read(fp,&my,sizeof(my))) < 0) {
            perror("READ_WRITE(): cannot read from file");
            exit(12);
        }
        if( rtnval == 0 ) {
            fprintf(stderr,"\nEnd Of File Reached");
            break;
        }
#else
        if((rtnval = read(fp,buf,sizeof(my))) < 0) {
            perror("READ_WRITE(): cannot read from file");
            exit(7);
        }
        if( rtnval == 0 ) {
            fprintf(stderr,"\nEnd Of File Reached");
            break;
        }
        sscanf(buf,"%s %s %s %s %s %s %d %d %d",
            my.name, my.street, my.city, my.state, my.zip,
            my.ssn, &my.age, &my.height, &my.weight);
#endif
        /*
         * display data on screen
         */
        showdata(&my);
        strcpy(ans," ");
        printf("\nContinue(Y/N)? ");
        gets(ans);
    } while(!strcmp(ans,"y"));
    close(fp);
    return 0;
}

/*
 * Sample error handler for I/O functions. Will determine if
 * error was encountered or simply end-of-file
 */
int err_handler(FILE *fileptr, char *filename, int exitnum)
{
    char errmsg[80];

    if( ferror(fileptr) ) {
        sprintf(errmsg,"ERROR - cannot access file:%s",filename);
        putchar('\n');
        perror(errmsg);
        clearerr(fileptr);
        exit(exitnum);
    }
    if( feof(fileptr) ) {
        sprintf(errmsg,"End of File reached on file:%s",
            filename);
        putchar('\n');
        perror(errmsg);
        clearerr(fileptr);
        return(1);
    }
    return 0;
}

The C++ input/output mechanism (provided with C++ compilers that support at least AT&T release 2.0) is comprised of a series of classes that have been created to handle the problem of sending and receiving data. Here is a brief description of these classes:

#. The streambuf class provides memory for a buffer along with class methods for filling the buffer, accessing buffer contents, flushing the buffer, and managing the buffer memory. It handles the most primitive functions for streams on a first-in-first-out basis.

#. The filebuf class is derived from class streambuf and extends it by providing basic file operations.

#. The strstreambuf class is derived from class streambuf and is designed to handle memory buffers.

#. The ios class represents general properties of a stream, such as whether it's open for reading and whether it is a binary or a text stream, and it includes a pointer member to a streambuf class.

#. The ostream class derives from the ios class and provides output methods. That is, it formats the data you send to an output device so that it appears in the way you expect.

#. The istream class also derives from the ios class and provides input methods. That is, it accepts data from an input device in the way you expect.

#. The iostream class is based on the istream and ostream classes and thus inherits both input and output methods.
Of course, classes in and of themselves do no good unless they are used to create instances, or objects, of those classes. Fortunately, this has already been done at a global scope, so that these objects are immediately available for use. Here is a list of the objects to which messages can be sent:

#. The cout object corresponds to the standard output stream. By default, this stream is associated with the standard output device, typically a monitor. cout is an instance of the class ostream.

#. The cerr object corresponds to the standard error stream, which can be used for displaying error messages. By default, this stream is associated with the standard output device, typically a monitor, and the stream is unbuffered. Unbuffered means that information is sent directly to the screen without waiting for a buffer to fill or for a newline character.

#. The clog object also corresponds to the standard error stream. By default, this stream is associated with the standard output device, typically a monitor, and the stream is buffered.

C++ treats the terminal screen as an object in the real world. This object has a state and a public interface. By sending both mutator and accessor messages to this object, the programmer can effectively perform all the same operations that were available using the output functions provided in the stdio library.

The global instance cout of class ostream is used to initiate all output operations to the monitor. In the class ostream, the function used most often is the overloaded bitwise left-shift operator, <<. It is typically called the insertion operator. The name "insertion" comes from the fact that characters are being "inserted" into an output buffer, as represented by the object cout. Within the class ostream the operator << (left shift) has been overloaded many times as a binary member function (requires two operands).
The one implicit argument is, of course, the instance of cout, and the one explicit argument is the data item that is to be output. It is recommended that a listing of the file iostream.h be printed; inside it can be seen the many declarations for this overloaded function within the ostream class:

    ostream& operator<<( int );
    ostream& operator<<( long );
    ostream& operator<<( double );
    ostream& operator<<( char );
    ostream& operator<<( const signed char * );
    ostream& operator<<( unsigned char );
    ostream& operator<<( short int );
    ostream& operator<<( unsigned long );
    ostream& operator<<( float );
    ostream& operator<<( long double );
    ostream& operator<<( void * );
    ostream& operator<<( streambuf * );
    ostream& operator<<( ostream& (*) (ostream&) );

These are essentially all of the types that printf() can accept. The last item is designed to accommodate a manipulator function.

The class istream is derived from the class ios and controls the handling of input from the keyboard. The global instance that the programmer uses is called cin. Think of cin as being the keyboard object, from which data will be extracted. The function used most often to read input from the keyboard is called the extraction operator, >>. Note that it is the overloaded right-shift operator, whereas the insertion function operator<<() is the overloaded left-shift operator. Within the class istream it has been overloaded many times as a binary member function. Thus, it is declared as:

    istream& operator>>( ... );

The name "extraction" comes from the fact that data is being "extracted" (taken) from the input data stream. The argument to this function is the variable name that references the storage location for the data. Note that all such overloaded functions ignore leading whitespace characters from the keyboard, and terminate upon encountering a whitespace character within the data.
The extraction operator function has been overloaded to accommodate these types:

    unsigned char *
    signed char *
    unsigned char&
    signed char&
    unsigned short int&
    short int&
    int&
    unsigned int&
    long&
    unsigned long&
    float&
    double&
    long double&
    streambuf *
    istream& (*) (istream&)

Note that unlike scanf(), there is no need to use the address operator (for a non-array type), nor is there a need to specify any type of conversion specification. The function knows which overloaded function is to be used because it matches the type of the function argument to the corresponding overloaded operator>> function that takes the same type of argument. All arguments are passed by reference.

Manipulators really are functions, but when the name of a function is stated without parentheses, the compiler generates the address of that function. Therefore, in the following statement there are no direct function calls occurring, so the left-to-right order of items to be output when sending data to the cout instance can be guaranteed. This means that when the following is written:

    cout << arg1 << manipulator << arg2;

the compiler assures that first arg1 will be sent, followed by the manipulator and finally arg2. In the case of a manipulator function, since we are supplying the address of a function that takes as its one argument a reference to cout and returns a reference, the compiler will look for an overloaded operator<<() function that conforms to this scheme. Since a pointer-to-function is usually what is being passed in the cout or cin stream, the function being pointed to can be executed by dereferencing the pointer variable that was passed and enclosing the entire expression within parentheses. Since *this always refers to the invoking instance of any nonstatic member function call, in this case it's the object cout, which will be passed to the manipulator function as an actual argument.
Finally, since the manipulator function itself returns a reference to cout, this reference must, in turn, be passed back to the original statement to allow the function chaining to occur. This is how the insertion function to accommodate manipulators is written:

    ostream& ostream::operator<< (ostream& (*ptr) (ostream&))
    {
        return (*ptr)(*this);
    }

This is the simplest type of manipulator: it requires only that a reference to the data stream be passed and that a reference be returned.

    #include <iostream.h>

    ostream& showDollars( ostream& stream )
    {
        stream.setf( ios::fixed );
        stream.fill( '$' );
        stream.width( 8 );
        stream.precision( 2 );
        stream.setf( ios::right );
        return stream;
    }

    int main()
    {
        float cash = 123.45;
        cout << "The amount " << showDollars << cash
             << " is due." << endl;
        return 0;
    }

The generic form of an output manipulator that accepts one argument is:

    ostream& manipulator(ostream& stream, type arg)
    {
        // your code goes here that uses arg
        return stream;
    }

where type is either int or long, and arg is the formal argument name. Next, the following code must be included:

    OMANIP(type) manipulator( type arg )
    {
        return OMANIP(type) (manipulator, arg);
    }

where OMANIP is a class defined in the file iomanip.h. (NOTE: This only works on Microsoft C++ v7.0, Visual C++ v1.5x and Borland C/C++ v3.1.)

As an example, here is a manipulator called set that sets the field width to whatever the argument happens to be, and also sets the fill character to an '*'.

    //
    // example of manipulator set
    // source file: setmanip.cpp
    //
    #include <iostream.h>
    #include <iomanip.h>

    ostream& set( ostream& stream, int length )
    {
        return stream << setw(length) << setfill('*');
    }

    OMANIP(int) set( int length )
    {
        return OMANIP(int) (set, length );
    }

    int main()
    {
        cout << set(7) << 123 << endl;
        cout << set(5) << 45 << endl;
        return 0;
    }

Here is another manipulator that is designed to tab to an absolute column position on some output device. This would be useful when column alignment of data is needed on a report.
If the tab position is less than the current file position marker, then a newline is performed.

    #include <iostream.h>
    #include <iomanip.h>
    #include <string.h>

    //
    // Declaration
    //
    ostream& TAB( ostream&, long );
    OMANIP(long) TAB(long);

    //
    // Definition
    //
    ostream& TAB( ostream& stream, long col )
    {
        long here = stream.tellp();
        if( col < here ) {
            stream << endl;
            here = 0L;
        }
        return stream << setw(col-here) << " ";
    }

    OMANIP(long) TAB(long col)
    {
        return OMANIP(long) (TAB, col);
    }

    class Person {
    private:
        char *name;
        long age;
        long income;
    public:
        Person( const char * = "", int = 0, float = 0.0 );
        ~Person();
        friend ostream& operator<<(ostream&, const Person& );
    };

    inline Person::Person(const char *n, int a, float i)
    {
        name = new char[strlen(n) + 1];
        strcpy(name,n);
        age = a;
        income = i;
    }

    inline Person::~Person()
    {
        delete [] name;
    }

    ostream& operator<<(ostream& stream, const Person& p )
    {
        stream.seekp(0L);
        stream << p.name;
        stream << TAB(20) << p.age;
        stream << TAB(30) << p.income;
        stream << endl;
        return stream;
    }

    int main()
    {
        Person staff[] = {
             Person("John Doe", 21, 34566.67 )
            ,Person("Mary Jones", 23, 35700.33)
            ,Person("Pat Lowry", 20, 33100.10)
        };
        const int size = sizeof( staff ) / sizeof( Person );
        for( int i = 0; i < size; ++i )
            cout << staff[i];
        return 0;
    }

There are no built-in manipulators in the standard C++ language definition that take two arguments. However, Borland C++ provides a constream class that supplies some manipulators that handle two arguments. Suppose a manipulator that requires two arguments is needed to output a line consisting of a variable number of any given character. This means that the manipulator requires two arguments: (1) the number of characters to be output, and (2) the character itself. The first thing that must be done is to package the arguments into a structure object. Here it is called args.
    #include <iostream.h>
    #include <iomanip.h>

    struct ARGS {
        char ch;
        int number;
    };

Next, the name of the structure becomes the type that is used in the IOMANIPdeclare declaration.

    IOMANIPdeclare( ARGS );

Next, write the manipulator function as though it were taking just one argument. This argument is of the type of the structure. The name of the manipulator is fill. It loops the requisite number of times and outputs the character.

    ostream& fill( ostream& stream, ARGS a )
    {
        for( int i = 0; i < a.number; ++i )
            stream << a.ch;
        return stream;
    }

Finally, write the OMANIP macro shown below. Note that the arguments are listed individually, and the body of the macro creates an instance of the structure and assigns the input values to it.

    OMANIP( ARGS ) fill(char ch, int number )
    {
        ARGS a;
        a.ch = ch;
        a.number = number;
        return OMANIP( ARGS )(fill,a);
    }

    int main()
    {
        cout << "How many characters? ";
        int number;
        while( !(cin >> number).eof() ) {
            if( cin.fail() )
                cout << "Invalid entry\n";
            else {
                cout << "Enter the character: ";
                char ch;
                cin >> ch;
                cout << fill(ch, number) << endl;
            }
            cout << "How many characters? ";
        }
        return 0;
    }

In ANSI C, file I/O is handled by functions such as fopen to open a file, fclose to close it, and fscanf and fprintf to read from and write to a file. In the iostream package, the classes meant for file I/O are defined in the header file fstream.h. There are three classes of interest in fstream.h: the ifstream class is meant for input, ofstream is meant for output, and fstream supports both input and output.
The simplest way to open a file for I/O is to create an instance of the ifstream or ofstream class, as follows:

    #include <fstream.h>

    //
    // open file named "infile" for input operations only and
    // connect it to the istream "ins"
    //
    ifstream ins("infile");

    //
    // open file named "outfile" for output operations only and
    // connect it to the ostream "outs"
    //
    ofstream outs("outfile");

As can be seen, the file can be opened and connected to a data stream when the instance of the ifstream or ofstream class is declared. There are two distinct streams for input and output. The ANSI C equivalent for connecting a file to an ifstream is to call fopen with the "r" mode. On the other hand, using ofstream in C++ is similar to calling fopen with the "w" mode in ANSI C.

Before using the stream connected to a file, it should be checked to see if the stream was successfully created. The logical NOT operator ! is overloaded for the stream classes so that it can be used to check a stream using a test like this:

    //
    // open stream
    //
    ifstream ins("infile");

    //
    // check whether stream has been opened successfully
    //
    if( !ins ) {
        cerr << "Cannot open : infile" << endl;
        exit( 1 );
    }

An ifstream or ofstream does not have to be attached to any file at the time of creation. The instance of the class can be created first, and then at a later point in the logic flow the open member function of the stream can be used to connect the class instance to a file:

    ifstream ins;
    ins.open( "infile" );
    if( !ins )
        // open failed

When the data stream is closed with the close() method the file is disconnected from the stream. When a data stream is opened by simply providing the name of a file to the stream's constructor, the method is taking advantage of C++'s allowance for default argument values.
When an instance of ifstream is declared as follows:

    ifstream ins("infile");

the constructor that gets invoked is declared as follows:

    ifstream( const char *, int = ios::in, int = filebuf::openprot );

The last two integer-valued arguments are used with the default values. The second argument to the constructor indicates the mode in which the stream operates. For ifstream, the default is ios::in, which means the file is opened for reading. For an ofstream object, the default mode is ios::out, implying that the file is opened for writing.

The constructors allow you to declare a file stream without specifying a named file. Later, the file can be associated with the data stream.

    ofstream ofile;          // creates output file stream
    ...
    ofile.open("payroll");   // ofile connects to file "payroll"
    // do some payrolling...
    ofile.close();           // close the ofile stream
    ofile.open("employee");  // ofile can be reused

By default, files are opened in text mode. This means that on input, carriage-return/linefeed sequences are converted to the '\n' character. On output, the '\n' character is converted to a carriage-return/linefeed sequence. These translations are not done in binary mode. The file opening mode is set with an optional second parameter to the open function, chosen from the following table:

Modes for File Open

    Mode Name       Operation
    ios::app        Appends data to the file.
    ios::ate        When first opened, positions file at end-of-file
                    (ate stands for "at end").
    ios::in         Opens file for reading.
    ios::nocreate   Fails to open file if it does not already exist.
    ios::noreplace  If file exists, open for output fails unless
                    ios::app or ios::ate is set.
    ios::out        Opens file for writing.
    ios::trunc      Truncates file if it already exists.
    ios::binary     Opens file in binary mode.

Note that more than one mode can be specified for a file; simply use a bitwise OR of the required modes.
For example, to open a file for output and position it at the end of existing data, bitwise OR the modes ios::out and ios::ate as follows:

    ofstream outs("outfile",ios::out|ios::ate);

As an example of file I/O in C++, consider a utility program that copies one file to another. Assume that the utility is named filecopy and that when the following command is typed:

    filecopy in.fil out.fil

filecopy copies the contents of the file named in.fil to a second file named out.fil.

    //
    // filecopy.cpp - Source available on instructors diskette
    //
    #include <fstream.h>
    #include <stdlib.h>

    const int bufsize = 256;

    int main(int argc, char *argv[])
    {
        char ch;
        ifstream f1;
        ofstream f2(argv[2]);
        char buff[bufsize];

        //
        // open the source file that is to be copied
        //
        f1.open( argv[1], ios::in );
        if( !f1 ) {
            cerr << "Cannot open " << argv[1]
                 << " for input" << endl;
            exit( 1 );
        }
        else
            cout << "File " << argv[1]
                 << " opened for input." << endl;

        //
        // check to see if the destination file was opened
        //
        if( !f2 ) {
            cerr << "File " << argv[2] << " was not opened "
                 << "for output" << endl;
            exit( 2 );
        }
        else
            cout << "File " << argv[2]
                 << " opened for output." << endl;

        // copy one file to another
        //
        // the following will copy one character at a time
        // from the file associated with f1 to the file
        // associated with f2
        //
        // while ( f2 && f1.get( ch ) )
        //     f2.put( ch );
        //
        // the following copies a buffer at a time; gcount()
        // reports how many characters the last read actually
        // extracted, so a short final block is not padded
        //
        while( !f1.eof() ) {
            f1.read( buff, sizeof( buff ) );
            f2.write( buff, f1.gcount() );
        }
        return ( 0 );
    }

There is another way to implement the last while loop that actually copies data between the files. The data can be read a line at a time and written a line at a time. To read a line, use the same get function but with the address of a buffer and the buffer's size as arguments:

    const int bufsize = 128;
    char buf[bufsize];
    // ...
    f1.get( buf, bufsize );

The call to get will extract from the input stream into the specified buffer, up to bufsize-1 characters or until a newline character is encountered. The get places a terminating null character in the buffer. By default, the get function stops at the newline character, but another delimiter can be specified as a third argument to the get function. Note that this call to get is similar to the fgets function in C except that unlike fgets, get does not copy the newline character into the buffer. Nor does get skip over the newline character. Therefore, to read lines repeatedly from a file, the newline must be extracted separately after each line is read.

Many times files must be read containing binary data that have a specific internal structure. For instance, there may be a 128-byte header followed by blocks of data. Information extracted from the header might tell that the data needed is at a specific location inside the file. To read this data, the program must be able to position the stream properly before reading from the file. In ANSI C, functions such as fseek and ftell can be used for positioning within data streams. The iostream library also enables programs to reposition within streams and, as expected, classes provide member functions that accomplish this task.

It is possible to position a stream in the iostream library by calling the member functions seekg or seekp of that stream. Because the same stream may be used for both input and output, the stream classes have the concept of a get position and a put position that respectively indicate the location from which the next read or write will occur. The get position is set using seekg, whereas seekp alters the put position.
For example, to position the stream at the 513th byte in the input stream ins, seekg can be called as follows:

    ins.seekg(512);   // next get will start at 513th byte

On the other hand, the position can be specified relative to some reference point such as the end of the file. For example, to move 8 bytes backward from the end of the stream, use the following:

    ins.seekg( -8, ios::end );

There are three reference points identified by constants defined in the ios class: ios::beg is the beginning of the stream, ios::end is the end, and ios::cur represents the current position.

The current get or put position in a file can be determined by using the tellg function, which returns the current location in an input stream; tellp returns the corresponding item for an output stream. Both functions return a variable of type streampos. The returned position value can be saved and used with seekg or seekp to return to the old location in a file.

The iostream library provides a number of functions for checking the status of a stream. The fail function tells whether something has gone wrong with the last file access method. Thus, it is possible to check for problems by calling fail for the stream as follows:

    ifstream ins("infile");
    if( ins.fail() ) {
        // stream creation has failed
        ....
    }

In fact, the logical NOT operator ! has been overloaded to call fail for a stream, so the if test can be written more simply as:

    if( !ins ) {
        // stream creation has failed
    }

When reading from a file, it is often desirable to know whether the end-of-file is reached. The eof function returns true if the stream is at the end-of-file. Once a stream has reached the end-of-file, it does not perform any I/O even if the next I/O operation is attempted after moving the stream away from the end by using seekg or seekp. This is because the stream's internal state remembers the encounter with the end-of-file. The method clear must be called to reset the state before any further I/O can take place.
Thus, eof and clear are sometimes used together as follows:

// "ins" is an istream. If the stream reached eof, clear
// the state before attempting the read from the stream
if( ins.eof() )
    ins.clear();
// reposition stream and read again...
ins.seekg( -16, ios::cur ); // move back 16 bytes
ins.get( buf, 8 );          // read up to 7 characters (plus null) into buffer

Two other member functions, good and bad, indicate the general condition of a stream. As the names imply, good returns true (a nonzero value) if no error has occurred on the stream, and bad returns true if an invalid I/O has been attempted or if the stream has an irrecoverable failure. The functions good and bad can be used in similar tests.
https://docs.aakashlabs.org/apl/cphelp/chap13.html
How to create a button programmatically in iOS Swift 4: Sometimes we need to create UI elements programmatically in iOS projects. Creating elements dynamically has advantages and disadvantages, but we are not going to discuss those today. In this tutorial, we will learn how to create a UIButton programmatically. We will also learn how to add properties to the button, like adding a title, changing its color, etc. Let's take a look.

Create a simple button:

Open Xcode and create one Single View iOS application. Give the project a name and select Swift as the programming language. Now, open the ViewController.swift file. We will add the code inside the viewDidLoad() method. This method is called after loading the view, so any UI-related changes should be done inside it. Add the following code after the super.viewDidLoad() line:

//1
let buttonX = 150
let buttonY = 150
let buttonWidth = 100
let buttonHeight = 50

//2
let button = UIButton(type: .system)

//3
button.frame = CGRect(x: buttonX, y: buttonY, width: buttonWidth, height: buttonHeight)

//4
self.view.addSubview(button)

Explanation: The commented numbers in the above program denote the step numbers below:

- First of all, define the x position, y position, height and width of the button we will create.
- Create one UIButton object and assign it to the button variable.
- Add a frame to this variable by creating a CGRect using the values we have defined.
- Finally, add this button to the view.

If you run the program now, it will not show anything on the screen. The reason is that we have not added any text or color to the button; the type of the button is system.
Let's try to change it to infoDark by changing the line as below:

let button = UIButton(type: .infoDark)

The complete example below also sets a title and colors on the button and wires up a click handler:

import UIKit

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        let buttonX = 150
        let buttonY = 150
        let buttonWidth = 100
        let buttonHeight = 50

        let button = UIButton(type: .system)
        button.setTitle("Click here", for: .normal)
        button.tintColor = .white
        button.backgroundColor = .red
        button.addTarget(self, action: #selector(buttonClicked), for: .touchUpInside)
        button.frame = CGRect(x: buttonX, y: buttonY, width: buttonWidth, height: buttonHeight)
        self.view.addSubview(button)
    }

    @objc func buttonClicked(sender: UIButton) {
        let alert = UIAlertController(title: "Clicked", message: "You have clicked on the button", preferredStyle: .alert)
        self.present(alert, animated: true, completion: nil)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

If you run this program, it shows a red "Click here" button that presents an alert when tapped.

Conclusion: In this tutorial, we have learnt how to create a custom button in iOS using Swift. Try the above program and create different types of buttons by changing the type of the button. If you have any queries, drop a comment below.

You might also like:
- Checking Prefix and Postfix of a String in Swift 4
- How to create a Rounded corner UIView in iOS with Swift 4
- How to update application badge count locally in iOS swift
- How to dismiss a keyboard on tap outside view in iOS swift
- Swift switch tutorial with Example
- Swift 4 while and repeat-while loop tutorial with example
https://www.codevscolor.com/ios-create-button-programmatically/
March 2011 Volume 26 Number 03 Forecast: Cloudy - Cloud Services Mashup By Joseph Fultz | March 2011 Up until now, I’ve spent time on solutions using Microsoft Azure or SQL Azure to augment solution architecture. This month I’m taking a look at how to combine multiple cloud services into a single app. My example will combine Azure, Azure Access Control, Bing Maps and Facebook to provide an example of composing cloud services. For those who are a little put off when thinking about federated identity or the real-world value of the social network, I’d like to introduce Marcelus. He’s a friend of mine who owns a residential and commercial cleaning company. Similar to my father in his business and personal dealings, he knows someone to do or get just about anything you want or need, usually in some form of barter. Some might recognize this as the infamous good ol’ boys’ network, but I look at Marcelus and I see a living, breathing example of the Azure Access Control service (or ACS for short) combined with a powerful social network. In real life I can leverage Marcelus and others like him to help me. However, in the virtual world, when I use a number of cloud services they often need to know who I am before they allow me to access their functionalities. Because I can’t really program Marcelus to serve Web pages, I’m going to use the cloud services in Figure 1 to provide some functionality. Figure 1 Cloud Services and Their Functionalities The scenario is that navigation to my site’s homepage will be authenticated by Facebook and the claims will be passed back to my site. The site will then pull that user’s friends from Facebook and subsequently fetch information for a selected friend. If the selected friend has a hometown specified, the user may click on the hometown name and the Bing Map will show it. 
Configuring Authentication Between Services

The December 2010 issue of MSDN Magazine had a good overview article for ACS, which can be found at msdn.microsoft.com/magazine/gg490345. I'll cover the specific things I need to do to federate my site with Facebook. To get this going properly, I'm using Azure Labs, which is the developer preview of Azure. Additionally, I'm using Azure SDK 1.3 and I've installed Windows Identity Foundation SDK 4.0. To get started, I went to portal.appfabriclabs.com and registered. Once I had access to ACS, I followed the first part of the directions found at the ACS Samples and Documentation page () to get the service namespace set up. The next goal was to get Facebook set up as an Identity Provider, but in order to do that I had to first create a Facebook application, which results in a summary like that in Figure 2.

Figure 2 Facebook Application Configuration Summary

This summary page is important, as I'll need to use information from it in my configuration of Facebook as an Identity Provider in ACS. In particular, I'll need the Application ID and the Application secret, as can be seen in the configuration information from ACS shown in Figure 3.

Figure 3 ACS Facebook Identity Provider Configuration

Note that I've added friends_hometown to the Application permissions text box. I'll need each friend's hometown in order to map it, and without specifying it here I wouldn't get it back by default. If I wanted some other data to be returned about the user by the Graph API calls, I'd need to look it up at the Facebook Developers site (bit.ly/c8UoAA) and include the item in the Application permissions list. Something worth mentioning when working with ACS: you specify the Relying Parties that will use each Identity Provider. If my site exists at jofultz.cloudapp.net, it will be specified as a relying party on the Identity Provider configuration. This is also true for my localhost.
So, in case I don't want to push to the cloud to test it, I'll need to configure a localhost relying party and select it, as illustrated in Figure 4.

Figure 4 ACS Facebook Identity Provider Configuration: Relying Parties

Figure 3 and Figure 4 are both found on the same page for configuring the Identity Provider. By the same token, if I only had it configured for localhost, but then attempted to authenticate from my Web site, it wouldn't work. I can create a custom login page, and there's guidance and a sample for doing so under Application Integration in the ACS management site. In this sample, I'm just taking the default ACS-hosted page. So far I've configured ACS and my Facebook application to get them talking once invoked. The next step is to configure this Identity Provider for my site as a means of authentication. The easiest way to do this is to install the Windows Identity Foundation SDK 4.0 found at bit.ly/ew6K5z. Once installed, there will be a right-click menu option available to Add STS reference, as illustrated in Figure 5.

Figure 5 Add STS Reference Menu Option

In my sample I used a default ASP.NET site created in Visual Studio by selecting a new Web Role project. Once it's created, I right-click on the site and step through the wizard. I'll configure the site to use an existing Security Token Service (STS) by choosing that option in the wizard and providing a path to the WS-Federation metadata. So, for my access control namespace, the path is:

jofultz.accesscontrol.appfabriclabs.com/FederationMetadata/2007-06/FederationMetadata.xml

Using this information, the wizard will add the config section <microsoft.identityModel/> to the site configuration. Once this is done, add <httpRuntime requestValidationMode="2.0" /> underneath the <system.web/> element.
Provided that I specified localhost as a relying party, I should be able to run the application and, upon startup, be presented with an ACS-hosted login page offering Facebook—or Windows Live or Google, if so configured. The microsoft.identityModel element is dependent upon the existence of the Microsoft.Identity assembly, so you have to be sure to set that DLL reference in the site to Copy Always. If it isn't, once it's pushed to Azure it won't have the DLL and the site will fail to run. Referring to my previous statement about needing configuration for both localhost and the Azure-hosted site, there's one more bit of configuration once the wizard is complete. If the wizard was configured with the localhost path, then a path for the Azure site will need to be added to the <audienceUris> element as shown here:

<microsoft.identityModel>
  <service>
    <audienceUris>
      <add value="" />
      <add value="" />
    </audienceUris>

Additionally, the realm attribute of the wsFederation element in the config will need to reflect the current desired runtime location. Thus, when deployed to Azure, it looks like this for me:

<federatedAuthentication>
  <wsFederation passiveRedirectEnabled="true" issuer="" realm="" requireHttps="false" />
  <cookieHandler requireSsl="false" />
</federatedAuthentication>

However, if I want to debug this and have it work properly at run time on my localhost (for local debugging), I'll change the realm to represent where the site is hosted locally, such as the following:

<federatedAuthentication>
  <wsFederation passiveRedirectEnabled="true" issuer=". appfabriclabs.com/v2/wsfederation" realm="" requireHttps="false" />
  <cookieHandler requireSsl="false" />
</federatedAuthentication>

With everything properly configured, I should be able to run the site and, upon attempting to browse to the default page, I'll be redirected to the ACS-hosted login page, where I can choose Facebook as the Identity Provider.
Once I click Facebook, I'm sent to the Facebook login page to be authenticated (see Figure 6).

Figure 6 Facebook Login

Because I haven't used my application before, Facebook presents me with the Request Permission dialog for my application, as seen in Figure 7.

Figure 7 Application Permission Request

Not wanting to be left out of the inner circle of those who use such a fantastic app, I quickly click Allow, after which Facebook, ACS and my app exchange information (via browser redirects) and I'm finally redirected to my application. At this point I've simply got an empty page, but it does know who I am, and I have a "Welcome Joseph Fultz" message at the top right of the page.

Facebook Graph API

For my application, I need to fetch the friends that comprise my social network and then subsequently retrieve information about those friends. Facebook has provided the Graph API to enable developers to do such things. It's pretty well-documented and, best of all, it's a flat and simple implementation, making it easy to understand and use. In order to make the requests, I'll need an Access Token. Fortunately, it was passed back in the claims, and with the help of the Windows Identity Foundation SDK, the claims have been placed into the principal identity. The claims look something like this:

identity/claims/nameidentifier
identity/claims/expiration
identity/claims/emailaddress
identity/claims/name
accesscontrolservice/2010/07/claims/identityprovider

What I really want out of this is the last part of the full name (for example, nameidentifier, expiration and so on) and the related value.
So I create the ParseClaims method to tease apart the claims and place them and their values into a hash table for further use, and then call that method in the page load event:

protected void ParseClaims()
{
  string username = default(string);
  username = Page.User.Identity.Name;
  IClaimsPrincipal Principal = (IClaimsPrincipal)Thread.CurrentPrincipal;
  IClaimsIdentity Identity = (IClaimsIdentity)Principal.Identity;
  foreach (Claim claim in Identity.Claims)
  {
    string[] ParsedClaimType = claim.ClaimType.Split('/');
    string ClaimKey = ParsedClaimType[ParsedClaimType.Length - 1];
    _Claims.Add(ClaimKey, claim.Value);
  }
}

I create an FBHelper class where I'll create the methods to access the Facebook information that I desire. To start, I create a method to help make all of the needed requests. I'll make each request using the WebClient object and parse the response with the JavaScriptSerializer:

public static Hashtable MakeFBRequest(string RequestUrl)
{
  Hashtable ResponseValues = default(Hashtable);
  WebClient WC = new WebClient();
  Uri uri = new Uri(String.Format(RequestUrl, fbAccessToken));
  string WCResponse = WC.DownloadString(uri);
  JavaScriptSerializer JSS = new JavaScriptSerializer();
  ResponseValues = JSS.Deserialize<Hashtable>(WCResponse);
  return ResponseValues;
}

As seen in this code snippet, each request will need to have the Access Token that was passed back in the claims.
With my reusable request method in place, I create a method to fetch my friends and parse them into a hash table containing each of their Facebook IDs and names:

public static Hashtable GetFBFriends(string AccessToken)
{
  Hashtable FinalListOfFriends = new Hashtable();
  Hashtable FriendsResponse = MakeFBRequest(_fbFriendsListQuery, AccessToken);
  object[] friends = (object[])FriendsResponse["data"];
  for (int idx = 0; idx < friends.Length; idx++)
  {
    Dictionary<string, object> FriendEntry = (Dictionary<string, object>)friends[idx];
    FinalListOfFriends.Add(FriendEntry["id"], FriendEntry["name"]);
  }
  return FinalListOfFriends;
}

The deserialization of the friends list response results in a nested structure of Hashtable->Hashtable->Dictionary. Thus I have to do a little work to pull the information out and then place it into my own hash table. Once it's in place, I switch to my default.aspx page, add a ListBox, write a little code to grab the friends and bind the result to my new ListBox:

protected void GetFriends()
{
  _Friends = FBHelper.GetFBFriends((string)_Claims["AccessToken"]);
  this.ListBox1.DataSource = _Friends;
  ListBox1.DataTextField = "value";
  ListBox1.DataValueField = "key";
  ListBox1.DataBind();
}

If I run the application at this point, once I'm authenticated I'll see a list of all of my Facebook friends. But wait—there's more! I need to get the available information for any selected friend so that I can use that to show me their hometown on a map.
Flipping back to my FBHelper class, I add a simple method that will take the Access Token and the ID of the selected friend:

public static Hashtable GetFBFriendInfo(string AccessToken, string ID)
{
  Hashtable FriendInfo = MakeFBRequest(String.Format(_fbFriendInfoQuery, ID) + "?access_token={0}", AccessToken);
  return FriendInfo;
}

Note that in both of the Facebook helper methods I created, I reference a constant string that contains the needed Graph API query:

public const string _fbFriendsListQuery = "{0}";
public const string _fbFriendInfoQuery = "{0}/";

With my final Facebook method in place, I'll add a GridView to the page and set it up to bind to a hash table, and then—in the code-behind in the SelectedIndexChanged method for the ListBox—I'll bind it to the Hashtable returned from the GetFBFriendInfo method, as shown in Figure 8.

Figure 8 Adding a GridView

protected void ListBox1_SelectedIndexChanged(object sender, EventArgs e)
{
  Debug.WriteLine(ListBox1.SelectedValue.ToString());
  Hashtable FriendInfo = FBHelper.GetFBFriendInfo((string)_Claims["AccessToken"],
    ListBox1.SelectedValue.ToString());
  GridView1.DataSource = FriendInfo;
  GridView1.DataBind();
  try
  {
    Dictionary<string, object> HometownDict = (Dictionary<string, object>)FriendInfo["hometown"];
    _Hometown = HometownDict["name"].ToString();
  }
  catch (Exception ex)
  {
    _Hometown = ""; // Not Specified
  }
}

Now that I've got my friends and their info coming back from Facebook, I'll move on to showing their hometown on a map.

There's No Place Like Home

For those of my friends who have specified their hometown, I want to be able to click on the hometown name and have the map navigate there. The first step is to add the map to the page. This is a pretty simple task and, to that end, Bing provides a nice interactive SDK that will demonstrate the functionality and then allow you to look at and copy the source. It can be found at bingmapsportal.com/ISDK/AjaxV7.
To the default.aspx page, I add a div to hold the map, like this:

<div id="myMap" style="position:relative; width:400px; height:400px;"></div>

However, to get the map there, I add a script reference and a little bit of script to the SiteMaster page:

<script type="text/javascript" src=" mapcontrol/mapcontrol.ashx?v=6.2"></script>
<script type="text/javascript">
  var map = null;
  function GetMap() {
    map = new VEMap('myMap');
    map.LoadMap();
  }
</script>

With that in place, when I pull up the page I'll be presented with a map at the default position—but I want it to move to my friend's hometown when I select it. During the SelectedIndexChanged event discussed earlier, I also bound a label in the page to the hometown name and added a client-side click event to have the map find a location based on the value of the label:

onclick="map.Find(null, hometown.innerText, null, null, null, null, true, null, true); map.SetZoomLevel(6);"

In the map.Find call, most of the trailing parameters could be left off if so desired. The reference for the Find method can be found at msdn.microsoft.com/library/bb429645. That's all that's needed to show and interact with the map in this simple example. Now I'm ready to run it in all of its glory. If I've configured the identityModel properly to work with my localhost as mentioned earlier, I can press F5 and run it locally in debug. So, I hit F5, see a browser window pop up, and I'm presented with my login options. I choose Facebook and I'm taken to the login page shown in Figure 6. Once logged in, I'm directed back to my default.aspx page, which now displays my friends and a default map like that in Figure 9.

Figure 9 Demo Homepage

Next, I'll browse through my friends and click one. I'll get the information available to me based on his security settings and the application permissions I requested when I set up the Identity Provider, as seen in Figure 2.
Next, I'll click on the hometown name located above the map and the map will move to center on the hometown, as seen in Figure 10.

Figure 10 Hometown in Bing Maps

Final Thoughts

I hope I've clearly articulated how to bring together several aspects of Azure, Bing Maps and Facebook—and that I've shown how easy it is. Using ACS, I was able to create a sample application from a composite of cloud technology. With a little more work, it's just as easy to tie in your own identity service where it's needed. The beauty in this federation of identity is that using Azure enables you to develop against and incorporate services from other vendors and other platforms—versus limiting you to a single choice of provider and that provider's services, or having to figure out a low-fidelity integration method. There's power in Microsoft Azure, and part of that power is how easily it can be mashed together with other cloud services.

Joseph Fultz is an architect at the Microsoft Technology Center in Dallas, where he works with both enterprise customers and ISVs designing and prototyping software solutions to meet business and market demands. He has spoken at events such as Tech·Ed and similar internal training events.

Thanks to the following technical expert for reviewing this article: Steve Linehan
https://docs.microsoft.com/en-us/archive/msdn-magazine/2011/march/msdn-magazine-forecast-cloudy-cloud-services-mashup
Unfortunately, in C++ there is really no good way to locate lines in your file without reading the entire file character at a time. So, if we have to read a single line from a file, we know we are at some point going to have to be inefficient... so let's only be inefficient once. The following code traverses the entire file once, a char at a time, but saves the positions of all '\n' newline characters... so any queries for specific lines from the file can be made efficiently:

#include <iostream>
#include <cctype>
#include <cstdlib>   // for exit()
#include <fstream>
#include <string>
#include <vector>
using namespace std;

int main()
{
    int linecount = 0;
    int linetoget = 0;
    int pos = 0;
    char c = '\0';
    char again = '\0';
    string line_str;
    vector<int> linepos;
    ifstream infile;

    // Be sure to open in binary mode when messing about with file pointers
    infile.open("C:\\Users\\Dave\\Documents\\resume.txt", ios::binary);
    if(!infile.is_open())
    {
        cout << "\a\nError opening file!";
        cin.get();
        exit(1);
    }

    // Set first line position
    linepos.push_back(0);
    linecount++;

    // Populate line positions from the rest of the file.
    // This part sucks, but we only have to do it once.
    do{
        infile.get(c);
        if(c == '\n')
        {
            pos = infile.tellg();
            linepos.push_back(pos);
            linecount++;
        }
    }while(infile.good());

    // Reset error flags from the failing good() condition
    infile.clear();

    do{
        do{
            cout << "\nEnter line of file to get: ";
            cin >> linetoget;
            if(linetoget < 1 || linetoget > linecount)
            {
                cout << "\a\nOut of Range. Please select line number 1 to " << linecount << endl;
            }
        }while(linetoget < 1 || linetoget > linecount);

        infile.seekg(linepos[linetoget-1], ios::beg);
        getline(infile, line_str);

        // Clear fail bit flag if the user selects to read the last line (reads and sets eof)
        infile.clear();

        cout << endl << line_str << endl;
        cout << "\nWould you like to get another line? ";
        cin >> again;
    }while(toupper(again) == 'Y');

    infile.close();
    return 0;
}

We know we have to be very inefficient by reading the file character by character...
but at least by saving all the newline positions, it's something we only have to do once.
https://www.daniweb.com/programming/software-development/threads/242275/how-to-read-a-specific-line-from-file
Arduino WebServer Controlled LED

Introduction:

Step 1: Configure Web Server

First off we need to configure the web server. This is done by including the Ethernet libraries, setting the MAC address, IP address and server port. Then, in setup(), you start the server and define the pin you want to plug the LED into.

#include <SPI.h>
#include <Ethernet.h>

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
IPAddress ip(192,168,1,177);
EthernetServer server(80);

void setup() {
  Serial.begin(9600);
  pinMode(8, OUTPUT);
  Ethernet.begin(mac, ip);
  server.begin();
}

Step 2: Create HTML Form

In loop() we define the client, then check that the web server is connected and available and get it to display some HTML. The first block checks the status of pin 8 and prints HTML to tell us if the LED is currently turned on or off. Then we use an HTML form to make some radio buttons and a submit button to select the status, on or off.

if (digitalRead(8)){
  client.print(" LED is <font color='green'>ON</font>");
}else{
  client.print(" LED is <font color='red'>OFF</font>");
}
client.println("<br />");
client.print("<FORM action=\"\" >");
client.print("<P> <INPUT type=\"radio\" name=\"status\" value=\"1\">ON");
client.print("<P> <INPUT type=\"radio\" name=\"status\" value=\"0\">OFF");
client.print("<P> <INPUT type=\"submit\" value=\"Submit\"> </FORM>");
break;
}

Step 3: Read LED Status and Turn It On or Off

Now all that is left is to read the input from the HTML form and turn the LED on or off. When you select one of the radio buttons and click submit, the form adds status=1 or status=0 to the end of the URL. We can now read that value from the GET request and run it through an if statement to set the digital write on pin 8 to either HIGH or LOW (on or off).
if (c == '\n') {
  currentLineIsBlank = true;
  buffer = "";
}
else if (c == '\r') {
  if(buffer.indexOf("GET /?status=1") >= 0)
    digitalWrite(8, HIGH);
  if(buffer.indexOf("GET /?status=0") >= 0)
    digitalWrite(8, LOW);
}
else {

For more information head to

are u able to do the same thing if you are not using the local wifi?

i am getting an error about stray '\342' in program. if(buffer.indexOf(“GET /?status=0″)>=0) Can anyone explain why?

you need to change the quotation marks: type out those quotation marks yourself instead of just copy and paste. so change those " in 'if(buffer.indexOf(“GET /?status=0″)>=0)' to " that you type yourself.

i am getting an error about a stray "/" in this line.. if(buffer.indexOf(“GET /?status=0″)>=0) Can anyone explain why?

try to rewrite the quotation marks if you have copy pasted the code.

Nicely done. How hard would it be to use the buttons to trigger other functions on the arduino?

what does the buffer thingy do?

Wonder if you have made any progress on web controlled relays? Thanks!

i think buffer = ""; >> clears the string in buffer

i have a problem with this line: buffer = "";

very cool, do you plan on expanding this platform?

Hi there, I sure do. Next I will be setting up a web form to control relays. I will post it once i am finished.
http://www.instructables.com/id/Arduino-WebServer-controlled-LED/
Flutter Drawer - A sliding widget that generally contains important links of the application.

The Flutter Drawer is a side-screen widget that stays hidden in your mobile app until opened. Generally it occupies about half of the screen when displayed.

Flutter Drawer Widget: Flutter uses the drawer widget to create a slidable left menu. We can customize the menu layout using its properties. Nowadays tabs consume more and more space; that's the reason people are becoming more familiar with the drawer, and it has also become the primary navigation method. In Flutter, we define a drawer inside a Scaffold, therefore before adding a navigation drawer we need to define a Scaffold. When we create a sample Flutter app, it comes with the default Scaffold.

GetWidget Drawer Widget for Flutter: It provides a Drawer widget and shows how to implement it in a Flutter app to build an awesome drawer. The GetWidget Drawer is not only a simple sidebar; we can customize it to our needs.

Basic GFDrawer: A basic GFDrawer slides in and out and has a container body. The appearance of the GFDrawer can be customized using the GFDrawer properties. We can use the code below to build the Flutter Drawer widget.

import 'package:getwidget/getwidget.dart';

GFDrawer(
  child: ListView(
    padding: EdgeInsets.zero,
    children: <Widget>[
      ListTile(
        title: Text('Item 1'),
        onTap: null,
      ),
      ListTile(
        title: Text('Item 2'),
        onTap: null,
      ),
    ],
  ),
),

GFDrawer Header: The header is a simple widget in which we can display any component and modify it as needed. The code below shows a basic GFDrawerHeader inside a GFDrawer, which helps you build a Flutter Drawer Header widget.
import 'package:getwidget/getwidget.dart';

GFDrawer(
  child: ListView(
    padding: EdgeInsets.zero,
    children: <Widget>[
      GFDrawerHeader(
        currentAccountPicture: GFAvatar(
          radius: 80.0,
          backgroundImage: NetworkImage(""),
        ),
        otherAccountsPictures: <Widget>[
          Image(
            image: NetworkImage(""),
            fit: BoxFit.cover,
          ),
          GFAvatar(
            child: Text("ab"),
          )
        ],
        child: Column(
          mainAxisAlignment: MainAxisAlignment.start,
          crossAxisAlignment: CrossAxisAlignment.start,
          children: <Widget>[
            Text('user name'),
            Text('[email protected]'),
          ],
        ),
      ),
      ListTile(
        title: Text('Item 1'),
        onTap: null,
      ),
      ListTile(
        title: Text('Item 2'),
        onTap: null,
      ),
    ],
  ),
),

How to Start: Here is the guide to start developing the GFDrawer widget with the GetWidget UI library. The getting-started guide will walk you through it; keep playing with the pre-built UI components.

Conclusion: Here we discussed what GFDrawer is and how we can use it in our Flutter app through the GetWidget Drawer component.

FAQ's:

Q. What are the customizable properties used?
Ans. The customizable properties include the color, gradient, background image, color filter and elevation of the Drawer component, so the component is fully flexible. You can check out all custom properties on the GetWidget Drawer Widget Properties page.

Q. Can we resize the width of GFDrawer?
Ans. Yes, we can set the drawer's size as needed.
https://www.getwidget.dev/blog/flutter-drawer-widget/
The most common way to test whether a large number is prime is the Miller-Rabin test. If the test says a number is composite, it's definitely composite. Otherwise the number is very likely, but not certain, to be prime. A pseudoprime is a composite number that slips past the Miller-Rabin test. (Actually, a strong pseudoprime. More on that below.)

Miller-Rabin test

The Miller-Rabin test is actually a sequence of tests, one for each prime number. First you run the test associated with 2, then the test associated with 3, then the one associated with 5, etc. If we knew the smallest numbers for which these tests fail, then for smaller numbers we know for certain that they're prime if they pass. In other words, we can turn the Miller-Rabin test for probable primes into a test for provable primes.

Lower bound on failure

A recent result by Yupeng Jiang and Yingpu Deng finds the smallest number for which the Miller-Rabin test fails for the first nine primes. This number is

N = 3,825,123,056,546,413,051

or more than 3.8 quintillion. So if a number passes the first nine Miller-Rabin tests, and it's less than N, then it's prime. Not just a probable prime, but definitely prime. For a number n < N, this will be more efficient than running previously known deterministic primality tests on n.

Python implementation

Let's play with this in Python. The SymPy library implements the Miller-Rabin test in a function mr. The following shows that N is composite, and that it is a false positive for the first nine Miller-Rabin tests.

from sympy.ntheory.primetest import mr

N = 3825123056546413051
assert(N == 149491*747451*34233211)
ps = [2, 3, 5, 7, 11, 13, 17, 19, 23]
print( mr(N, ps) )

This doesn't prove that N is the smallest number with these properties; we need the proof of Jiang and Deng for that. But assuming their result is right, here's an efficient deterministic primality test that works for all n less than N.
def is_prime(n):
    N = 3825123056546413051
    assert(n < N)
    ps = [2, 3, 5, 7, 11, 13, 17, 19, 23]
    return mr(n, ps)

Jiang and Deng assert that N is also the smallest composite number to slip by the first 10 and 11 Miller-Rabin tests. We can show that N is indeed a strong pseudoprime for the 10th and 11th primes, but not for the 12th prime.

print( mr(N, [29, 31]) )
print( mr(N, [37]) )

This code prints True for the first test and False for the second. That is, N is a strong pseudoprime for bases 29 and 31, but not for 37.

Pseudoprimes and strong pseudoprimes

Fermat's little theorem says that if n is prime, then a^(n-1) = 1 mod n for all 0 < a < n. This gives a necessary but not sufficient test for primality. A (Fermat) pseudoprime for base a is a composite number n such that the above holds, an example of where the test is not sufficient.

The Miller-Rabin test refines Fermat's test by looking at additional necessary conditions for a number being prime. Often a composite number will fail one of these conditions, but not always. The composite numbers that slip by are called strong pseudoprimes or sometimes Miller-Rabin pseudoprimes.

Miller and Rabin's extra testing starts by factoring n-1 into 2^s d where d is odd. If n is prime, then for all 0 < a < n either a^d = 1 mod n or a^(2^k d) = -1 mod n for all k satisfying 0 ≤ k < s. If one of these two conditions holds for a particular a, then n passes the Miller-Rabin test for the base a.

It wouldn't be hard to write your own implementation of the Miller-Rabin test. You'd need a way to work with large integers and to compute modular exponents, both of which are included in Python without having to use SymPy.

Example

561 is a pseudoprime for base 2. In fact, 561 is a pseudoprime for every base relatively prime to 561, i.e. it's a Carmichael number. But it is not a strong pseudoprime for 2 because 560 = 16*35, so d = 35 and 2^35 = 263 mod 561, which is not congruent to 1 or to -1.
In Python,

    >>> pow(2, 560, 561)
    1
    >>> pow(2, 35, 561)
    263

2 thoughts on "Testing for primes less than a quintillion"

Jiang & Deng's paper seems to be behind a paywall, but a preprint is available at arXiv:

Your explanation of the Miller-Rabin test says that we need a^{2^k d} = -1 for *all* k while actually it requires only a^{2^k d} = -1 for *some* k. In particular, for a=2 and n=561 the problem is that 2^{4*35} ≡ 67 (mod 561) while 2^{8*35} ≡ 1 (mod 561).
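As the post notes, a from-scratch Miller-Rabin test only needs big integers and modular exponentiation, both built into Python. Here is a minimal sketch of the standard algorithm (my own illustration, not SymPy's implementation; the function name is invented):

```python
def miller_rabin(n, a):
    """Return True if n passes the Miller-Rabin test for base a
    (i.e. n is a probable prime), False if a witnesses n composite."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # Factor n - 1 as 2**s * d with d odd.
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    x = pow(a, d, n)            # a**d mod n
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):      # check a**(2**k * d) for k = 1 .. s-1
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

print(miller_rabin(561, 2))     # False: 561 is not a strong pseudoprime base 2
print(all(miller_rabin(3825123056546413051, a)
          for a in [2, 3, 5, 7, 11, 13, 17, 19, 23]))  # True: N fools the first nine tests
print(miller_rabin(3825123056546413051, 37))           # False: base 37 exposes N
```

The three printed lines reproduce the claims in the post: 561 fails the strong test for base 2, while N passes the first nine bases but fails base 37.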
https://www.johndcook.com/blog/2019/02/25/prime-test/comment-page-1/
oc|oClon0/lat0/lonp/latp/scale[+v] or Oc|OClon0/lat0/lonp/latp/width[+v]

The projection is set with o or O. The central meridian is set by lon0/lat0. The projection pole is set by lonp/latp in option three. Align the y-axis with the optional +v. The figure size is set with scale or width.

import pygmt

fig = pygmt.Figure()
# Using the origin projection pole
fig.coast(
    projection="Oc280/25.5/22/69/12c",
    # Set bottom left and top right coordinates of the figure with "+r"
    region="270/20/305/25+r",
    frame="afg",
    land="gray",
    shorelines="1/thin",
    water="lightblue",
)
fig.show()

Total running time of the script: ( 0 minutes 1.050 seconds)

Gallery generated by Sphinx-Gallery
https://www.pygmt.org/v0.5.0/projections/cyl/cyl_oblique_mercator_3.html
SRE GetMember

From Nemerle Homepage

There is a quite severe problem when generating code for a language with type inference using the System.Reflection.Emit API. For example, consider:

    def dict = Dictionary ();
    def count = dict.Count;
    dict.Add ("foo", 42);

This piece of Nemerle code creates a generic dictionary, reads the number of elements in it, and finally adds an entry. The same code in C# looks like this:

    Dictionary<string,int> dict = new Dictionary<string,int> ();
    int count = dict.Count;
    dict.Add ("foo", 42);

The key difference is that you need to specify type parameters to the Dictionary constructor. Therefore the type of the dict local variable is immediately known to the compiler. The Nemerle compiler infers this type (it is exactly the same type, so the C# example can be considered redundant to some extent here). However, the type can be inferred only after the call to Add. The compiler therefore works on the generic, uninstantiated type Dictionary<K,V>. It then looks up the Count and Add members in it and handles references to the generic K and V parameters itself.

The problem shows up when we want to generate IL for this code using S.R.E. We already know the values of the K and V parameters of Dictionary, so we can create the instantiated type. However, next we need to look up the members Count (or exactly get_Count) and Add again, because the members from the uninstantiated type cannot be reused. Because member lookup is a very involved process, we just try to look up the same member in the instantiated type using signature comparison. This is not very easy, as string needs to be substituted for K, and int for V. There are also several more complications with member lookup and equality testing in TypeBuilders.

Note how this problem doesn't occur in the C# code -- the compiler can use the instantiated type for the member lookup in the first place.

TypeBuilder supports special methods that can be used to retrieve corresponding members from instantiated types, but the methods do not work on RuntimeTypes.

We would therefore propose to either make these methods work on all types, not only on TypeBuilders, or to add some special GetMember(MemberInfo) overload to System.Type with the same function. This feature doesn't seem hard to implement in the runtime, but it would make a compiler writer's work much easier. This problem is probably going to affect any language with even a little more type inference than C#.

Related bug reports
http://nemerle.org/SRE_GetMember
Bugzilla – Bug 894

isPositive returns false for SPD matrix after compute() on an already initialized LDLT object

Last modified: 2014-10-20 10:53:58 UTC

If you initialize an LDLT object with a non positive definite matrix and then call compute() with a positive definite matrix argument, LDLT keeps returning isPositive() == false. Example code:

    #include <Eigen/Core>
    #include <Eigen/Cholesky>
    #include <iostream>

    int main()
    {
        Eigen::MatrixXd M = Eigen::MatrixXd::Random(10, 10);
        Eigen::LDLT<Eigen::MatrixXd> chol(M);
        std::cout << chol.isPositive() << std::endl;
        M = (M * M.transpose()).eval();
        M = 0.5 * (M + M.transpose()).eval();
        chol.compute(M);
        std::cout << chol.isPositive() << std::endl;
        M = Eigen::MatrixXd::Random(10, 10);
        return 0;
    }

Output:

    0
    0

Expected Output:

    0
    1

The problem appears both on Linux and Mac, for version 3.2.2 and the latest Mercurial revision; I haven't tested other configurations. If I compile against the Eigen 3.2.0 shipped with Ubuntu 14.04, the example correctly displays 0 1, so I suppose this is a regression.

PS: first bug report ever, so I apologize if it's not clear/proper

Thank you for the report, fixed:

Changeset: 3e1f580fdb6c
User: ggael
Date: 2014-10-20 08:48:40+00:00
Summary: Fix bug 894: the sign of LDLT was not re-initialized at each call of compute()
Branch: 3.2

btw, your bug report is very clear, a self-compilable example is a must-have to get bugs fixed quickly and save us time!
http://eigen.tuxfamily.org/bz/show_bug.cgi?id=894
I have created (and received help creating) a program that will allow a user to answer questions and get a response that they need to fix their phone. This is meant to be a troubleshooting code. However, I want to adapt this to allow the code to ask the user a question, such as "Describe your issue", and then the system will pick out keywords that they have answered with. For example, if they answered with "the display and software is messing up", then the system would recognise the words "display" and "software". Any ideas on how I would do this?

Thanks, Peter

```python
questions = {"Device": "What is your device\n",  # At the start of the code, questions will begin, asking what is wrong with the user's phone.
             "Display": "Is your display working?\n",
             "Dropped": "Has your device been dropped?\n",
             "Start": "Is your device turning on?\n",
             "Update": "Have you installed the latest update on your device?\n",
             "Charge": "Is your device having battery issues?\n",
             "Water": "Has your device been exposed to water?\n",
             "Jailbreak": "Is your device jailbroken or rooted?\n",
             "Camera": "Is your device's camera broken?\n",
             "Slow": "Is your device slowing down?\n"
             }

# At this point the questions end, and the program will start to display
# solutions to the problems the user has stated they have had.
responses = {"Device": "Your device is %s.\n",
             "Display": {"no": "You entered no to display.\n"
                               "Try charging the phone for a couple hours and "
                               "let it charge; if this doesn't work contact your "
                               "nearest supplier.\n",
                         "yes": None},
             "Dropped": {"yes": "You entered yes to the phone being dropped. If it is dented take it to your supplier.\n",
                         "no": None},
             "Start": {"no": "You entered the device isn't turning on. You may need to let the device charge for a couple hours; if this doesn't work, contact your nearest supplier.\n",
                       "yes": None},
             "Update": {"no": "You haven't updated to the latest software on your device. Please do that soon to fix your issue.\n",
                        "yes": None},
             "Charge": {"yes": "You entered yes to having battery issues; you may need a new battery for your device to make it work sufficiently. Best advice would be to let the battery die down and recharge it later.\n",
                        "no": None},
             "Water": {"yes": "You entered you have exposed your device to water. A common solution for water damage is to leave it in rice overnight. This will soak up any excess water in your device.\n",
                       "no": None},
             "Jailbreak": {"yes": "You have jailbroken/rooted your phone. This can commonly mess up operating systems. Please restore your device to factory settings.\n",
                           "no": None},
             "Camera": {"yes": "You have entered that your camera is not working. No common solutions for this issue. Please take it to the nearest supplier.\n",
                        "no": None},
             "Slow": {"yes": "You entered that your device is slowing down. Best advice would consist of deleting many useless things on the device, and restoring if that doesn't work.\n",
                      "no": None},
             }


def user_input(question, override=False):  # This will help the user answer the questions if they type them in wrong.
    """
    Sanitize user input
    :param question: Question to be asked
    :param override: Override check for yes/no response
    :return: response to question
    """
    while question:
        try:
            response = str(input(question))
            if not override:
                assert response in ['yes', 'no'], "Please enter your answer in the format of [yes, no].\n"
            question = False
            return response
        except AssertionError as error:
            print(error)


def main():
    """
    Loops through all available questions and stores the response, then loops
    through all answers and displays the corresponding information.
    :return: None
    """
    answers = {
        # This demonstrates iteration: it loops over the questions, overriding
        # the yes/no check only for the "Device" question.
        topic: user_input(question, override=True if topic == "Device" else False)
        for topic, question in questions.items()
    }
    for k, i in answers.items():
        if k == "Device":
            print(responses[k] % i)
        else:
            print(responses[k][i])
    print("Thank you for using our service.")


main()
```
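On Peter's keyword question: one simple approach is to normalise the free-text answer and keep only the words that appear in a set of known topic keywords. This is just a sketch; the keyword set below is invented for illustration (in the real program it would likely be derived from the topics in the questions dictionary):

```python
import string

# Hypothetical keyword list, for illustration only.
KEYWORDS = {"display", "software", "battery", "camera", "water", "charge", "slow"}

def extract_keywords(description, keywords=KEYWORDS):
    """Return the known keywords mentioned in a free-text description."""
    # Lowercase the text, strip punctuation, then split into words.
    cleaned = description.lower().translate(str.maketrans("", "", string.punctuation))
    return [word for word in cleaned.split() if word in keywords]

print(extract_keywords("The display and software is messing up!"))
# ['display', 'software']
```

Each extracted keyword could then be used to look up the matching entry in the responses dictionary instead of asking every question in turn.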
https://discuss.codecademy.com/t/keyword-searching-help/60619
Description of problem:

When reading a core file, the 'disassemble' command doesn't work, while the 'x' command works fine. If you run the code inside gdb you get the correct disass output.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Take a RHL9 install and compile:

    #include <string.h>

    int main (void)
    {
        stpcpy (NULL, "dd");
        return 0;
    }

Nothing special about stpcpy, it's a function gcc has no inlines for. Now run it and create a core (ulimit -c unlimited is your friend). Finally start gdb with the core file and run "disass". The result:

    (gdb) disass
    Dump of assembler code for function stpcpy:
    0x4207be30 <stpcpy+0>: Cannot access memory at address 0x4207be30

Actual results:

Expected results:

Additional info:

This was a problem in the disassembly command:

    --- gdb+dejagnu-20021129/gdb/disasm.c.1 Mon Mar 31 22:33:18 2003
    +++ gdb+dejagnu-20021129/gdb/disasm.c Mon Mar 31 22:35:53 2003
    @@ -360,7 +360,8 @@ gdb_disassembly (struct ui_out *uiout,
       if (strcmp (target_shortname, "child") == 0
           || strcmp (target_shortname, "procfs") == 0
           || strcmp (target_shortname, "vxprocess") == 0
    -      || strstr (target_shortname, "-threads") != NULL)
    +      || strcmp (target_shortname, "core") == 0
    +      || strstr (target_shortname, "-thread") != NULL)
         gdb_disassemble_from_exec = 0; /* It's a child process, read inferior mem */
       else
         gdb_disassemble_from_exec = 1; /* It's remote, read the exec file */

Fixed now in gdb-5.3post-0.20021129.29
https://bugzilla.redhat.com/show_bug.cgi?id=87677
Red Hat Bugzilla – Bug 52047

RFE: Install dies on machines i810 + 2nd video card

Last modified: 2007-04-18 12:36:02 EDT

Hi guys, I like the fix you had for 7.2, but figured I should put in another type of solution as a RFE for 8.0. Machine is:

Dell Optiplex GX100
128 Mb Ram
Voodoo 3 3000 w/16Mb
Creative Labs SB Live! EMU10000
Adaptec 2940 CDrom writer on SCSI chain
Built in i810 video *always on*
3Com Corporation 3c905C-TX [Fast Etherlink]

The first problem is that the system seems to get confused with the initial frame buffer mode and goes into an ultra lowres 320x200. The previous workaround was to do a text install, which of course dropped off functionality. This time the install worked by doing a nofb as the installer tried the VGA16 XFree86.. and I limped along in that (yeah). I figured that since it saw I had two video cards.. would it be possible for it to do something like this on an expert install:

Detected 2 video cards
Which card do you wish to be primary for install?
[ ] Voodoo
[ ] i810
[ ] vga16

The framebuffer problem.. I don't know what to do with. I don't know if it is a Voodoo problem or an i810/Voodoo conflict problem.

How does text mode drop off functionality? It's IDENTICAL to GUI mode now. We do not support motherboards currently where the i810 cannot be disabled if another card is present. You will have to manually configure X for the card in use. We may reconsider for future releases. BTW, what does kudzu say when you:

    cd /usr/lib/python1.5/site-packages
    $ python
    import kudzu
    print kudzu.probe(kudzu.CLASS_VIDEO, kudzu.BUS_UNSPEC, kudzu.PROBE_ALL)

I figured it would be several releases, which is why the RFE was put on it..

Here is what kudzu says:

    >>> print kudzu.probe(kudzu.CLASS_VIDEO, kudzu.BUS_UNSPEC, kudzu.PROBE_ALL)
    [Desc: Intel Corporation|82810-DC100 CGC [Chipset Graphics Controller]
    Driver: Card:Intel 810
    Device: None
    , Desc: 3Dfx Interactive, Inc.|Voodoo 3
    Driver: Card:Voodoo3 (generic)
    Device: None
    ]

I'll try to pick this up for the next release.

*** Bug 16123 has been marked as a duplicate of this bug. ***

Deferring to future release.

Changed to 'CLOSED' state since 'RESOLVED' has been deprecated.
https://bugzilla.redhat.com/show_bug.cgi?id=52047
I asked when this was occurring and put on my detective's hat. In this blog, I will walk through how I diagnosed and fixed the bug.

Diagnosing the Bug

I read through my service's log files around the time the 500 errors started happening. They quickly showed a pretty serious problem: a little before midnight on a Saturday my service would start throwing errors. At first there were a variety of errors occurring, all SQLExceptions, but eventually the root cause became the same:

    org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
        at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80)

This went on for several hours until early the following morning, when the service was restarted and went back to normal. Checking with the cave trolls DBAs, I found the database I was connecting to went down for maintenance. The exact details escape me, but I believe it was a roughly 30-minute window that the database was down. So, clearly, my service had an issue re-connecting to a database once the database recovered from an outage.

Fixing the Bug the Wrong Way

The most straightforward way of fixing this bug (and one which I have often gone to in the past) would have been to Google "recovering from database outage," which would likely lead me to a Stack Overflow thread that answers my question. I would then have "copied and pasted" in the provided answer and pushed the code to be tested. If production was being severely affected by a bug, this approach might be necessary in the short term. That said, time should be set aside in the immediate future to cover the change with an automated test.

Fixing the Bug the Right Way

As is often the case, doing things the "right way" means a significant front-loaded time investment, and this adage is definitely true here.
The return on investment, however, is less time later spent fixing bugs, increased confidence in the correctness of the code, and, in addition, tests can be an important form of documentation as to how the code should behave in a given scenario. While this specific test case is a bit esoteric, it's an important factor to keep in mind when designing and writing tests, be they unit or integration: give tests good names, make sure test code is readable, etc.

Solution 1: Mock Everything

My first crack at writing a test for this issue was to try to "mock everything." While Mockito and other mocking frameworks are quite powerful and are getting ever easier to use, after mulling over this solution I quickly came to the conclusion that I would never have confidence I was testing anything beyond the mocks I had written. Getting a "green" result would not increase my confidence in the correctness of my code, the whole point of writing automated tests in the first place! On to another approach.

Solution 2: Use an In-Memory Database

Using an in-memory database was my next attempt at writing this test. I'm a pretty big proponent of H2; I've used it extensively in the past and was hoping it might address my needs here once again. I probably spent more time here than I should have. While ultimately this approach doesn't pan out, the time spent isn't entirely wasted: I did learn a decent bit more about H2. One of the advantages of doing things the "right way" (though often painful in the moment) is that you learn a lot. The knowledge gained might not be useful at the time, but could prove valuable later.

The Advantages of Using an In-Memory Database

Like I said, I probably spent more time here than I should have, but I did have my reasons for wanting this solution to work. H2, and other in-memory databases, had a couple of very desirable traits:

- Speed: Starting and stopping H2 is quite fast, sub-second.
So while a little slower than using mocks, my tests would still be plenty fast.

- Portability: H2 can run entirely from an imported jar, so other developers can just pull down my code and run all the tests without performing any additional steps.

Additionally, my eventual solution had a couple of non-trivial disadvantages, which I will cover as part of that solution below.

Writing the Test

Note that to this point I still hadn't written a single line of production code. A central principle of TDD is to write the test first and the production code later. This methodology, along with ensuring a high level of test coverage, also encourages the developer to only make changes that are necessary. This goes back to the goal of increasing confidence in the correctness of your code. Below is the initial test case I built to test my PROD issue:

    @RunWith(SpringRunner.class)
    @SpringBootTest(classes = DataSourceConfig.class, properties = {
            "datasource.driver=org.h2.Driver",
            "datasource.url=jdbc:h2:mem:;MODE=ORACLE",
            "datasource.user=test",
            "datasource.password=test" })
    public class ITDatabaseFailureAndRecovery {

        @Autowired
        private DataSource dataSource;

        @Test
        public void test() throws SQLException {
            Connection conn = DataSourceUtils.getConnection(dataSource);
            ResultSet rs = conn.createStatement().executeQuery("SELECT 1 FROM dual");
            assertTrue(rs.next());
            assertEquals(1, rs.getLong(1));
            conn.createStatement().execute("SHUTDOWN");
            DataSourceUtils.releaseConnection(conn, dataSource);
            conn = DataSourceUtils.getConnection(dataSource);
            rs = conn.createStatement().executeQuery("SELECT 1 FROM dual");
            assertTrue(rs.next());
            assertEquals(1, rs.getLong(1));
        }
    }

Initially I felt I was on the right path with this solution. There is the question of how I would start the H2 server back up (one problem at a time!).
But when I run the test, it fails with an error analogous to what my service experienced in PROD:

    org.h2.jdbc.JdbcSQLException: Database is already closed (to disable automatic closing at VM shutdown, add ";DB_CLOSE_ON_EXIT=FALSE" to the db URL) [90121-192]

However, if I modify my test case and simply attempt a second connection to the database:

    conn = DataSourceUtils.getConnection(dataSource);

the exception goes away and my test passes without me making any changes to my production code. Something isn't right here...

Why This Solution Didn't Work

So using H2 won't work. I actually spent quite a bit more time trying to get H2 to work than the above would suggest. Troubleshooting attempts included connecting to a file-based H2 server instance instead of just an in-memory one, and a remote H2 server; I even stumbled upon the H2 Server class, which would have addressed the server shutdown/startup issue from earlier. Obviously, none of those attempts worked.

The fundamental problem with H2, at least for this test case, is that attempting to connect to a database will cause that database to start up if it currently isn't running. There is a bit of a delay, as my initial test case shows, but obviously this poses a fundamental problem. In PROD, when my service attempts to connect to a database, it does not cause the database to start up (no matter how many times I attempt connecting to it). My service's logs can certainly attest to this fact. So on to another approach.

Solution 3: Connect to a Local Database

Mocking everything won't work. Using an in-memory database didn't pan out either. It looks like the only way I will be able to properly reproduce the scenario my service was experiencing in PROD is by connecting to a more formal database implementation. Bringing down a shared development database is out of the question, so this database implementation needs to run locally.
The Problems With This Solution

Everything before this should give a pretty good indication that I really wanted to avoid going down this path. There are some good reasons for my reticence:

- Decreased portability: If another developer wanted to run this test she would need to download and install a database on her local machine. She would also need to make sure her configuration details match what the test is expecting. This is a time-consuming task and would lead to at least some amount of "out of band" knowledge.

- Slower: Overall my test still isn't too slow, but it does take several seconds to start up, shut down, and then start up again even against a local database. While a few seconds doesn't sound like much, time can add up with enough tests. This is less of a concern, as integration tests are allowed to take longer (more on that later), but the faster the integration tests, the more often they can be run.

- Organizational wrangling: Running this test on the build server means I would now need to work with my already-overburdened DevOps team to set up a database on the build box. Even if the ops team wasn't overburdened, I just like to avoid this if possible, as it's just one more step.

- Licensing: In my code example, I am using MySQL as my test database implementation. However, for my client, I was connecting to an Oracle database. Oracle does offer Oracle Express Edition (XE) for free; however, it does come with stipulations. One of those stipulations is that two instances of Oracle XE cannot be running at the same time. The specific case of Oracle XE aside, licensing can become an issue when it comes to connecting to specific product offerings; it's something to keep in mind.

Success! … Finally

Originally this article was a good bit longer, which also gave a better impression of all the blood, sweat, and tears that went into getting to this point. Ultimately such information isn't particularly useful to readers, even if cathartic for the author to write about.
So, without further ado, a test that accurately reproduces the scenario my service was experiencing in PROD:

    @Test
    public void testServiceRecoveryFromDatabaseOutage() throws SQLException, InterruptedException, IOException {
        Connection conn = DataSourceUtils.getConnection(datasource);
        assertTrue(conn.createStatement().execute("SELECT 1"));
        DataSourceUtils.releaseConnection(conn, datasource);

        LOGGER.debug("STOPPING DB");
        Runtime.getRuntime().exec("/usr/local/mysql/support-files/mysql.server stop").waitFor();
        LOGGER.debug("DB STOPPED");

        try {
            conn = DataSourceUtils.getConnection(datasource);
            conn.createStatement().execute("SELECT 1");
            fail("Database is down at this point, call should fail");
        } catch (Exception e) {
            LOGGER.debug("EXPECTED CONNECTION FAILURE");
        }

        LOGGER.debug("STARTING DB");
        Runtime.getRuntime().exec("/usr/local/mysql/support-files/mysql.server start").waitFor();
        LOGGER.debug("DB STARTED");

        conn = DataSourceUtils.getConnection(datasource);
        assertTrue(conn.createStatement().execute("SELECT 1"));
        DataSourceUtils.releaseConnection(conn, datasource);
    }

Full code here:

The Fix

So I have my test case. Now it's time to write production code to get my test showing green. Ultimately I got the answer from a friend, but I likely would have stumbled upon it with enough Googling. Initially the DataSource I set up in my service's configuration effectively looked like this:

    "));
        return dataSource;
    }

The underlying problem my service was experiencing is that when a connection from the DataSource's connection pool failed to connect to the database, it became "bad." The next problem was that my DataSource implementation would not drop these "bad" connections from the connection pool. It just kept trying to use them over and over. The fix for this is luckily pretty simple. I needed to instruct my DataSource to test a connection when the DataSource retrieved it from the connection pool.
If this test failed, the connection would be dropped from the pool and a new one attempted. I also needed to provide the DataSource with a query it could use to test a connection. Finally (not strictly necessary, but useful for testing), by default my DataSource implementation would only test a connection every 30 seconds, and it would be nice for my test to run in less than 30 seconds. Ultimately the length of this period isn't really meaningful, so I added a validation interval that is provided by a property file. Here is what my updated DataSource looks like:

    "));
        dataSource.setValidationQuery("SELECT 1");
        dataSource.setTestOnBorrow(true);
        dataSource.setValidationInterval(env.getRequiredProperty("datasource.validation.interval"));
        return dataSource;
    }

One final note for writing integration tests. Initially I created a test configuration file that I used to configure the DataSource in my test. However, this is incorrect. The problem is that if someone were to remove my fix from the production configuration file but leave it in the test configuration file, my test would still pass while my actual production code would once again be vulnerable to the problem I spent all this time fixing! This is a mistake that would be easy to imagine happening. So be sure to use your actual production configuration files when writing integration tests.

Automating the Test

So the end is almost in sight. I have a test case that accurately reproduces the scenario I am experiencing in PROD. I have a fix that takes my test from failing to passing. However, the point of all this work wasn't just to have confidence that my fix works for the next release, but for all future releases. Maven users: hopefully you are already familiar with the surefire plugin. Or, at least, hopefully your DevOps team has already set up your parent pom so that when a project is built on your build server, all those unit tests you took the time to write are run with every commit.
This article, however, isn't about writing unit tests, but about writing integration tests. An integration test suite will typically take much longer to run (sometimes hours) than a unit test suite (which should take no more than 5-10 minutes). Integration tests are also typically more subject to volatility. While the integration test I wrote in this article should be stable (if it breaks, it should be cause for concern), when connecting to a development database you can't always be 100% confident the database will be available or that your test data will be correct or even present. So a failed integration test doesn't necessarily mean the code is incorrect.

Luckily the folks behind Maven have already addressed this with the failsafe plugin. Whereas the surefire plugin, by default, will look for classes that are prefixed or postfixed with Test, the failsafe plugin will look for classes prefixed or postfixed with IT (Integration Test). Like all Maven plugins, you can configure in which goals the plugin should execute. This gives you the flexibility to have your unit tests run with every code commit, but your integration tests run only during a nightly build. This can also prevent a scenario in which a hot-fix needs to be deployed, but a resource that an integration test depends upon isn't present.

Final Thoughts

Writing integration tests can be a time-consuming and difficult task. It requires extensive thought into how your service will interact with other resources in PROD. This task is even more difficult and time-consuming when you are specifically testing for failure scenarios, which often requires more control of the resource your test is connecting with, and which draws on past experience and knowledge of what scenarios to test for. Despite this high cost in time and effort, this investment will pay itself back many times over in time.
Increasing confidence in the correctness of code, which is only possible through automated testing, is central to shortening the development feedback cycle. The code that I used in this article can be found here:.

Atta boy, Billy! Too many people blow off any form of unit or automated testing. The time investment costs more up front, but the experience you gain as a developer, and the coverage gained by automated testing, save time in the long run. Imagine you change connection providers, database drivers, connection pools, etc. Now you know those cases are covered! And trust me: that is going to happen sooner rather than later.

Author

Thanks for the kind words, Dave! Automated testing is something I have come back around to recently. I used to do it half-heartedly, but from reading and increased experience, extensive automated test coverage is key to both the short- and long-term success of a product.
https://keyholesoftware.com/2016/10/03/automated-integration-tests-failure-scenarios/
Migrating RabbitMQ Tasks with Python & Kombu

It's not common, but every now and then you may need to move tasks that have already been enqueued from one RabbitMQ instance to another, for example if you need to upgrade your plan, or if you're switching providers. I've had to do this a couple times, and the process is usually something like this:

- Spin up a new plan/instance and start routing new tasks to it as they're produced
- Migrate tasks that didn't get consumed over to the new plan for consumption
- Remove the old plan

We're going to focus on that second piece here.

We've used Kombu for this, the messaging library that Celery, the task queuing system we use for our Django app, is built on. If you're new to RabbitMQ and some of the terminology, I found this post super helpful. Let's dive in and migrate tasks! Cheers to my colleague Omar, for writing the majority of this code.

Setup — Getting CLI Input

This was written as a Python script; here are the docs for argparse, if you haven't created a Python script with command line arguments before.

    import argparse

    def _get_cli_input():
        parser = argparse.ArgumentParser()
        parser.add_argument('source_broker')
        parser.add_argument('destination_broker')
        parser.add_argument('--queues', required=True, help='queues comma separated')
        args = parser.parse_args()

        brokers = [args.source_broker, args.destination_broker]
        queues = args.queues.split(',')

        return brokers, queues

The first arguments are the source (where you're migrating from) and destination (where you're migrating to) brokers. These are expected to be formatted like amqp://user:password@domain:port/vhost, and adding some validation here to ensure that they are may not be a bad idea; I'll leave that to you. The final argument is a comma-separated list of which queues you want to migrate tasks from. The function then returns a list of the brokers, and a list of the queues.
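On the broker URL validation the text leaves to the reader: here's one hedged sketch of what that could look like. The regex, the function name, and the exact URL shape it enforces are my own choices for illustration, not part of the original script:

```python
import re

# Expected shape: amqp://user:password@domain:port/vhost (illustrative pattern).
BROKER_URL_RE = re.compile(r'^amqp://[^:@/]+:[^@/]+@[^:/]+:\d+/.+$')

def validate_broker_url(url):
    """Return url unchanged if it matches the expected amqp:// format,
    otherwise raise ValueError with a descriptive message."""
    if not BROKER_URL_RE.match(url):
        raise ValueError(
            "Broker URL not in expected amqp://user:password@host:port/vhost "
            "format: %r" % url)
    return url
```

This could be called on `args.source_broker` and `args.destination_broker` inside `_get_cli_input` so that a malformed URL fails fast instead of producing a confusing connection error later.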
Connection

    from kombu import Connection, Consumer, Exchange, Producer, Queue  # imports used throughout the snippets below

    BROKERS, QUEUES = _get_cli_input()
    SRC_BROKER = BROKERS[0]
    DEST_BROKER = BROKERS[1]

    src_conn = Connection(SRC_BROKER)
    dest_conn = Connection(DEST_BROKER)

This just does the basic setup needed to do any producing or consuming of messages. Connections take in the broker url as an argument.

Consuming from the source queues

First the code:

    for q in QUEUES:
        src_channel = src_conn.channel()
        src_exchange = Exchange(q, 'direct')
        src_queue = Queue(q, exchange=src_exchange, routing_key=q)

        dest_channel = dest_conn.channel()
        dest_exchange = Exchange(q, 'direct')
        producer = Producer(dest_channel, exchange=dest_exchange, serializer='json', routing_key=q)

        consumer = Consumer(src_channel, src_queue, callbacks=[_process_message(producer)])
        consumer.consume()

For each queue we need to migrate tasks from, we need to set up an exchange. If you're curious about exchange types, I found this to be a helpful explanation. We also need a consumer to take messages off the source broker, and a producer to add messages to the destination broker.

The consumer is created with a callback, which we'll talk about next; this is how each message that is consumed is handled. It needs to take in the destination producer as an argument so that we have access to that in the callback.

The callback

The signature of a callback is always a body and a message. Because we also need access to the producer for the destination broker, we've defined this as a class that's initialized with a producer; the __call__ method takes the default arguments and runs after initialization.
import socket

from kombu import Queue

class _process_message():
    def __init__(self, producer):
        self.producer = producer

    def __call__(self, body, message):
        # Pulled out in case you want to inspect or filter individual tasks.
        task = body.get('task')
        eta = body.get('eta')
        kwargs = body.get('kwargs')
        # q is the queue name from the enclosing loop (module scope).
        dest_queue = Queue(q, exchange=self.producer.exchange, routing_key=q)
        self.producer.publish(body, declare=[dest_queue])
        message.ack()

Creating an instance of a Queue here and declaring it when we publish the new message means that if a queue with the name given to your producer as the routing key doesn’t already exist on your broker, it will be created. If it does already exist, the existing one will be used.
Calling message.ack() removes a message from the source broker — if you skip this step, once your migration is complete, the tasks will exist on both the source and destination queues. Leaving this off can be useful if you’re testing and don’t want to risk losing messages, but be careful to add it if there’s a chance messages will be consumed from both brokers and your tasks aren’t idempotent.
The loop
Finally, we create a loop that times out after 10 seconds and exits if there aren’t any more messages to move over. This of course assumes new messages aren’t being published to your source broker anymore — the script won’t ever exit if they are!

while True:
    timeout = 10
    try:
        src_conn.drain_events(timeout=timeout)
    except socket.timeout:
        src_conn.release()
        exit(0)

Happy migrating!
https://adriennedomingus.medium.com/migrating-rabbitmq-tasks-with-python-kombu-880f90816c7d
Re: Add Windows User to ADAM Role using LDIFDE.exe
- From: "Joe Richards [MVP]" <humorexpress@xxxxxxxxxxx>
- Date: Fri, 19 Jan 2007 19:22:23 -0500

Note you can use admod to do this right at the command line without the encoding.

admod -h server:port -b group_dn "member:+:<SID=Blah>"

--
Joe Richards
Microsoft MVP Windows Server Directory Services
Author of O'Reilly Active Directory Third Edition
---O'Reilly Active Directory Third Edition now available---

Jeremy Wiebe wrote:
Hi Lee,
Thanks for the response. I missed the part that you have to encode the "<SID=XYZ>". I gave that a quick try and that works! As to the existing foreignSecurityPrincipal collision, I don't think that should ever be an issue because I'm always importing into a brand new application partition.
Thanks for the help!
Jeremy Wiebe

On Jan 19, 3:20 pm, "Lee Flight" <l...@xxxxxxxxxxxxxxx> wrote:
Hi, unfortunately the <SID=....> syntax only works with base64 encoding; the # before the member line in Dmitri's post indicates a comment. More here:...
Note that if you have been testing this against your ADAM instance and have already imported the Windows user, the foreignSecurityPrincipal will have already been created in your ADAM instance, and that will cause a violation when you try the ldf import even using the correct encoding. For a clean test, delete any matching FSP; the ldf import will create it for you, as you say.
Lee Flight

"Jeremy Wiebe" <jeremy.wi...@xxxxxxxxx> wrote in message news:1169228507.640926.157200@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
I'm trying to add a Windows user to an ADAM role by creating an LDIF file and importing it into ADAM using ldifde.exe. I found this post, which seems to be exactly what I need, but I can't get it to work (...).
Here's my LDIF file: dn: CN=Readers,CN=Roles,CN=MyApp,DC=MyCompany,DC=COM changetype: modify add: member # member: <SID=S-1-5-21-1644491937-113007714-1957994488-1007> - I got the SID by manually adding a windows user to a role using ADAM-AdsiEdit and then exporting that role using ldifde.exe The error I'm getting is: === There is a syntax error in the input file Failed on line 5. The last token starts with 'm'. An error has occurred in the program === In the post I mentioned above Dmitri's (last poster) LDIF specifies the SID using both the <SID=XYZ> and base64 encoded method. Is that required? (If it is, I couldn't get that working either). So, am I missing something obvious here or does LDIFDE.exe actually not support this? Also, I'm under the impression that LDIFDE.exe (or probably ADAM) will automatically create a ForeignSecurityPrincipal for me, if needed, when I add the user to the role. - References: - Add Windows User to ADAM Role using LDIFDE.exe - From: Jeremy Wiebe - Re: Add Windows User to ADAM Role using LDIFDE.exe - From: Lee Flight - Re: Add Windows User to ADAM Role using LDIFDE.exe - From: Jeremy Wiebe - Prev by Date: Re: Using Many Small Group Objects for Group Policy - Next by Date: Re: Delegation of groups admin. - restricted to a subset of object - Previous by thread: Re: Add Windows User to ADAM Role using LDIFDE.exe - Next by thread: Re: Add Windows User to ADAM Role using LDIFDE.exe - Index(es):
http://www.tech-archive.net/Archive/Windows/microsoft.public.windows.server.active_directory/2007-01/msg01583.html
The Curious Case of Not Hiring Directly into Software Engineer V (Or Whatever) “What about the principal consultant role?” “Oh, we don’t hire into that position.” This exchange occurred over 6 years ago. At the time, I was interviewing for a role with a consulting agency (or something calling itself that, anyway). They had four salary bands for their developer/consultants, and shooting for the top wasn’t out of bounds with my experience level. So I asked. And then they told me about this policy by way of dismissing the question. I’m not exactly sure why this instance stands out so much in my mind. Perhaps because it occurred so explicitly. But when I think about it, I think every salaried gig I ever had featured some kind of unique role like this at the top of the individual contributor set of software people. In other words, at every job I ever had, there was always exactly one titular band — Senior Software Engineer II or Principal Developer or whatever — reserved only for the company’s dues-payers. Not having been a wage worker for a long time, I hadn’t thought about this for years. But I heard someone mention a corporate policy like this in passing the other day, and it got me thinking. I dunno. Call it nostalgia or whatever, but given my recent opting for whimsy with the blog, I figured I’d riff on this. The Curious Case of “We Don’t Hire for X” Stop for a moment, at this point, and think about how deeply weird this little corporate quirk is. I mean, you’re probably used to it, so it might be a little hard to do this mental exercise. Like the job interview, fundamentally nonsensical practices can create a sort of Stockholm Syndrome in corporate denizens. So let’s blast away the cobwebs with a helpful graphic. Below is the skeleton of what some org chart might look like. As the GIF flashes, you’ll see color coordination. In red, you have the positions that the company will staff from the outside. 
Remaining clear, you’ll see all of the positions that the company has a policy to staff only via promotion. Interesting, eh? Trickling down from CEO to C-suite to VPs to directors to managers, you have positions that a company will staff from outside, if need be. Sure, sometimes they may want to promote from within, and often they’ll do exactly that. But companies will bring in outsiders for leadership roles. They’ll also typically look for outsiders to fill any of the roles at the individual contributor level, from Software Engineer I through Software Engineer XVI, or whatever, depending on the richness and thickness of the HR matrix. But then you’ve got that one… it’s always an impressive sounding title, and it’s always where individual contributors go to max out, collecting COLAs and generally demurring against nudges to management. Strange, huh? The Elusive Salary Band: The Stock Explanation I’m going to dig back into my corporate hierarchy terms for the remainder of this post. You can read about those here in more detail, but this graphic should help. Here’s the explanation that the organization’s idealists and pragmatists offer. (The opportunists mainly stand aside, in sort of a benign, corporate version of the adage, “never interfere with an enemy while he is in the process of destroying himself.”) It’s a superficially plausible one, if you don’t bring too much scrutiny to bear. The idea is that the top salary band represents the most valuable software developers in the group to the company. These developers combine technical skill with a certain je ne sais quoi that combines experience, domain knowledge, inside company baseball, and embodying the company’s values and training. You don’t hire into this position because that would be impossible. Take Jenny, your most senior rainmaker in the group. She knows every namespace in the code, can fix anything in the field in short order, and has the trust and love of all the customers. 
No matter how skilled an outsider may be, they’ll need to spend some time under her tutelage to get to her level. Outsiders Getting to The Next Level Doesn’t Pass the Opportunist Smell Test This is an excellent explanation, as long as it’s manufactured by pragmatists and idealists, for pragmatists and idealists. It’s a nice, self-reinforcing narrative that raises loyalty to the company and humbleness as virtues. The good guys win. (Except for the fact that these types of policies are expert beginner incubators, but let’s leave that for another day.) Except, none of this squares with how leadership in buying positions tends to see the world. I’ve done a lot of management consulting over the last 4 years. And here’s an observation that may surprise you. While everyone might love and celebrate Jenny in both your group and client situations, leadership generally does not love Jenny. At best, they appreciate her contributions, respect her and are wary of her. And often, she terrifies them. Having Jennies is a serious organizational weakness. When you’re running a business, you don’t want employees that can hold you hostage. If the success of your organization depends on some individual not leaving, that’s trouble. If people have to come on and log 2-3 years of work, “paying their dues,” because that’s how long it takes to max out their efficiency, that’s systemic trouble. Good organizations don’t work that way. And consider that for every awesome Jenny, there’s one who has single-handedly created this expertise bottleneck by building some monstrous 30,000 line singleton that only she understands, and which would explode like a suitcase nuke if she didn’t come in once per week to type in her secret code. So while you might like the elusive “must pay dues to get here” salary band because it reinforces a feel-good narrative, this is categorically not why your organization’s opportunists tolerate and encode it. 
Opportunist Motives Demystified: Why They Create This Policy and Role Rather than drawing this out to the end of the story for a grand reveal, I’ll just get to it right here. The reasoning for it doesn’t make the whole situation any less interesting. But let me offer a brief caveat. If you read my stuff about the denizens of the corporate world — pragmatists, idealists, and opportunists — you might conceive of the opportunists as Illuminati-like figures whose every move is diabolically intentional in a 3-dimensional chess game. Not so. Opportunists usually operate at a more intuitive office-political level. They might do something for reasons they can’t exactly articulate, simply thinking that it “feels right” or “is the right play.” Here we have one of these situations. Few if any opportunists would state explicitly why they allow this company policy: because it depresses salary across the board while weaponizing mastery and purpose against longer-tenured employees to keep them in place. Instead, they’d say something like, “it seems like a win for everyone. It rewards people for staying, makes us seem more prestigious, and it’s a non-monetary reward. They love it, and it doesn’t cost the company anything to do.” In short, the opportunists do this to place a governor on top-end salary in the department. It’s a way to keep giving nominal COLAs to veterans who stay around forever, but without letting that distort the standard pay brackets. If there’s a “special, inside-promotion only” band at the top, you can let that band increase 2% a year forever with the top earning person’s COLA, while still keeping the pay for your workforce capped at something not affected by unbounded seniority. There’s a Lot More to Unpack Here, So Tune in Next Time It’s about 1 AM, and I’m tired. This is shaping up to be a 3,000-ish word post, so I think I’ll break here and declare a to-be-continued. 
Next week, I’ll cover some concepts that explain why software folks and the unique journeyman idealist caste are particularly susceptible to this dynamic. As I’ve said, the opportunists mostly just stand back and watch us do this to ourselves. To understand how that works, I’ll cover some interesting topics: - A concept that Venkat Rao coins on Ribbonfarm, known as status illegibility. - An apropos Groucho Marx quote from that post, “I don’t care to belong to any club that will have me as a member.” - How immediately unattainable positions manufacture collective organizational status. - “We never give a grade of A” as a bargaining weapon. And we’ll probably meander through another few topics as well. So stay tuned for the wrap of examining the realpolitik dynamics of the elusive internal-promotion-only position.
https://daedtech.com/the-curious-case-of-not-hiring-directly-into-software-engineer-v-or-whatever/
perfmetrics 0.9.3
Send performance metrics about Python code to Statsd
Introduction
The perfmetrics package provides a simple way to add software performance metrics to Python libraries and applications. Use perfmetrics to find the true bottlenecks in a production application.
The perfmetrics package is a client of the Statsd daemon by Etsy, which is in turn a client of Graphite (specifically, the Carbon daemon). Because the perfmetrics package sends UDP packets to Statsd, perfmetrics adds no I/O delays to applications and little CPU overhead. It can work equally well in threaded (synchronous) or event-driven (asynchronous) software.
Usage
Use the @metric and @metricmethod decorators to wrap functions and methods that should send timing and call statistics to Statsd. Add the decorators to any function or method that could be a bottleneck, including library functions.
Sample:

from perfmetrics import metric
from perfmetrics import metricmethod

@metric
def myfunction():
    """Do something that might be expensive"""

class MyClass(object):
    @metricmethod
    def mymethod(self):
        """Do some other possibly expensive thing"""

Next, tell perfmetrics how to connect to Statsd. (Until you do, the decorators have no effect.) Ideally, either your application should read the Statsd URI from a configuration file at startup time, or you should set the STATSD_URI environment variable. The example below uses a hard-coded URI:

from perfmetrics import set_statsd_client
set_statsd_client('statsd://localhost:8125')

for i in xrange(1000):
    myfunction()
    MyClass().mymethod()

If you run that code, it will fire 2000 UDP packets at port 8125. However, unless you have already installed Graphite and Statsd, all of those packets will be ignored and dropped. Dropping is a good thing: you don’t want your production application to fail or slow down just because your performance monitoring system is stopped or not working. Install Graphite and Statsd to receive and graph the metrics.
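For a sense of what those UDP packets contain: statsd metrics are short plain-text payloads in Etsy's "name:value|type" format. A stdlib-only sketch of the mechanism (the metric names are illustrative, and `send_metric` is a hypothetical helper, not part of perfmetrics):

```python
import socket

def send_metric(packet, host="localhost", port=8125):
    """Fire-and-forget UDP send; nothing breaks if no statsd daemon is listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet.encode("ascii"), (host, port))

# Etsy statsd wire format: "name:value|type".
counter_packet = "mymodule.myfunction:1|c"    # a counter increment
timing_packet = "mymodule.myfunction:38|ms"   # a timing sample in milliseconds
```

Because UDP has no handshake, the send returns immediately whether or not anything is listening — which is exactly why dropped packets cost the application nothing.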
One good way to install them is the graphite_buildout example at github, which installs Graphite and Statsd in a custom location without root access. Threading While most programs send metrics from any thread to a single global Statsd server, some programs need to use a different Statsd server for each thread. If you only need a global Statsd server, use the set_statsd_client function at application startup. If you need to use a different Statsd server for each thread, use the statsd_client_stack object in each thread. Use the push, pop, and clear methods. Graphite Tips Graphite stores each metric as a time series with multiple resolutions. The sample graphite_buildout stores 10 second resolution for 48 hours, 1 hour resolution for 31 days, and 1 day resolution for 5 years. To produce a coarse grained value from a fine grained value, Graphite computes the mean value (average) for each time span. Because Graphite computes mean values implicitly, the most sensible way to treat counters in Graphite is as a “hits per second” value. That way, a graph can produce correct results no matter which resolution level it uses. Treating counters as hits per second has unfortunate consequences, however. If some metric sees a 1000 hit spike in one second, then falls to zero for at least 9 seconds, the Graphite chart for that metric will show a spike of 100, not 1000, since Graphite receives metrics every 10 seconds and the spike looks to Graphite like 100 hits per second over a 10 second period. If you want your graph to show 1000 hits rather than 100 hits per second, apply the Graphite hitcount() function, using a resolution of 10 seconds or more. The hitcount function converts per-second values to approximate raw hit counts. Be sure to provide a resolution value large enough to be represented by at least one pixel width on the resulting graph, otherwise Graphite will compute averages of hit counts and produce a confusing graph. 
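The per-thread arrangement described in the Threading section above boils down to a thread-local stack of clients. A rough illustration of that pattern using only the standard library (this is a sketch of the idea, not perfmetrics' actual statsd_client_stack implementation):

```python
import threading

class ClientStack(threading.local):
    """Each thread sees its own independent stack of clients."""
    def __init__(self):
        self.stack = []

    def push(self, client):
        self.stack.append(client)

    def pop(self):
        return self.stack.pop() if self.stack else None

    def clear(self):
        self.stack = []

    def current(self):
        """The client on top of this thread's stack, or None."""
        return self.stack[-1] if self.stack else None

statsd_client_stack = ClientStack()
statsd_client_stack.push("client-for-this-thread")
```

Because the class inherits from `threading.local`, a push in one thread is invisible to every other thread, which is what lets each thread target a different Statsd server.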
It usually makes sense to treat null values in Graphite as zero, though that is not the default; by default, Graphite draws nothing for null values. You can turn on that option for each graph.
Reference Documentation
Decorators
- @metric - Notifies Statsd using UDP every time the function is called. Sends both call counts and timing information. The name of the metric sent to Statsd is <module>.<function name>.
- @metricmethod - Like @metric, but the name of the Statsd metric is <class module>.<class name>.<method name>.
- Metric(stat=None, rate=1, method=False, count=True, timing=True) - A decorator or context manager with options. stat is the name of the metric to send; set it to None to use the name of the function or method. rate lets you reduce the number of packets sent to Statsd by selecting a random sample; for example, set it to 0.1 to send one tenth of the packets. If the method parameter is true, the default metric name is based on the method's class name rather than the module name. Setting count to False disables the counter statistics sent to Statsd. Setting timing to False disables the timing statistics sent to Statsd.
Functions
- statsd_client() - Return the currently configured StatsdClient. Returns the thread-local client if there is one, or the global client if there is one, or None.
- set_statsd_client(client_or_uri) - Set the global StatsdClient. The client_or_uri can be a StatsdClient, a statsd:// URI, or None. Note that when the perfmetrics module is imported, it looks for the STATSD_URI environment variable and calls set_statsd_client() if that variable is found.
- statsd_client_from_uri(uri) - Create a StatsdClient from a URI, but do not install it as the global StatsdClient. A typical URI is statsd://localhost:8125. Supported optional query parameters are prefix and gauge_suffix. The default prefix is empty and the default gauge_suffix is .<host_name>. See the StatsdClient documentation for more information about gauge_suffix.
StatsdClient Methods
Python code can send custom metrics by first getting the current StatsdClient using the statsd_client() function. Note that statsd_client() returns None if no client has been configured. Most of the methods below have optional rate and buf parameters. The rate parameter, when set to a value less than 1, causes StatsdClient to send a random sample of packets rather than every packet. If the buf parameter is a list, StatsdClient appends the packet contents to the buf list rather than sending the packet, making it possible to send multiple updates in a single packet. Keep in mind that the size of UDP packets is limited (the limit varies by the network, but 1000 bytes is usually a good guess) and any extra bytes will be ignored silently.
- timing(stat, …)
Changes
0.9.3 (2012-09-08)
- Support the STATSD_URI environment variable.
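The buf parameter described above enables simple client-side batching. A hypothetical sketch of that pattern (not the real StatsdClient code; only the "name:value|ms" timing wire format is taken as given):

```python
def timing(stat, value_ms, buf=None):
    """Sketch of the buf idea: append the packet to buf when given, instead of sending it."""
    packet = "%s:%d|ms" % (stat, value_ms)
    if buf is not None:
        buf.append(packet)
    return packet

# Batch two updates, then join them into one newline-separated datagram.
buf = []
timing("db.query", 12, buf=buf)
timing("db.commit", 3, buf=buf)
payload = "\n".join(buf)  # would go out as a single UDP packet; keep it under ~1000 bytes
```

The payoff is one UDP send instead of many, at the cost of having to watch the datagram size limit the text warns about.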
https://pypi.python.org/pypi/perfmetrics/0.9.3
Lower Ninth Ward Lower Ninth Ward is a neighborhood of the city of New Orleans, Louisiana, United States. As the name implies, it is part of the Ninth Ward of New Orleans. The Lower Ninth Ward is often thought of as the entire area within New Orleans downriver of the Industrial Canal; however, the City Planning Commission divides this area into the Lower Ninth Ward and Holy Cross neighborhoods. The term "Lower" refers to its location farther towards the mouth of the Mississippi River, downriver, "down" or "below" the rest of the city. The 9th Ward, like all wards of New Orleans, is a voting district. The 9th Ward was added as a voting district in 1852.[1] The Lower 9th Ward is composed of Ward 9 Districts 1, 2, 4, and 7 which make up the Holy Cross Area and Ward 9 Districts 3, 5, 6, and 8. Higher voting district numbers in the 9th Ward (8-27) are on the upriver side of the Industrial Canal.[2] The area came to international attention for its devastation in the aftermath of Hurricane Katrina in 2005. Geography Excluding the industrial and swamp areas north of the Florida Canal, the neighborhood of the Lower 9th Ward is about 1.25 mi (2.01 km) from east to west and 2 mi (3.2 km) from north to south. Three major avenues cross the developed portion of the neighborhood, each with bridges over the Industrial Canal. Closest to the River is St. Claude Avenue; about midway through the neighborhood is Claiborne Avenue; Florida Avenue crosses at the northern edge of the historically populated portion of the Lower 9th. Most major businesses serving the neighborhood are located on St. Claude or Claiborne, although a smattering of additional neighborhood business is located throughout the area. The first two of these three avenues continue into St. Bernard Parish; a continuation of Florida Avenue through and beyond the parish line has been repeatedly proposed but at present does not exist. Adjacent Neighborhoods - Viavant/Venetian Isles (north) - St.
Bernard Parish (east) - Holy Cross (south) - Bywater (west) Boundaries The City Planning Commission defines the boundaries of Lower Ninth Ward as these streets: Florida Avenue, St. Bernard Parish, St. Claude Avenue and the Industrial Canal.[3] The Lower Ninth Ward is also commonly used to describe a slightly larger area. This area borders the Mississippi River to the south and St. Bernard Parish to the east. To the west is the Industrial Canal, across which is the Bywater section of New Orleans. The northern or inland boundary is often given as the Florida Canal with Florida Avenue, a levee, and railroad tracks running beside it. Alternatively, the industrial area north of Florida Avenue is sometimes included as being part of the Lower 9th Ward, extending the boundary to the southern edge of the Gulf Intracoastal Waterway. History In 1834 the United States Army established the Jackson Barracks here. As late as the 1870s, the area behind St. Claude was still mostly small farms with scattered residences. The area on the "woods" (away from the river) side of Claiborne was mostly undeveloped cypress swamp. What became the Lower 9th Ward did not become distinct from the upriver parts of the 9th Ward until the start of the 1920s, when the Industrial Canal was dredged. This development bisected the 9th Ward. At this time, people started referring to the area "above" (upriver) from the Canal as the "Upper" 9th Ward, and this area as the "Lower". The section on the River side of Saint Claude Avenue, which developed as an urban area first, is sometimes called the "Holy Cross Neighborhood" for Holy Cross High School, the large Catholic school. For many years, it attracted students not only from the Lower 9th but from throughout the city. Construction of the Industrial Canal led to development of the land farther back along the Canal; it provided steady work for labor in the area.
As shipping became containerized in the later 20th century, however, demand for labor declined, with negative economic consequences for the neighborhood. Some people left to find work in other areas; others struggled with lower-paying jobs. Hurricane Betsy In 1965, Hurricane Betsy hit New Orleans. A levee on the Industrial Canal collapsed, and much of the Lower 9th Ward was flooded. President Lyndon B. Johnson visited the devastated flooded area shortly after the storm, and ordered aid for the storm victims. Hurricane Katrina At the end of August 2005, Hurricane Katrina made landfall just east of New Orleans, the fifth deadliest hurricane and the costliest natural disaster in the history of the United States. Multiple breaches in the levees of at least four canals resulted in catastrophic flooding in a majority of the city; see Effect of Hurricane Katrina on New Orleans. Nowhere in the city was the devastation greater than in the Lower 9th Ward, especially the portion from Claiborne Avenue back. This was largely due to the storm surge generated in the Mississippi River Gulf Outlet, a deep-draft shipping channel built by the Army Corps of Engineers in the late 1950s. The construction destroyed tens of thousands of acres of protective coastal wetlands that once acted as a storm surge buffer for the community. Storm surge flood waters appear to have poured into the Lower Ninth Ward from at least three sources. To the east, water flowed in from Saint Bernard Parish, while to the west the Industrial Canal suffered two major breaches: one a block in from Florida Avenue, the second back from Claiborne Avenue. The force of the water did not only flood homes, but smashed or knocked many off their foundations. A large barge, the ING 4727 (owned by the Ingram Barge Company), was swept by flood waters into the neighborhood through the breach near Claiborne Avenue, leveling homes beneath it. 
The storm surge was so great that even the highest portions of the Lower 9th were flooded; Holy Cross School, which had served as a dry refuge after Hurricane Betsy, was inundated. The foot of the Mississippi River levee, the area's highest point, took on some 2 to 3 feet (0.61 to 0.91 m) of water. The Lower 9th Ward was flooded again by Hurricane Rita a month later in September. In December 2005, Common Ground Collective volunteers gutted the first house in the area. Volunteers and residents began gutting other houses in the community. Soon after, the Common Ground Collective opened the first distribution center in the area, in order to provide returning residents with water, food and other necessities. No stores had yet re-opened in the area. Due to the great devastation and lack of population and services, the Lower Ninth Ward was the last area of the city still under a curfew half a year after the disaster. Officially, residents were allowed in during daylight hours to look, salvage possessions, and leave, although some few had already done extensive work gutting and repairing their damaged homes in preparation to move back. By January 2006, the widespread damages and difficulties in restoring basic utilities and city services still prevented the official reopening of the Lower 9th Ward to residents who wished to return to live. The most severely damaged section of the Ward is the lower elevation section, north of Claiborne Avenue. A Bring Back New Orleans Commission preliminary report suggested making this area in whole or part into park space because of the high risk of future flooding. Most Lower 9th Ward residents have strongly objected to this proposal, but outsiders worry about the high risk of future flooding in the area. In March 2006 a group of residents and Common Ground Collective volunteers broke into Martin Luther King Elementary School to begin cleanup efforts. Not long after, the state school officials agreed to repair the school.
The school has subsequently become a Recovery School District charter school and is running at full capacity. It is a rarity in that it has no management company. The school is operated by the faculty and administration. When asked about it, Dr. Hicks, the school's long-time principal said, "We didn't have a management company before and we don't need one now."[citation needed] As of late 2006, a small number of local businesses in the area reopened, and residents began to return (many are living in FEMA trailers as they try to rebuild). But, much of the area was still little-populated and in ruined condition. Work crews continued to remove debris and demolish unrepairable houses daily, but hundreds if not thousands were vacant and gutted. Many more buildings have hardly been touched since the waters were drained, and ruined possessions are inside severely damaged buildings. In 2006, Mayor Ray Nagin threatened to use his powers of eminent domain to seize vacant, severely damaged properties in all of New Orleans that had not been gutted or scheduled to be gutted before early 2007. Such blighted properties had been creating serious problems for returned New Orleanians, including infestations of rats and other vermin. Similar actions to seize abandoned blighted property are in effect in other Louisiana parishes, as well as in Mississippi counties affected by the storm. However, as hundreds of thousands of locals were still waiting for promised insurance or Road Home money, many of the poor lacked resources to work on their houses. The neighborhood had few stores and only a handful of schools reopened. Recovery Efforts On December 3, 2007, Make It Right Foundation, founded by the actor Brad Pitt, committed to rebuild 150 houses in the Lower Ninth Ward. 
The houses are sustainable, energy-efficient and safe.[4] Make It Right homes were designed by award-winning architects from New Orleans and around the world, including Frank Gehry, Shigeru Ban, Hitoshi Abe and Thom Mayne. Said Pitt: “I walked into it blind, just thinking, ‘People need homes; I know people who make great homes.'"[5] In the spring of 2008, Build Now,[6] a local, non-profit homebuilder, began working to bring New Orleans families back home. It constructed site-built, stilt houses on hurricane-damaged lots. The homes reflect the style and quality of traditional New Orleans architecture but are built above potential flood waters. Build Now is in the process of bringing more than a dozen New Orleans families back home; nine houses are currently under construction in the Upper and Lower Ninth Ward areas, Lakeview and Gentilly. The organization has moved three New Orleans families back home. As of March 2009, hundreds of houses have been rebuilt, and dozens of new homes have been constructed. Residents are returning home. Volunteers continue to come to the area en masse, working for dozens of organizations including Common Ground Relief, formerly Common Ground Collective; and lowernine.org, a grassroots organization that coordinates volunteers' and residents' efforts in rebuilding homes in the Lower Ninth Ward. As of February 2010, lowernine.org enabled 29 families to move back home to the Lower Ninth Ward. Residents and volunteers are striving to make the Lower Ninth Ward a sustainable community. They are working to restore the local wetlands. It is widely believed that were it not for the extensive canal dredging to support commercial development, resulting in subsequent wetlands subsidence, the Lower Ninth Ward would not have suffered such extensive flooding during Katrina. 
In March 2012, the New York Times reported that "[t]he neighborhood has become a dumping ground for many kinds of unwanted things," and "it no longer resembled an urban, or even suburban environment. Where once there stood orderly rows of single-family homes with driveways and front yards, there was jungle."[7] As of December 2015, the Lower Ninth Ward still has many empty lots and vacant heavily damaged houses. Demographics As of the census of 2000, there were 14,008 people, 4,820 households, and 3,467 families residing in the neighborhood.[8] The population density was 9,731 /mi² (3,730 /km²). As of the census of 2010, there were 2,842 people, 1,061 households, and 683 families residing in the neighborhood.[8] Notable Buildings The Lower Ninth Ward is home to the Jackson Barracks. The barracks now serve as headquarters for the Louisiana National Guard. The complex had an extensive military museum in the old powder magazine and in a new annex, with a large collection of military items from every American war. The 2000 NRA Shooting Sports Camp and Coaches School was held at Jackson Barracks from June 28 – July 2, 2000. The Doullut steamboat houses are located on either side of Egania Street at numbers 400 and 503. The first house, closer to the river, was built in 1905 by Captain Milton P. Doullut, a riverboat pilot, as his home. The second was built in 1913 for his son Paul Doullut. In 1977 both houses were designated historic landmarks. The houses have two notable design influences, the first being the steamboats of the period, the second being the Japanese exhibit at the 1904 World's Fair in St. Louis (Louisiana Purchase Exposition). Notably, Mary Doullut (wife of Milton) was also a river boat captain, who worked on the river for over 30 years; she is believed to be the first woman to have held a Mississippi riverboat pilot's license. 
Notable Natives and Residents

- Pat Barry, kickboxer and mixed martial artist
- Fats Domino, musician, singer and songwriter
- Marshall Faulk, NFL star
- Kalamu ya Salaam, poet and author
- Magic, rapper and musician
- Fred Luter, Baptist minister, elected president of the Southern Baptist Convention in 2012

Education

New Orleans Public Schools operates district public schools, while the Recovery School District oversees charter schools. Dr. King Charter School (K-12) is located in the Lower 9th.[9] Alfred Lawless High School was the only public high school that operated in the Lower 9th until Hurricane Katrina affected New Orleans in August 2005. The previous Holy Cross High School campus was located in the Lower Ninth Ward. In August 2007, students from Carver High School and Marshall Middle School began studying in temporary trailers on the site of Holy Cross. In September of that year the students were to move to another set of trailers on the original Carver/Marshall campus in the Desire Area.[10]

Representation In Other Media

- The 1994 film adaptation of Anne Rice's Interview with the Vampire, starring Brad Pitt and Tom Cruise, was filmed along sections of the Mississippi River embankment and inside the Jackson Barracks.
- When the Levees Broke (2006), a documentary about the Katrina disaster directed by Spike Lee, was produced and shown on HBO. The film covered damage in the Lower Ninth Ward and other areas of the city.

External Links

- Lower 9th Ward travel guide from Wikivoyage
- Lower 9th Ward neighborhood snapshot dealing with the area in back of St. Claude Avenue
- Holy Cross neighborhood snapshot dealing with the area from St.
Claude to the River
- Photographs of Hurricane Katrina's aftermath in the Lower Ninth Ward
- Holy Cross School closes at the Wayback Machine (archived September 26, 2007)
- Rebuilding the 9th Ward at the Wayback Machine (archived October 9, 2013)
- A slideshow of photographic portraits and interview recordings of residents of the 9th Ward, focusing on the post-Katrina rebuilding process

References

- ↑ "New Orleans Districts and Wards". Mardi Gras Digest. Retrieved 2010-04-22.
- ↑ "New Orleans City Council District E" (PDF). New Orleans City Council. Retrieved 2010-04-22.
- ↑ Greater New Orleans Community Data Center. "Lower Ninth Ward Neighborhood". Retrieved 2008-06-21.
- ↑ Make It Right 9
- ↑ Thompson, Richard. "Brad Pitt sees his Lower 9th Ward homebuilding efforts as a model for other areas". The New Orleans Advocate. Published online August 16, 2015. Retrieved September 3, 2015.
- ↑
- ↑ Rich, Nathaniel. "Jungleland: The Lower Ninth Ward in New Orleans Gives New Meaning to "Urban Growth"". The New York Times. Retrieved 23 March 2012.
- ↑ 8.0 8.1 "Lower Ninth Ward Neighborhood". Greater New Orleans Community Data Center. Retrieved 6 January 2012.
- ↑ Stokes, Stephanie. "MLK school reopens in Lower 9th." Times-Picayune. June 10, 2007. Retrieved August 4, 2012.
- ↑ Maxwell, Lesli A. "Up From the Ruins." Education Week. Published online September 27, 2007; published in print October 3, 2007. Retrieved April 1, 2013.
https://infogalactic.com/info/Lower_9th_Ward
I'm trying to filter data in ASP.NET Core 2.1 using EF Core DbContext. The scenario is as follows: our main entity is a Movie. A movie has one or more genres, and a genre can belong to one or more movies. The same goes for movie and actor. So we have two many-to-many relationships (code first), and the code that describes these is:

public class Movie
{
    public int Id { get; set; }
    public string Title { get; set; }
    public ICollection<MovieActor> MovieActors { get; set; }
    public ICollection<MovieGenre> MovieGenres { get; set; }
}

public class Actor
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<MovieActor> MovieActors { get; set; }
}

public class Genre
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<MovieGenre> MovieGenres { get; set; }
}

public class MovieActor
{
    public int MovieId { get; set; }
    public Movie Movie { get; set; }
    public int ActorId { get; set; }
    public Actor Actor { get; set; }
}

public class MovieGenre
{
    public int MovieId { get; set; }
    public Movie Movie { get; set; }
    public int GenreId { get; set; }
    public Genre Genre { get; set; }
}

The context responsible for handling queries with the database is MoviesDbContext. I'm trying to filter all the data from the 'Movies' table based on two lists of ints, which represent the Ids of actors and genres in the database.
List<int> actorIds;
List<int> genreIds;

For filtering, we want to get all movies that simultaneously follow these rules:

1) All movies whose list of actors contains at least one actor whose Id is found in the 'actorIds' list
2) All movies whose list of genres contains at least one genre whose Id is found in the 'genreIds' list

The solution that I found is as follows:

context.Movies
    .Include(m => m.MovieActors)
    .Include(m => m.MovieGenres)
    .Where(m => actorIds.Any(id => m.MovieActors.Any(ma => ma.Id == id)))
    .Where(m => genreIds.Any(id => m.MovieGenres.Any(mg => mg.Id == id)));

This does the filtering right, but the problem is that when EF Core translates the code into SQL commands, it breaks the query into a lot of tiny queries, and this causes severe performance issues, some queries taking tens of seconds. How can I refactor this so that it does the filtering in only one query?

After some experiments with the extension methods, I've found something that does only N + 1 queries instead of thousands of queries, where N is the number of many-to-many relationships. So, instead of using this lambda in the Where() extension method:

m => actorIds.Any(id => m.MovieActors.Any(ma => ma.Id == id))

you have to use this one:

m => m.MovieActors.Any(ma => actorIds.Contains(ma.ActorId))

After doing some research, I found that EF Core is still incomplete and cannot overcome the N + 1 problem. But still, N + 1 queries instead of a few thousand is a huge improvement.
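For intuition, the intended semantics of the filter (keep a movie if and only if it shares at least one actor with actorIds and at least one genre with genreIds) can be sketched with plain in-memory Python sets. This illustrates the predicate only, not EF Core or its SQL translation, and the sample data is made up:

```python
# In-memory sketch of the filter semantics (not EF Core): a movie is kept
# iff it shares at least one actor with actor_ids AND at least one genre
# with genre_ids. Hypothetical sample data.
movies = [
    {"id": 1, "actors": {10, 11}, "genres": {100}},
    {"id": 2, "actors": {12},     "genres": {100, 101}},
    {"id": 3, "actors": {10},     "genres": {102}},
]

def matching_movies(movies, actor_ids, genre_ids):
    actor_ids, genre_ids = set(actor_ids), set(genre_ids)
    return [m["id"] for m in movies
            if m["actors"] & actor_ids and m["genres"] & genre_ids]

print(matching_movies(movies, [10, 12], [100]))  # prints [1, 2]
```

Movie 3 is dropped because its genre set does not overlap the requested genres, even though one of its actors matches; both conditions must hold at once, exactly as in the two chained Where() calls.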
https://entityframeworkcore.com/knowledge-base/51389243/filter-data-with-ef-core-in-a-single-query
Generic definitions for the Transport class. More... #include <Transport.h> Generic definitions for the Transport class. The transport object is created in the Service Handler constructor and deleted in the Service Handler's destructor!! The main responsibility of a Transport object is to encapsulate a connection and provide a transport-independent way to send and receive data. Since TAO is heavily based on the Reactor for most if not all of its I/O, the Transport class is usually implemented with a helper Connection Handler that adapts the generic Transport interface to the Reactor types. One of the responsibilities of the TAO_Transport class is to send out GIOP messages as efficiently as possible. In most cases messages are put out in FIFO order: the transport object puts out the message using a single system call and returns control to the application. However, for oneways and AMI requests it may be more efficient (or required, if the SYNC_NONE policy is in effect) to queue the messages until a large enough data set is available. Another reason to queue is that some applications cannot block for I/O, yet they want to send messages so large that a single write() operation would not be able to cope with them. In such cases we need to queue the data and use the Reactor to drain the queue. Therefore, the Transport class may need to use a queue to temporarily hold the messages, and, in some configurations, it may need to use the Reactor to concurrently drain such queues. TAO provides explicit policies to send 'urgent' messages. Such messages may be put at the head of the queue. However, they cannot be sent immediately, because the transport may already be sending another message in a reactive fashion. Consequently, the Transport must also know if the head of the queue has been partially sent. In that case new messages can only follow the head. Only once the head is completely sent can we start sending new messages.
One or more threads can be blocked waiting for the connection to completely send the message. A thread should return as soon as its message has been sent, so a per-thread condition is required. This suggests that simply using an ACE_Message_Queue would not be enough: there is a significant amount of ancillary information to keep on each message that the Message_Block class does not provide room for. Blocking I/O is still attractive for some applications. First, by eliminating the Reactor overhead, performance is improved when sending large blocks of data. Second, using the Reactor to send out data opens the door for nested upcalls, yet some applications cannot deal with the reentrancy issues in this case. Some or all messages could have a timeout period attached to them. The timeout source could either be some high-level policy or maybe some strategy to prevent denial-of-service attacks. In any case the timeouts are per-message, and later messages could have shorter timeouts. In fact, some kind of scheduling (such as EDF) could be required in a few applications. The outgoing data path consists of several components: the Transport object provides a single method to send request messages (send_request_message()). One of the main responsibilities of the transport is to read and process the incoming GIOP message as quickly and efficiently as possible. There are other forces that need to be given due consideration. They are:
We achieve this by creating a buffer on stack and reading the data from the handle into the buffer. We then pass the same data block (the buffer is encapsulated into a data block) to the higher layers of the ORB. The problems stem from the following (a) Data is bigger than the buffer that we have on stack (b) Transports like TCP do not guarantee availability of the whole chunk of data in one shot. Data could trickle in byte by byte. (c) Single read gives multiple messages We solve the problems as follows (a) First do a read with the buffer on stack. Query the underlying messaging object whether the message has any incomplete portion. If so, data will be copied into new buffer being able to hold full message and is queued; succeeding events will read data from socket and write directly into this buffer. Otherwise, if if the message in local buffer is complete, we free the handle and then send the message to the higher layers of the ORB for processing. (b) If buffer with incomplete message has been enqueued, while trying to do the above, the reactor will call us back when the handle becomes read ready. The read-operation will copy data directly into the enqueued buffer. If the message has bee read completely the message is sent to the higher layers of the ORB for processing. (c) If we get multiple messages (possible if the client connected to the server sends oneways or AMI requests), we parse and split the messages. Every message is put in the queue. Once the messages are queued, the thread picks up one message to send to the higher layers of the ORB. Before doing that, if it finds more messages, it sends a notify to the reactor without resuming the handle. The next thread picks up a message from the queue and processes that. Once the queue is drained the last thread resumes the handle. We could use the outgoing path of the ORB to send replies. This would allow us to reuse most of the code in the outgoing data path. We were doing this till TAO-1.2.3. 
We ran into problems. When writing the reply, the ORB gets flow-controlled, and the ORB tries to flush the message by going into the reactor. This resulted in unnecessary nesting. The thread that gets into the Reactor could potentially handle other messages (incoming or outgoing), and the stack starts growing, leading to crashes. The solution that we (plan to) adopt is pretty straightforward. The thread sending replies will not block to send the replies but will queue the replies and return to the Reactor. (Note the careful usage of the terms "blocking in the Reactor" as opposed to "returning back to the Reactor".) See Also: Default creator; requires the tag value be supplied. Destructor. Memory management routines. Forwards to event handler. Allocate a partial message block and store it in our partial_message_ data member. Use the Transport's codeset factories to set the translator for input and output CDRs. Get the bidirectional flag. Set the bidirectional flag. Set the Cache Map entry. Get the Cache Map entry. Can the transport be purged? Cancel handle_output() callbacks. CodeSet negotiation - Get the char codeset translator factory. CodeSet negotiation - Set the char codeset translator factory. Check if the buffering constraints have been reached. Clean up the queue. Exactly byte_count bytes have been sent; the queue must be cleaned up, as potentially several messages have been completely sent out. It leaves on head_ the next message to send out. Clean up the complete queue. Call the implementation method after obtaining the lock. Get the connection handler for this transport. These classes need privileged access to: Implemented in TAO_IIOP_Transport. Send some of the data in the queue. As the outgoing data is drained, this method is invoked to send as much of the current message as possible. A helper routine used in drain_queue_i(). Implement drain_queue() assuming the lock is held. Return the event handler used to receive notifications from the Reactor.
Normally a concrete TAO_Transport object has-a ACE_Event_Handler member that functions as an adapter between the ACE_Reactor framework and the TAO pluggable protocol framework. In all the protocols implemented so far, this role is fulfilled by an instance of ACE_Svc_Handler. Implemented in TAO_IIOP_Transport. Get the first request flag. Set the state of first_request_ to flag. Check if the flush timer is still pending. Format and queue a message for stream. This is a request for the transport object to write a LocateRequest header before it is sent out. This is a request for the transport object to write a request header before it sends out the request. Callback to read incoming data. The ACE_Event_Handler adapter invokes this method as part of its handle_input() operation. Once a complete message is read, the Transport class delegates to the Messaging layer to invoke the right upcall (on the server) or the TAO_Reply_Dispatcher (on the client side). All the methods relevant to the incoming data path of the ORB are defined below. Is invoked by the handle_input operation. It consolidates messages on top of the incoming_message_stack. The amount of missing data is known, and the recv operation copies data directly into the message buffer, as much as a single recv invocation provides. Is invoked by the handle_input operation. It parses new messages from the input stream or consolidates messages whose header has been partially read, the message size being unknown so far. It parses as much data as a single recv invocation provides. Is invoked by handle_input_parse_data. Parses all messages remaining in message_block. The timeout callback, invoked when any of the timers related to this transport expire. This is the only legal ACT in the current configuration.... Set and Get the identifier for this transport instance. If not set, this will return an integer representation of the this pointer for the instance on which it's called. Request is sent and the reply is received. Idle the transport now.
Request has just been sent, but the reply is not received. Idle the transport now. Re-factor computation of I/O timeouts based on operation timeouts. Depending on the wait strategy, we need to time out I/O operations or not. For example, if we are using a non-blocking strategy, we want to pass 0 to all I/O operations and rely on the ACE_NONBLOCK settings on the underlying sockets. However, for blocking strategies we want to pass the operation timeouts, to respect the application-level policies. This function was introduced as part of the fixes for bug 3647. Is this transport really connected? Return true if the tcs has been set. CodeSet negotiation. Cache management. Initializing the messaging object. This would be used by the connector side. On the acceptor side the connection handler would take care of the messaging objects. Return the messaging object that is used to format the data that needs to be sent. Accessor for the output CDR stream. Accessor for synchronizing Transport OutputCDR access. Hooks that can be overridden in concrete transports. These hooks are invoked just after connection establishment (or after a connection is fetched from the cache). The return value signifies whether the invoker should proceed with post-connection-establishment activities. Protocols like SSLIOP need this to verify whether connections already established have valid certificates. There are no pre_connect_hooks() since the transport doesn't exist before connection establishment. :-) Perform all the actions when this transport gets opened. Do what needs to be done when closing the transport. Add event handlers corresponding to transports that have the RW wait strategy to the handlers set. Called by the cache when the ORB is shutting down. Add the event handler to the handlers set. Called by the cache when the cache is closing. Cache management. Get and Set the purging order. The purging strategy uses the set version to set the purging order.
Check if there are messages pending in the queue. Check if there are messages pending in the queue. This version assumes that the lock is already held. Use with care! Queue a message for message_block. Read len bytes into buf. This method serializes on handler_lock_, guaranteeing that only one thread can execute it on the same instance concurrently. Implemented in TAO_IIOP_Transport. Accessor to recv_buffer_size_. Register the handler with the reactor. Register the handler with the reactor. This method is used by the Wait_On_Reactor strategy. The transport must register its event handler with the ORB's Reactor. Register with the reactor via the wait strategy. Remove the handler from the reactor. Print out error messages if the event handler is not valid. The flush timer expired or was explicitly cancelled; mark it as not pending. Schedule handle_output() callbacks. Write the complete Message_Block chain to the connection. This method serializes on handler_lock_, guaranteeing that only one thread can execute it on the same instance concurrently. Often the implementation simply forwards the arguments to the underlying ACE_Svc_Handler class, using the code factored out into ACE. Be careful with protocols that perform non-trivial transformations of the data, such as SSLIOP or protocols that compress the stream. This call can also fail if the transport instance is no longer associated with a connection (e.g., the connection handler closed down). In that case, it returns -1 and sets errno to ENOENT. Implemented in TAO_IIOP_Transport. Send an asynchronous message, i.e. do not block until the message is on the wire. Notify all the components inside a Transport when the underlying connection is closed. Assume the lock is held. This method formats the stream and then sends the message on the transport. Once the ORB is prepared to receive a reply (see send_request() above), and all the arguments have been marshaled, the CDR stream must be 'formatted', i.e.
the message_size field in the GIOP header can finally be set to the proper value. Implemented in TAO_IIOP_Transport. This is a very specialized interface to send a simple chain of messages through the Transport. The only place we use this interface is in GIOP_Message_Base.cpp, to send error messages (i.e., an indication that we received a malformed GIOP message) and to close the connection. Send a message block chain, assuming the lock is held. Send the contents of message_block. Implement send_message_shared() assuming the handler_lock_ is held. Send a reply message, i.e. do not block until the message is on the wire, but just return after adding it to the queue. Prepare the waiting and demuxing strategy to receive a reply for a new request. Preparing the ORB to receive the reply only once the request is completely sent opens the system to some subtle race conditions: suppose the ORB is running in a multi-threaded configuration, and thread A makes a request while thread B is using the Reactor to process all incoming requests. Thread A could be implemented as follows: 1) send the request, 2) set up the ORB to receive the reply, 3) wait for the reply. But in this case thread B may receive the reply between steps (1) and (2) and drop it as an invalid or unexpected message. Consequently the correct implementation is: 1) set up the ORB to receive the reply, 2) send the request, 3) wait for the reply. The following method encapsulates this idiom. Implemented in TAO_IIOP_Transport. A helper method used by send_synchronous_message_i() and send_reply_message_i(). Reusable code that could be used by both methods. Send a synchronous message, i.e. block until the message is on the wire. Accessor to sent_byte_count_. These classes need privileged access to: Reimplemented in TAO_IIOP_Transport. Set the flush-in-post_open flag. Transport statistics. Return the protocol tag. The OMG assigns unique tags (a 32-bit unsigned number) to each protocol.
New protocol tags can be obtained free of charge from the OMG; check the documents in corbafwd.h for more details. Extracts the list of listen points from the CDR stream. The list would have the protocol-specific details of the ListenPoints. Reimplemented in TAO_IIOP_Transport. Get the TAO_Transport_Mux_Strategy used by this object. The role of the TAO_Transport_Mux_Strategy is described in more detail in that class' documentation. Suffice it to say that the class is used to control how many threads can have pending requests over the same connection. Multiplexing multiple threads over the same connection conserves resources and is almost required for AMI, but having only one pending request per connection is more efficient and reduces the possibility of priority inversions. Helper method that returns the Transport Cache Manager. Cache management. Return true if blocking I/O should be used for sending asynchronous (AMI calls, non-blocking oneways, responses to operations, etc.) messages. This is determined based on the current flushing strategy. Return true if blocking I/O should be used for sending synchronous (two-way, reliable oneways, etc.) messages. This is determined based on the current flushing and waiting strategies. Return the TAO_Wait_Strategy used by this object. The role of the TAO_Wait_Strategy is described in more detail in that class' documentation. Suffice it to say that the ORB can wait for a reply by blocking on read(), by using the Reactor to wait for multiple events concurrently, or by using the Leader/Followers protocol. CodeSet negotiation - Get the wchar codeset translator factory. CodeSet negotiation - Set the wchar codeset translator factory. Needs privileged access to event_handler_i (). Used to check if bidirectional info has been synchronized with the peer. Have we sent any info on bidirectional information, or have we received any info regarding making the connection served by this transport bidirectional?
The flag is used as follows: + We don't want to send the bidirectional context info more than once on the connection. Why? It wastes marshalling and demarshalling time on the client. + On the server side -- once a client that has established the connection asks the server to use the connection both ways, we *don't* want the server to pack service info to the client. That is not allowed. We need a flag to prevent such a thing from happening. The value of this flag will be 0 if the client sends info and 1 if the server receives the info. Our entry in the cache. We don't own this. It is here for our convenience. We cannot just change things around. Additional member values required to support codeset translation. @Phil, I think it would be nice if we could think of a way to do the following. We have been trying to use the transport for keeping information about translator factories and such! IMHO this is the wrong encapsulation, i.e. trying to populate the transport object with these details. We should probably have a class something like TAO_Message_Property or TAO_Message_Translator or whatever (I am sure you get the idea) and encapsulate all these details. Coupling these seems odd. If I have to be more cynical, we can move this to the connection_handler, and it may make more sense with the DSCP stuff around there. Do you agree? The queue will start draining no later than queueing_deadline_ if the deadline is First_request_ is true until the first request is sent or received. This is necessary since codeset context information is necessary only on the first request. After that, the translators are fixed for the life of the connection. Indicate that flushing needs to be done in post_open(). The timer ID. Lock that ensures that activities that *might* use handler-related resources (such as a connection handler) get serialized. This is an ACE_Lock that gets initialized from TAO_ORB_Core::resource_factory()->create_cached_connection_lock().
This way, one can use a lock appropriate for the type of system, i.e., a null lock for single-threaded systems and a real lock for multi-threaded systems. Implement the outgoing data queue. A unique identifier for the transport. This *never* changes over the lifespan, so we don't have to worry about locking it. HINT: Protocol-specific transports that use a connection handler might choose to set this to the handle for their connection. Queue of the consolidated incoming messages. Stack of incoming fragments; consolidated messages are going to be enqueued in "incoming_message_queue_". Is this transport really connected or not? In the case of oneways with the SYNC_NONE policy, we don't wait until the connection is ready, and we buffer the requests in this transport until the connection is ready. Our messaging object. Global orbcore resource. Lock for synchronizing Transport OutputCDR access. Used by the LRU, LFU and FIFO Connection Purging Strategies. Size of the buffer received. Number of bytes sent. Statistics. The tcs_set_ flag indicates that negotiation has occurred and so the translators are correct, since a null translator is valid if both ends are using the same codeset, whatever that codeset might be. Strategy to decide whether multiple requests can be sent over the same connection or the connection is exclusive for a request. The adapter used to receive timeout callbacks from the Reactor. Strategy for waiting for the reply after sending the request.
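The incoming-data-path scheme documented above, namely reading into a local buffer, queueing a message whose remainder has not arrived yet (case b), and splitting a single read that yields several messages (case c), can be sketched with a toy length-prefixed protocol in Python. This models only the consolidation bookkeeping; it is not TAO's GIOP code, and the class and field names are invented:

```python
# Toy consolidation loop for a length-prefixed protocol: each message is a
# 2-byte big-endian length followed by that many payload bytes. feed() may
# see a partial message (data trickling in byte by byte) or several
# messages in one read; complete messages are queued for the upper layer.
import struct

class Consolidator:
    def __init__(self):
        self.partial = b""    # bytes of a message that is not complete yet
        self.complete = []    # fully consolidated messages, in order

    def feed(self, chunk):
        self.partial += chunk
        while len(self.partial) >= 2:
            (size,) = struct.unpack(">H", self.partial[:2])
            if len(self.partial) < 2 + size:
                break         # header known, body incomplete: stay queued
            self.complete.append(self.partial[2:2 + size])
            self.partial = self.partial[2 + size:]

c = Consolidator()
c.feed(b"\x00\x03ab")         # partial: header says 3 bytes, only 2 arrived
c.feed(b"c\x00\x01d")         # completes "abc" and delivers "d" in one read
print(c.complete)             # [b'abc', b'd']
```

The real code avoids the repeated copying shown here (once the missing byte count is known, later reads write directly into the queued buffer), but the state machine is the same.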
http://www.dre.vanderbilt.edu/Doxygen/Stable/libtao-doc/a00518.html
Function

Extracts a component from a Field.

Syntax

#include <dx/dx.h>

Object DXExtract(Object o, char *name)

Functional Details

For each Field in Object o, the routine returns the Object specified by name (typically an Array). Object o can be a simple Field or any Object that can contain Fields (e.g., Groups or Series). If Object o is a single Field, a single Object is returned (typically an Array). If Object o is anything else, the Object hierarchy is preserved, and each Field is replaced by component name.

Return Value

Returns o, or returns NULL and sets an error code. It is an error if no component of the specified name is found in any Field of o.

See Also

DXExists, DXGetComponentValue, DXInsert, DXRemove, DXRename, DXReplace, DXSwap

12.10, "Component Manipulation".
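To picture the Functional Details, the hierarchy walk can be sketched in Python, with dicts standing in for Fields and lists standing in for Groups. This is only an illustration of the documented semantics, not Data Explorer's C implementation, and the error condition shown (no Field anywhere carries the component) is one reading of the sentence above:

```python
# Invented stand-ins for Data Explorer objects: a Field is a dict of
# components; a Group is a list of child objects. extract() mirrors the
# documented behavior: preserve the hierarchy, replace each Field by the
# named component, and error if the component is found in no Field at all.
def extract(obj, name, _found=None):
    top = _found is None
    found = [] if _found is None else _found
    if isinstance(obj, dict):                   # a Field
        if name in obj:
            found.append(True)
            return obj[name]
        return None                             # Field lacking the component
    result = [extract(child, name, found) for child in obj]   # a Group
    if top and not found:
        raise ValueError("component %r not found in any Field" % name)
    return result

group = [{"data": [1, 2]}, [{"data": [3]}, {"positions": [0.5]}]]
print(extract(group, "data"))   # [[1, 2], [[3], None]]
```

Note how the Group structure is preserved while each Field collapses to its "data" component, matching the description for non-Field inputs.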
http://www.cc.gatech.edu/scivis/dx/pages/progu141.htm
#include "config.h"
#include <assert.h>
#include <ctype.h>
#include <errno.h>
#include <inttypes.h>
#include <limits.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>
#include <wchar.h>
#include "mutt/lib.h"
#include "config/lib.h"
#include "email/lib.h"
#include "core/lib.h"
#include "alias/lib.h"
#include "gui/lib.h"
#include "mutt.h"
#include "lib.h"
#include "index/lib.h"
#include "menu/lib.h"
#include "ncrypt/lib.h"
#include "question/lib.h"
#include "send/lib.h"
#include "commands.h"
#include "context.h"
#include "format_flags.h"
#include "hdrline.h"
#include "hook.h"
#include "keymap.h"
#include "mutt_attach.h"
#include "mutt_globals.h"
#include "mutt_header.h"
#include "mutt_logging.h"
#include "mutt_mailbox.h"
#include "muttlib.h"
#include "mx.h"
#include "opcodes.h"
#include "options.h"
#include "private_data.h"
#include "protos.h"
#include "recvattach.h"
#include "recvcmd.h"
#include "status.h"
#include "sidebar/lib.h"
#include "nntp/lib.h"
#include "nntp/mdata.h"
#include <libintl.h>

Go to the source code of this file (dlg_pager.c). No flags are set. Definition at line 96 of file dlg_pager.c. Turn off colours and attributes. Definition at line 97 of file dlg_pager.c. Blinking text. Definition at line 98 of file dlg_pager.c. Bold text. Definition at line 99 of file dlg_pager.c. Underlined text. Definition at line 100 of file dlg_pager.c. Reverse video. Definition at line 101 of file dlg_pager.c. Use colours. Definition at line 102 of file dlg_pager.c. Definition at line 95 of file dlg_pager.c. Check that the pager is in the correct mode. Definition at line 241 of file dlg_pager.c. Check that the mailbox is writable. Definition at line 259 of file dlg_pager.c. Check that attach message mode is on. Definition at line 279 of file dlg_pager.c. Check that the mailbox has the requested ACL flags set. Definition at line 300 of file dlg_pager.c. Check for an email signature.
Definition at line 322 of file dlg_pager.c. Definition at line 362 of file dlg_pager.c. Set the colour for a line of text. Definition at line 384 of file dlg_pager.c. Add a new Line to the array. Definition at line 517 of file dlg_pager.c. Create a new quoting colour. Definition at line 540 of file dlg_pager.c. Insert a new quote colour class into a list. Definition at line 553 of file dlg_pager.c. Free a quote list. Definition at line 592 of file dlg_pager.c. Find a style for a string. Definition at line 616 of file dlg_pager.c. Check that the unique marker is present. Definition at line 928 of file dlg_pager.c. Check that the unique marker is present. Definition at line 944 of file dlg_pager.c. Check that the unique marker is present. Definition at line 954 of file dlg_pager.c. Is a line of message text a quote? Checks if the line matches $quote_regex and doesn't match $smileys. This is used by the pager for calling classify_quote. Definition at line 968 of file dlg_pager.c. Determine the style for a line of text. Definition at line 1014 of file dlg_pager.c. Is this an ANSI escape sequence? Definition at line 1282 of file dlg_pager.c. Parse an ANSI escape sequence. Definition at line 1296 of file dlg_pager.c. Fill a buffer from a file. Definition at line 1448 of file dlg_pager.c. Display a line of text in the pager. Definition at line 1497 of file dlg_pager.c. Print a line on screen. Definition at line 1692 of file dlg_pager.c. Reposition the pager's view up by n lines. Definition at line 1980 of file dlg_pager.c. Reset the pager's viewing position. Definition at line 1995 of file dlg_pager.c. Queue a request for a redraw. Definition at line 2006 of file dlg_pager.c. Redraw the pager window. Definition at line 2016 of file dlg_pager.c. Determine help mapping based on pager mode and mailbox type. Definition at line 2263 of file dlg_pager.c. Make sure the bottom line is displayed. Definition at line 2301 of file dlg_pager.c. Is it time to mark the message read?
Definition at line 2329 of file dlg_pager.c. Display an email, attachment, or help, in a window. This pager is actually not so simple as it once was. But it will be again. Currently it operates in 3 modes: Definition at line 2360 of file dlg_pager.c.
This lecture studies optimal fiscal policy in a linear quadratic setting. We slightly modify a well-known model of Robert Lucas and Nancy Stokey [LS83] so that convenient formulas for solving linear-quadratic models can be applied to simplify the calculations. The economy consists of a representative household and a benevolent government. The government finances an exogenous stream of government purchases with state-contingent loans and a linear tax on labor income. A linear tax is sometimes called a flat-rate tax. The household maximizes utility by choosing paths for consumption and labor, taking prices and the government’s tax rate and borrowing plans as given. Maximum attainable utility for the household depends on the government’s tax and borrowing plans. The Ramsey problem [Ram27] is to choose tax and borrowing plans that maximize the household’s welfare, taking the household’s optimizing behavior as given. There is a large number of competitive equilibria indexed by different government fiscal policies. The Ramsey planner chooses the best competitive equilibrium. We want to study the dynamics of tax rates, tax revenues, and government debt under a Ramsey plan. Because the Lucas and Stokey model features state-contingent government debt, the government debt dynamics differ substantially from those in a model of Robert Barro [Bar79]. The treatment given here closely follows this manuscript, prepared by Thomas J. Sargent and Francois R. Velde. We cover only the key features of the problem in this lecture, leaving you to refer to that source for additional results and intuition. Technology¶ Labor can be converted one-for-one into a single, non-storable consumption good. In the usual spirit of the LQ model, the amount of labor supplied in each period is unrestricted. This is unrealistic, but helpful when it comes to solving the model. Realistic labor supply can be induced by suitable parameter values.
Households¶ Consider a representative household who chooses a path $ \{\ell_t, c_t\} $ for labor and consumption to maximize $$ -\mathbb E \frac{1}{2} \sum_{t=0}^{\infty} \beta^t \left[ (c_t - b_t)^2 + \ell_t^2 \right] \tag{1} $$ subject to the budget constraint $$ \mathbb E \sum_{t=0}^{\infty} \beta^t p^0_t \left[ d_t + (1 - \tau_t) \ell_t + s_t - c_t \right] = 0 \tag{2} $$ Here - $ \beta $ is a discount factor in $ (0, 1) $. - $ p_t^0 $ is a scaled Arrow-Debreu price at time $ 0 $ of history contingent goods at time $ t $. - $ b_t $ is a stochastic preference parameter. - $ d_t $ is an endowment process. - $ \tau_t $ is a flat tax rate on labor income. - $ s_t $ is a promised time-$ t $ coupon payment on debt issued by the government. The scaled Arrow-Debreu price $ p^0_t $ is related to the unscaled Arrow-Debreu price as follows. If we let $ \pi^0_t(x^t) $ denote the probability (density) of a history $ x^t = [x_t, x_{t-1}, \ldots, x_0] $ of the state process, then the Arrow-Debreu time $ 0 $ price of a claim on one unit of consumption at date $ t $, history $ x^t $ would be$$ \frac{\beta^t p^0_t} {\pi_t^0(x^t)} $$ Thus, the ordinary Arrow-Debreu price is our scaled price multiplied by the discount factor $ \beta^t $ and divided by an appropriate probability. The budget constraint (2) requires that the present value of consumption be restricted to equal the present value of endowments, labor income, and coupon payments on bond holdings. Government¶ The government imposes a linear tax on labor income, fully committing to a stochastic path of tax rates at time zero. The government also issues state-contingent debt. Given government tax and borrowing plans, we can construct a competitive equilibrium with distorting government taxes. Among all such competitive equilibria, the Ramsey plan is the one that maximizes the welfare of the representative consumer.
Exogenous Variables¶ Endowments, government expenditure, the preference shock process $ b_t $, and promised coupon payments on initial government debt $ s_t $ are all exogenous, and given by - $ d_t = S_d x_t $ - $ g_t = S_g x_t $ - $ b_t = S_b x_t $ - $ s_t = S_s x_t $ The matrices $ S_d, S_g, S_b, S_s $ are primitives and $ \{x_t\} $ is an exogenous stochastic process taking values in $ \mathbb R^k $. We consider two specifications for $ \{x_t\} $. - Discrete case: $ \{x_t\} $ is a discrete state Markov chain with transition matrix $ P $. - VAR case: $ \{x_t\} $ obeys $ x_{t+1} = A x_t + C w_{t+1} $ where $ \{w_t\} $ is independent zero-mean Gaussian with identity covariance matrix. Equilibrium¶ An equilibrium is a feasible allocation $ \{\ell_t, c_t\} $, a sequence of prices $ \{p_t^0\} $, and a tax system $ \{\tau_t\} $ such that - The allocation $ \{\ell_t, c_t\} $ is optimal for the household given $ \{p_t^0\} $ and $ \{\tau_t\} $. - The government’s budget constraint (4), which equates the present value of tax revenues to the present value of government purchases plus coupon payments, is satisfied. The Ramsey problem is to choose the equilibrium $ \{\ell_t, c_t, \tau_t, p_t^0\} $ that maximizes the household’s welfare. If $ \{\ell_t, c_t, \tau_t, p_t^0\} $ solves the Ramsey problem, then $ \{\tau_t\} $ is called the Ramsey plan. The solution procedure we adopt is - Use the first-order conditions from the household problem to pin down prices and allocations given $ \{\tau_t\} $. - Use these expressions to rewrite the government budget constraint (4) in terms of exogenous variables and allocations. - Maximize the household’s objective function (1) subject to the constraint constructed in step 2 and the feasibility constraint (3). The solution to this maximization problem pins down all quantities of interest. Solution¶ Step one is to obtain the first-order conditions for the household’s problem, taking taxes and prices as given. Letting $ \mu $ be the Lagrange multiplier on (2), the first-order conditions are $ p_t^0 = (b_t - c_t) / \mu $ and $ \ell_t = (b_t - c_t) (1 - \tau_t) $.
Rearranging and normalizing at $ \mu = b_0 - c_0 $, we can write these conditions as $$ p_t^0 = \frac{b_t - c_t}{b_0 - c_0} \quad \text{and} \quad \tau_t = 1 - \frac{\ell_t}{b_t - c_t} \tag{5} $$ Substituting (5) into the government’s budget constraint (4) yields $$ \mathbb E \sum_{t=0}^{\infty} \beta^t \left[ (b_t - c_t)(s_t + g_t - \ell_t) + \ell_t^2 \right] = 0 \tag{6} $$ The Ramsey problem now amounts to maximizing (1) subject to (6) and (3). The associated Lagrangian is $$ \mathscr L = \mathbb E \sum_{t=0}^{\infty} \beta^t \left\{ -\frac{1}{2} \left[ (c_t - b_t)^2 + \ell_t^2 \right] + \lambda \left[ (b_t - c_t)(\ell_t - s_t - g_t) - \ell_t^2 \right] + \mu_t [d_t + \ell_t - c_t - g_t] \right\} \tag{7} $$ The first-order conditions associated with $ c_t $ and $ \ell_t $ are$$ -(c_t - b_t ) + \lambda [- \ell_t + (g_t + s_t )] = \mu_t $$ and$$ \ell_t - \lambda [(b_t - c_t) - 2 \ell_t ] = \mu_t $$ Combining these last two equalities with (3) and working through the algebra, one can show that $$ \ell_t = \bar \ell_t - \nu m_t \quad \text{and} \quad c_t = \bar c_t - \nu m_t \tag{8} $$ where - $ \nu := \lambda / (1 + 2 \lambda) $ - $ \bar \ell_t := (b_t - d_t + g_t) / 2 $ - $ \bar c_t := (b_t + d_t - g_t) / 2 $ - $ m_t := (b_t - d_t - s_t ) / 2 $ Apart from $ \nu $, all of these quantities are expressed in terms of exogenous variables. To solve for $ \nu $, we can use the government’s budget constraint again. The term inside the brackets in (6) is $ (b_t - c_t)(s_t + g_t) - (b_t - c_t) \ell_t + \ell_t^2 $. 
Using (8), the definitions above and the fact that $ \bar \ell = b - \bar c $, this term can be rewritten as$$ (b_t - \bar c_t) (g_t + s_t ) + 2 m_t^2 ( \nu^2 - \nu) $$ Reinserting into (6), we get $$ \mathbb E \left\{ \sum_{t=0}^{\infty} \beta^t (b_t - \bar c_t) (g_t + s_t ) \right\} + ( \nu^2 - \nu) \mathbb E \left\{ \sum_{t=0}^{\infty} \beta^t 2 m_t^2 \right\} = 0 \tag{9} $$ Although it might not be clear yet, we are nearly there because: - The two expectations terms in (9) can be solved for in terms of model primitives. - This in turn allows us to solve for the Lagrange multiplier $ \nu $. - With $ \nu $ in hand, we can go back and solve for the allocations via (8). - Once we have the allocations, prices and the tax system can be derived from (5). Computing the Quadratic Term¶ Let’s consider how to obtain the term $ \nu $ in (9). If we can compute the two expected geometric sums $$ b_0 := \mathbb E \left\{ \sum_{t=0}^{\infty} \beta^t (b_t - \bar c_t) (g_t + s_t ) \right\} \quad \text{and} \quad a_0 := \mathbb E \left\{ \sum_{t=0}^{\infty} \beta^t 2 m_t^2 \right\} \tag{10} $$ then the problem reduces to solving$$ b_0 + a_0 (\nu^2 - \nu) = 0 $$ for $ \nu $. Provided that $ 4 b_0 < a_0 $, there is a unique solution $ \nu \in (0, 1/2) $, and a unique corresponding $ \lambda > 0 $. Let’s work out how to compute mathematical expectations in (10). For the first one, the random variable $ (b_t - \bar c_t) (g_t + s_t ) $ inside the summation can be expressed as$$ \frac{1}{2} x_t' (S_b - S_d + S_g)' (S_g + S_s) x_t $$ For the second expectation in (10), the random variable $ 2 m_t^2 $ can be written as$$ \frac{1}{2} x_t' (S_b - S_d - S_s)' (S_b - S_d - S_s) x_t $$ It follows that both objects of interest are special cases of the expression $$ q(x_0) = \mathbb E \sum_{t=0}^{\infty} \beta^t x_t' H x_t \tag{11} $$ where $ H $ is a matrix conformable to $ x_t $ and $ x_t' $ is the transpose of column vector $ x_t $. 
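Before specializing to the two cases below, it may help to see the geometric-sum structure of (11) numerically. Here is a minimal sketch with a toy two-state chain and an arbitrary payoff vector (the numbers are illustrative, not model primitives); it checks a brute-force truncation of the discounted sum against the resolvent $ (I - \beta P)^{-1} h $ used in the finite state Markov case below.

```python
import numpy as np

# Toy two-state Markov chain and payoff vector (illustrative numbers only)
β = 0.95
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
h = np.array([1.0, 2.0])              # h[j] = h(x^j)

# Brute-force truncation of q = Σ_t β^t P^t h
T = 2000
q_trunc = np.zeros(2)
Pt_h = h.copy()
for t in range(T):
    q_trunc += β**t * Pt_h
    Pt_h = P @ Pt_h                   # advance P^t h -> P^{t+1} h

# Closed form: q = (I - βP)^{-1} h
q_closed = np.linalg.solve(np.eye(2) - β * P, h)

print(np.max(np.abs(q_trunc - q_closed)))   # essentially zero
```

The $ j $-th entry of either vector is $ q(x_0) $ for $ x_0 = x^j $.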
Suppose first that $ \{x_t\} $ is the Gaussian VAR described above. In this case, the formula for computing $ q(x_0) $ is known to be $ q(x_0) = x_0' Q x_0 + v $, where - $ Q $ is the solution to $ Q = H + \beta A' Q A $, and - $ v = \text{trace} \, (C' Q C) \beta / (1 - \beta) $ The first equation is known as a discrete Lyapunov equation and can be solved with a standard routine such as scipy.linalg.solve_discrete_lyapunov (passing $ \sqrt{\beta} A' $ as the transition matrix and $ H $ as the constant term). Finite State Markov Case¶ Next, suppose that $ \{x_t\} $ is the discrete Markov process described above. Suppose further that each $ x_t $ takes values in the state space $ \{x^1, \ldots, x^N\} \subset \mathbb R^k $. Let $ h \colon \mathbb R^k \to \mathbb R $ be a given function, and suppose that we wish to evaluate$$ q(x_0) = \mathbb E \sum_{t=0}^{\infty} \beta^t h(x_t) \quad \text{given} \quad x_0 = x^j $$ For example, in the discussion above, $ h(x_t) = x_t' H x_t $. It is legitimate to pass the expectation through the sum, leading to $$ q(x_0) = \sum_{t=0}^{\infty} \beta^t (P^t h)[j] \tag{12} $$ Here - $ P^t $ is the $ t $-th power of the transition matrix $ P $. - $ h $ is, with some abuse of notation, the vector $ (h(x^1), \ldots, h(x^N)) $. - $ (P^t h)[j] $ indicates the $ j $-th element of $ P^t h $. It can be shown that (12) is in fact equal to the $ j $-th element of the vector $ (I - \beta P)^{-1} h $. This last fact is applied in the calculations below. Other Variables¶ We are interested in tracking several other variables besides the ones described above. To prepare the way for this, we define$$ p^t_{t+j} = \frac{b_{t+j}- c_{t+j}}{b_t - c_t} $$ as the scaled Arrow-Debreu time $ t $ price of a history contingent claim on one unit of consumption at time $ t+j $. These are prices that would prevail at time $ t $ if markets were reopened at time $ t $.
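The reopened-market prices defined just above can be computed directly from paths of $ b_t $ and $ c_t $. A minimal sketch (the paths here are arbitrary toy numbers, not model output), which also checks the chaining identity $ p^t_{t+j} = p^t_{t+1} p^{t+1}_{t+j} $ used below:

```python
import numpy as np

# Toy paths for the preference shock b_t and consumption c_t
b = np.array([2.1, 2.1, 2.1, 2.1, 2.1])
c = np.array([1.0, 1.1, 0.9, 1.2, 1.0])

t = 1
# p^t_{t+j} = (b_{t+j} - c_{t+j}) / (b_t - c_t), for j = 0, 1, ...
p1 = (b[t:] - c[t:]) / (b[t] - c[t])            # viewpoint of t = 1
p2 = (b[t+1:] - c[t+1:]) / (b[t+1] - c[t+1])    # viewpoint of t = 2

print(p1[0])                          # 1.0: date-t claim priced at date t
print(np.isclose(p1[2], p1[1] * p2[1]))   # True: p^1_3 = p^1_2 * p^2_3
```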
These prices are constituents of the present value of government obligations outstanding at time $ t $, which can be expressed as $$ B_t := \mathbb E_t \sum_{j=0}^{\infty} \beta^j p^t_{t+j} (\tau_{t+j} \ell_{t+j} - g_{t+j}) \tag{13} $$ Using our expression for prices and the Ramsey plan, we can also write $ B_t $ as$$ B_t = \mathbb E_t \sum_{j=0}^{\infty} \beta^j \frac{ (b_{t+j} - c_{t+j})(\ell_{t+j} - g_{t+j}) - \ell^2_{t+j} } { b_t - c_t } $$ This version is more convenient for computation. Using the equation$$ p^t_{t+j} = p^t_{t+1} p^{t+1}_{t+j} $$ it is possible to verify that (13) implies that$$ B_t = (\tau_t \ell_t - g_t) + E_t \sum_{j=1}^\infty \beta^j p^t_{t+j} (\tau_{t+j} \ell_{t+j} - g_{t+j}) $$ and $$ B_t = (\tau_t \ell_t - g_t) + \beta E_t p^t_{t+1} B_{t+1} \tag{14} $$ Define $$ R^{-1}_{t} := \beta \, \mathbb E_t \, p^t_{t+1} \tag{15} $$ $ R_{t} $ is the gross $ 1 $-period risk-free rate for loans between $ t $ and $ t+1 $. A Martingale¶ We now want to study the following two objects, namely,$$ \pi_{t+1} := B_{t+1} - R_t [B_t - (\tau_t \ell_t - g_t)] $$ and the cumulation of $ \pi_t $$$ \Pi_t := \sum_{s=0}^t \pi_s $$ The term $ \pi_{t+1} $ is the difference between two quantities: - $ B_{t+1} $, the value of government debt at the start of period $ t+1 $. - $ R_t [B_t + g_t - \tau_t \ell_t] $, which is what the government would have owed at the beginning of period $ t+1 $ if it had simply borrowed at the one-period risk-free rate rather than selling state-contingent securities. Thus, $ \pi_{t+1} $ is the excess payout on the actual portfolio of state-contingent government debt relative to an alternative portfolio sufficient to finance $ B_t + g_t - \tau_t \ell_t $ and consisting entirely of risk-free one-period bonds.
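The excess payout and its cumulation can be computed mechanically from simulated series. Here is a sketch that follows the displayed definition of $ \pi_{t+1} $, with arbitrary toy arrays standing in for model output:

```python
import numpy as np

# Toy series: debt B_t, gross risk-free rate R_t, revenue τ_t ℓ_t, spending g_t
B   = np.array([1.00, 1.05, 0.98, 1.10])
R   = np.array([1.02, 1.03, 1.01, 1.02])
rvn = np.array([0.30, 0.28, 0.31, 0.29])   # τ_t ℓ_t
g   = np.array([0.25, 0.27, 0.26, 0.25])

# π_{t+1} = B_{t+1} - R_t [B_t - (τ_t ℓ_t - g_t)]
π = B[1:] - R[:-1] * (B[:-1] - (rvn[:-1] - g[:-1]))

# Π_t cumulates the excess payouts
Π = np.cumsum(π)
```

With model output in hand, the same two lines apply to the simulated debt, rate, revenue and spending series.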
Use expressions (14) and (15) to obtain$$ \pi_{t+1} = B_{t+1} - \frac{1}{\beta E_t p^t_{t+1}} \left[\beta E_t p^t_{t+1} B_{t+1} \right] $$ or $$ \pi_{t+1} = B_{t+1} - \tilde E_t B_{t+1} \tag{16} $$ where $ \tilde E_t $ is the conditional mathematical expectation taken with respect to a one-step transition density that has been formed by multiplying the original transition density with the likelihood ratio$$ m^t_{t+1} = \frac{p^t_{t+1}}{E_t p^t_{t+1}} $$ It follows from equation (16) that$$ \tilde E_t \pi_{t+1} = \tilde E_t B_{t+1} - \tilde E_t B_{t+1} = 0 $$ which asserts that $ \{\pi_{t+1}\} $ is a martingale difference sequence under the distorted probability measure, and that $ \{\Pi_t\} $ is a martingale under the distorted probability measure. In the tax-smoothing model of Robert Barro [Bar79], government debt is a random walk. In the current model, government debt $ \{B_t\} $ is not a random walk, but the excess payoff $ \{\Pi_t\} $ on it is. import sys import numpy as np from numpy import sqrt, eye, zeros, cumsum from numpy.random import randn import scipy.linalg import matplotlib.pyplot as plt from collections import namedtuple from quantecon import nullspace, mc_sample_path, var_quadratic_sum # == Set up a namedtuple to store data on the model economy == # Economy = namedtuple('economy', ('β', # Discount factor 'Sg', # Govt spending selector matrix 'Sd', # Exogenous endowment selector matrix 'Sb', # Utility parameter selector matrix 'Ss', # Coupon payments selector matrix 'discrete', # Discrete or continuous -- boolean 'proc')) # Stochastic process parameters # == Set up a namedtuple to store return values for compute_paths() == # Path = namedtuple('path', ('g', 'd', 'b', 's', 'c', 'l', 'p', 'τ', 'rvn', 'B', 'R', 'π', 'Π', 'ξ')) def compute_paths(T, econ): """ Compute simulated time paths for exogenous and endogenous variables.
Parameters =========== T: int Length of the simulation econ: a namedtuple of type 'Economy', containing β - Discount factor Sg - Govt spending selector matrix Sd - Exogenous endowment selector matrix Sb - Utility parameter selector matrix Ss - Coupon payments selector matrix discrete - Discrete exogenous process (True or False) proc - Stochastic process parameters Returns ======== path: a namedtuple of type 'Path', containing The corresponding values are flat numpy ndarrays. """ # == Simplify names == # β, Sg, Sd, Sb, Ss = econ.β, econ.Sg, econ.Sd, econ.Sb, econ.Ss if econ.discrete: P, x_vals = econ.proc else: A, C = econ.proc # == Simulate the exogenous process x == # if econ.discrete: state = mc_sample_path(P, init=0, sample_size=T) x = x_vals[:, state] else: # == Generate an initial condition x0 satisfying x0 = A x0 == # nx, nx = A.shape x0 = nullspace((eye(nx) - A)) x0 = -x0 if (x0[nx-1] < 0) else x0 x0 = x0 / x0[nx-1] # == Generate a time series x of length T starting from x0 == # nx, nw = C.shape x = zeros((nx, T)) w = randn(nw, T) x[:, 0] = x0.T for t in range(1, T): x[:, t] = A @ x[:, t-1] + C @ w[:, t] # == Compute exogenous variable sequences == # g, d, b, s = ((S @ x).flatten() for S in (Sg, Sd, Sb, Ss)) # == Solve for Lagrange multiplier in the govt budget constraint == # # In fact we solve for ν = lambda / (1 + 2*lambda). Here ν is the # solution to a quadratic equation a(ν**2 - ν) + b = 0 where # a and b are expected discounted sums of quadratic forms of the state. 
Sm = Sb - Sd - Ss # == Compute a and b == # if econ.discrete: ns = P.shape[0] F = scipy.linalg.inv(eye(ns) - β * P) a0 = 0.5 * (F @ (x_vals.T @ Sm.T)**2)[0] H = ((Sb - Sd + Sg) @ x_vals) * ((Sg + Ss) @ x_vals) b0 = 0.5 * (F @ H.T)[0] a0, b0 = float(a0), float(b0) else: H = Sm.T @ Sm a0 = 0.5 * var_quadratic_sum(A, C, H, β, x0) H = (Sb - Sd + Sg).T @ (Sg + Ss) b0 = 0.5 * var_quadratic_sum(A, C, H, β, x0) # == Test that ν has a real solution before assigning == # warning_msg = """ Hint: you probably set government spending too {}. Elect a {} Congress and start over. """ disc = a0**2 - 4 * a0 * b0 if disc >= 0: ν = 0.5 * (a0 - sqrt(disc)) / a0 else: print("There is no Ramsey equilibrium for these parameters.") print(warning_msg.format('high', 'Republican')) sys.exit(0) # == Test that the Lagrange multiplier has the right sign == # if ν * (0.5 - ν) < 0: print("Negative multiplier on the government budget constraint.") print(warning_msg.format('low', 'Democratic')) sys.exit(0) # == Solve for the allocation given ν and x == # Sc = 0.5 * (Sb + Sd - Sg - ν * Sm) Sl = 0.5 * (Sb - Sd + Sg - ν * Sm) c = (Sc @ x).flatten() l = (Sl @ x).flatten() p = ((Sb - Sc) @ x).flatten() # Price without normalization τ = 1 - l / (b - c) rvn = l * τ # == Compute remaining variables == # if econ.discrete: H = ((Sb - Sc) @ x_vals) * ((Sl - Sg) @ x_vals) - (Sl @ x_vals)**2 temp = (F @ H.T).flatten() B = temp[state] / p H = (P[state, :] @ x_vals.T @ (Sb - Sc).T).flatten() R = p / (β * H) temp = ((P[state, :] @ x_vals.T @ (Sb - Sc).T)).flatten() ξ = p[1:] / temp[:T-1] else: H = Sl.T @ Sl - (Sb - Sc).T @ (Sl - Sg) L = np.empty(T) for t in range(T): L[t] = var_quadratic_sum(A, C, H, β, x[:, t]) B = L / p Rinv = (β * ((Sb - Sc) @ A @ x)).flatten() / p R = 1 / Rinv AF1 = (Sb - Sc) @ x[:, 1:] AF2 = (Sb - Sc) @ A @ x[:, :T-1] ξ = AF1 / AF2 ξ = ξ.flatten() π = B[1:] - R[:T-1] * B[:T-1] - rvn[:T-1] + g[:T-1] Π = cumsum(π * ξ) # == Prepare return values == # path = Path(g=g, d=d, b=b, s=s, c=c, l=l, p=p,
τ=τ, rvn=rvn, B=B, R=R, π=π, Π=Π, ξ=ξ) return path def gen_fig_1(path): """ The parameter is the path namedtuple returned by compute_paths(). See the docstring of that function for details. """ T = len(path.c) # == Prepare axes == # num_rows, num_cols = 2, 2 fig, axes = plt.subplots(num_rows, num_cols, figsize=(14, 10)) plt.subplots_adjust(hspace=0.4) for i in range(num_rows): for j in range(num_cols): axes[i, j].grid() axes[i, j].set_xlabel('Time') bbox = (0., 1.02, 1., .102) legend_args = {'bbox_to_anchor': bbox, 'loc': 3, 'mode': 'expand'} p_args = {'lw': 2, 'alpha': 0.7} # == Plot consumption, govt expenditure and revenue == # ax = axes[0, 0] ax.plot(path.rvn, label=r'$\tau_t \ell_t$', **p_args) ax.plot(path.g, label='$g_t$', **p_args) ax.plot(path.c, label='$c_t$', **p_args) ax.legend(ncol=3, **legend_args) # == Plot govt expenditure and debt == # ax = axes[0, 1] ax.plot(list(range(1, T+1)), path.rvn, label=r'$\tau_t \ell_t$', **p_args) ax.plot(list(range(1, T+1)), path.g, label='$g_t$', **p_args) ax.plot(list(range(1, T)), path.B[1:T], label='$B_{t+1}$', **p_args) ax.legend(ncol=3, **legend_args) # == Plot risk-free return == # ax = axes[1, 0] ax.plot(list(range(1, T+1)), path.R - 1, label='$R_t - 1$', **p_args) ax.legend(ncol=1, **legend_args) # == Plot revenue, expenditure and risk free rate == # ax = axes[1, 1] ax.plot(list(range(1, T+1)), path.rvn, label=r'$\tau_t \ell_t$', **p_args) ax.plot(list(range(1, T+1)), path.g, label='$g_t$', **p_args) axes[1, 1].plot(list(range(1, T)), path.π, label=r'$\pi_{t+1}$', **p_args) ax.legend(ncol=3, **legend_args) plt.show() def gen_fig_2(path): """ The parameter is the path namedtuple returned by compute_paths(). See the docstring of that function for details. 
""" T = len(path.c) # == Prepare axes == # num_rows, num_cols = 2, 1 fig, axes = plt.subplots(num_rows, num_cols, figsize=(10, 10)) plt.subplots_adjust(hspace=0.5) bbox = (0., 1.02, 1., .102) bbox = (0., 1.02, 1., .102) legend_args = {'bbox_to_anchor': bbox, 'loc': 3, 'mode': 'expand'} p_args = {'lw': 2, 'alpha': 0.7} # == Plot adjustment factor == # ax = axes[0] ax.plot(list(range(2, T+1)), path.ξ, label=r'$\xi_t$', **p_args) ax.grid() ax.set_xlabel('Time') ax.legend(ncol=1, **legend_args) # == Plot adjusted cumulative return == # ax = axes[1] ax.plot(list(range(2, T+1)), path.Π, label=r'$\Pi_t$', **p_args) ax.grid() ax.set_xlabel('Time') ax.legend(ncol=1, **legend_args) plt.show() The function var_quadratic_sum imported from quadsums is for computing the value of (11) when the exogenous process $ \{ x_t \} $ is of the VAR type described above. Below the definition of the function, you will see definitions of two namedtuple objects, Economy and Path. The first is used to collect all the parameters and primitives of a given LQ economy, while the second collects output of the computations. In Python, a namedtuple is a popular data type from the collections module of the standard library that replicates the functionality of a tuple, but also allows you to assign a name to each tuple element. These elements can then be references via dotted attribute notation — see for example the use of path in the functions gen_fig_1() and gen_fig_2(). The benefits of using namedtuples: - Keeps content organized by meaning. - Helps reduce the number of global variables. Other than that, our code is long but relatively straightforward. The Continuous Case¶ Our first example adopts the VAR specification described above. 
Regarding the primitives, we set - $ \beta = 1 / 1.05 $ - $ b_t = 2.135 $ and $ s_t = d_t = 0 $ for all $ t $ Government spending evolves according to$$ g_{t+1} - \mu_g = \rho (g_t - \mu_g) + C_g w_{g, t+1} $$ with $ \rho = 0.7 $, $ \mu_g = 0.35 $ and $ C_g = \mu_g \sqrt{1 - \rho^2} / 10 $. Here’s the code # == Parameters == # β = 1 / 1.05 ρ, mg = .7, .35 A = eye(2) A[0, :] = ρ, mg * (1-ρ) C = np.zeros((2, 1)) C[0, 0] = np.sqrt(1 - ρ**2) * mg / 10 Sg = np.array((1, 0)).reshape(1, 2) Sd = np.array((0, 0)).reshape(1, 2) Sb = np.array((0, 2.135)).reshape(1, 2) Ss = np.array((0, 0)).reshape(1, 2) economy = Economy(β=β, Sg=Sg, Sd=Sd, Sb=Sb, Ss=Ss, discrete=False, proc=(A, C)) T = 50 path = compute_paths(T, economy) gen_fig_1(path) The legends on the figures indicate the variables being tracked. Most obvious from the figure is tax smoothing in the sense that tax revenue is much less variable than government expenditure. gen_fig_2(path) The Discrete Case¶ Our second example adopts the discrete Markov specification described above. Here’s the code # == Parameters == # β = 1 / 1.05 P = np.array([[0.8, 0.2, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]) # == Possible states of the world == # # Each column is a state of the world. The rows are [g d b s 1] x_vals = np.array([[0.5, 0.5, 0.25], [0.0, 0.0, 0.0], [2.2, 2.2, 2.2], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]) Sg = np.array((1, 0, 0, 0, 0)).reshape(1, 5) Sd = np.array((0, 1, 0, 0, 0)).reshape(1, 5) Sb = np.array((0, 0, 1, 0, 0)).reshape(1, 5) Ss = np.array((0, 0, 0, 1, 0)).reshape(1, 5) economy = Economy(β=β, Sg=Sg, Sd=Sd, Sb=Sb, Ss=Ss, discrete=True, proc=(P, x_vals)) T = 15 path = compute_paths(T, economy) gen_fig_1(path) The call gen_fig_2(path) generates the second set of figures: gen_fig_2(path) Exercise 1¶ Modify the VAR example given above, setting$$ g_{t+1} - \mu_g = \rho (g_{t-3} - \mu_g) + C_g w_{g, t+1} $$ with $ \rho = 0.95 $ and $ C_g = 0.7 \sqrt{1 - \rho^2} $. Produce the corresponding figures.
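A standard way to handle the lag in Exercise 1 is to stack current and lagged values of $ g $ (plus a constant) into the state vector and use a companion-style transition matrix, as in the solution that follows. A quick deterministic check that such a matrix encodes $ g_{t+1} - \mu_g = \rho (g_{t-3} - \mu_g) $ (the starting values below are arbitrary):

```python
import numpy as np

ρ, μg = 0.95, 0.35

# State: x_t = (g_t, g_{t-1}, g_{t-2}, g_{t-3}, 1)'
A = np.array([[0, 0, 0, ρ, μg * (1 - ρ)],
              [1, 0, 0, 0, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 0, 1]])

# With the shock switched off, the stacked system should reproduce
# g_{t+1} = μg + ρ (g_{t-3} - μg) while the lags shift down one slot.
x = np.array([0.4, 0.38, 0.36, 0.5, 1.0])   # arbitrary starting values
x_next = A @ x

print(np.isclose(x_next[0], μg + ρ * (x[3] - μg)))   # True
print(np.allclose(x_next[1:4], x[:3]))               # True
```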
# == Parameters == # β = 1 / 1.05 ρ, mg = .95, .35 A = np.array([[0, 0, 0, ρ, mg*(1-ρ)], [1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0], [0, 0, 0, 0, 1]]) C = np.zeros((5, 1)) C[0, 0] = np.sqrt(1 - ρ**2) * mg / 8 Sg = np.array((1, 0, 0, 0, 0)).reshape(1, 5) Sd = np.array((0, 0, 0, 0, 0)).reshape(1, 5) Sb = np.array((0, 0, 0, 0, 2.135)).reshape(1, 5) # Chosen st. (Sc + Sg) * x0 = 1 Ss = np.array((0, 0, 0, 0, 0)).reshape(1, 5) economy = Economy(β=β, Sg=Sg, Sd=Sd, Sb=Sb, Ss=Ss, discrete=False, proc=(A, C)) T = 50 path = compute_paths(T, economy) gen_fig_1(path) gen_fig_2(path)
Update 18Apr While working on this other project, I needed to control the tank with this same Graupner Rx/Tx and hence needed to port this code to C# for the .NET micro framework, so that it runs on the FEZ Domino board. Here’s the code, BUT BEWARE, while it works, it’s UNRELIABLE, and generates A LOT OF NOISE. I’m not sure, but I imagine it’s due to the non-real-time nature of the managed code, which can randomly introduce delays in the pulses… or so I think, DO let me know if you believe otherwise and/or find a problem in my code… using System; using System.Threading; using Microsoft.SPOT; using Microsoft.SPOT.Hardware; using GHIElectronics.NETMF.FEZ; public delegate void RadioDelegate(sbyte[] values); public delegate void RadioSignalLostDelegate(); /** * It's here only as an EXAMPLE. * IT DOES WORK, but it's UNRELIABLE ! * */ class RadioPPM : IDisposable { const byte CHANNELS_COUNT = 6; const long SYNC_SIGNAL_LENGHT = 5 * TimeSpan.TicksPerMillisecond; // 5 milliseconds in ticks const long TICKS_PER_MICROSEC = TimeSpan.TicksPerMillisecond / 1000; const long ACCEPTABLE_PULSE_DELTA = 30 * TICKS_PER_MICROSEC; // 2 signals with less than 30 microsecs difference are considered equal const long MIN_VALID_PULSE = 900 * TICKS_PER_MICROSEC; // below this we consider it a spurious signal const long MAX_VALID_PULSE = 2100 * TICKS_PER_MICROSEC; // above this -> spurious signal const sbyte MAX_INVALID_PULSES = 5; // after this number of consecutive invalid pulses we declare the signal LOST !
const long MIN_PULSE = 1100 * TICKS_PER_MICROSEC; const long MAX_PULSE = 1900 * TICKS_PER_MICROSEC; const long MID_PULSE = (MAX_PULSE + MIN_PULSE) / 2; const long PULSE_SCALE = (MAX_PULSE - MIN_PULSE) / 2 / 100; private readonly InterruptPort _PPMport; private readonly RadioDelegate _newValuesCallback; private readonly RadioSignalLostDelegate _signalLostCallback; private bool _isSignalLost = true; sbyte _rx_ch = 0; sbyte _invalidPulsesCount = 0; long[] _currPulses = new long[CHANNELS_COUNT]; // pulses currently being read by the ISR (width in ticks for performance) long[] _readPulses = new long[CHANNELS_COUNT]; // pulses done reading, waiting for the callbackExecutor to deal with them long[] _sentPulses = new long[CHANNELS_COUNT]; long _previousEdgeTime; public RadioPPM(FEZ_Pin.Interrupt PPMpin, RadioDelegate newValuesCallback, RadioSignalLostDelegate signalLostCallback) { _newValuesCallback = newValuesCallback; _signalLostCallback = signalLostCallback; // whenever there is a raise in the PPM signal, deal with it _PPMport = new InterruptPort((Cpu.Pin)PPMpin, false, Port.ResistorMode.Disabled, Port.InterruptMode.InterruptEdgeHigh); _PPMport.OnInterrupt += new NativeEventHandler(PPM_OnInterrupt); // start the executor thread. ONLY 1 thread, rather than creating and killing them... // also the callbacks are called in this thread rather than in the interrupt ISR ! new Thread(CallbackExecutor).Start(); } private void PPM_OnInterrupt(uint port, uint state, DateTime time) { // 1. update the current pulse length and store current time for future use long currentPulse = time.Ticks - _previousEdgeTime; _previousEdgeTime = time.Ticks; // 2. is it a SYNC signal ? if (currentPulse > SYNC_SIGNAL_LENGHT) { _rx_ch = 0; } else { // 3. is this pulse VALID ? if (currentPulse < MIN_VALID_PULSE || currentPulse > MAX_VALID_PULSE) { // if the count is already negative -> we know about the problem useless to keep incrementing the number... 
// also increment ONLY if _rc_ch >= 0 means we only increment ONCE per new signal... if(_invalidPulsesCount >= 0 && _rx_ch >= 0) _invalidPulsesCount++; // invalidate this whole signal _rx_ch = -1; } else { // 4. pulse for one of the channels if (_rx_ch >= 0) _currPulses[_rx_ch++] = currentPulse; // 5. Have we read ALL the channels ? This should be the last pulse, don't accept other pulses until we get a new SYNC signal if (_rx_ch >= CHANNELS_COUNT) { // move the fresh data into the array for the CallbackExecutor lock (_readPulses) _readPulses = (long[])_currPulses.Clone(); // end of this transmission, will wait for futur SYNC gap... _rx_ch = -1; _invalidPulsesCount = 0; // only now it's safe to say we got at least one valid signal } } } } private void CallbackExecutor() { sbyte[] values = new sbyte[CHANNELS_COUNT]; bool send = false; while (true) { // do we have a valid signal !? if (_invalidPulsesCount == 0) { send = false; // needs lock as the interrupt thread accesses this too... lock (_readPulses) { // only send IF something has changed if (!areEqual(_readPulses, _sentPulses)) { _sentPulses = (long[])_readPulses.Clone(); send = true; } } // this HAS to be outside the locked block, as we don't know how long it will take... if (send) { _isSignalLost = false; extractValuesFromPulses(_sentPulses, values); _newValuesCallback(values); } } else if (_invalidPulsesCount >= MAX_INVALID_PULSES) { _isSignalLost = true; _signalLostCallback(); // stop calling this method again, _invalidPulsesCount = -1; } Thread.Sleep(10); // take it easy, the pulses are received, time is less of essence now... } } private static bool areEqual(long[] a, long[] b) { if (a == null && b == null) return true; if (a == null || b == null) return false; if (a.Length != b.Length) return false; for (uint i = 0; i < a.Length; i++) if (System.Math.Abs((int)(a[i] - b[i])) > ACCEPTABLE_PULSE_DELTA) return false; return true; } // transforms pulses expresses in ticks into percentages -100 to +100% . 
private static void extractValuesFromPulses(long[] pulses, sbyte[] values) { long currPulse; for (uint i = 0; i < pulses.Length; i++) { currPulse = pulses[i]; if (currPulse < MIN_PULSE) currPulse = MIN_PULSE; if (currPulse > MAX_PULSE) currPulse = MAX_PULSE; values[i] = (sbyte)((currPulse - MID_PULSE) / PULSE_SCALE); } } public bool getIsSignalLost() { return _isSignalLost; } public void Dispose() { _PPMport.Dispose(); } } ____________________________________________________________________________________ Aside from the seven channel connectors, it has one extra “battery” connector which actually has a “diagnostic” signal on its signal pin (I have no idea what kind of diagnostic it is really providing !). So the clean solution is to: - remove a 470 Ohms resistor to simply disconnect the battery connector from the diagnostic signal - wire the signal pin on this connector to the indicated pin of a transistor to get the PPM signal And here’s some Arduino code, loosely based on Jose’s code from here: #define PPM_PIN 8 // Input Capture Pin (PPM Rx signal reading) #define MAX_CHANNELS 7 // Number of channels to read (this is a 7-channel receiver) // We need to know the number of channels to read from radio // If this value is not well adjusted you will have bad radio readings... (you can check the radio at the begining of the setup process) #define SYNC_GAP_LEN 8000 // we assume a space at least 4000us is sync (note clock counts in 0.5 us ticks) #define MIN_IN_PULSE_WIDTH 750 //a valid pulse must be at least 750us #define MAX_IN_PULSE_WIDTH 2250 //a valid pulse must be less than 2250us volatile uint8_t _rx_ch = 0; volatile unsigned int _timeCaptures[ MAX_CHANNELS + 1]; // timer values for pulses width calculation volatile byte _radio_status = 0; // radio_status = 1 => OK, 0 => No Radio signal volatile unsigned int _timer1_last_value; // to store the last timer1 value before a reset.
Needs to be volatile to be accessible from ISR volatile unsigned int _ICR1_old = 0; void setup_radio(){ // PPM signal initialisation pinMode(PPM_PIN, INPUT); // Timer 1 used for PPM input signal //TCCR1A = 0xA0; // Normal operation mode, PWM Operation disabled, clear OC1A/OC1B on Compare Match TCCR1A = 0x00; // COM1A1=0, COM1A0=0 => Disconnect Pin OC1 from Timer/Counter 1 -- PWM11=0,PWM10=0 => PWM TCCR1B = 1 << CS11 | 1 << ICES1; // TCNT1 prescaler/8 (16Mhz => 0.5useg, 8 Mhz => 1useg) | RISING edge TIMSK1 = _BV(ICIE1) | _BV (TOIE1); // Enable interrupts : Timer1 Capture and Timer1 Overflow } // Timer1 Overflow // Detects radio signal lost and generate servo outputs (overflow at 22ms (45Hz)) ISR(TIMER1_OVF_vect){ TCNT1 = 0; _timer1_last_value=0xFFFF; // Last value before overflow... // Radio signal lost... _radio_status = 0; } // Capture RX pulse train using TIMER 1 CAPTURE INTERRUPT // And also send Servo pulses using OCR1A and OCR1B [disabled now] // Servo output is synchronized with input pulse train ISR(TIMER1_CAPT_vect) { if(!bit_is_set(TCCR1B ,ICES1)){ // falling edge? if(_rx_ch == MAX_CHANNELS) { // This should be the last pulse... _timeCaptures[_rx_ch ++] = ICR1; _radio_status = 1; // Rx channels ready... } TCCR1B |= _BV(ICES1); // Next time : RISING edge _timer1_last_value = TCNT1; // Take last value before reset TCNT1 = 0; // Clear counter // Servo Output on OC1A/OC1B... (syncronised with radio pulse train) //TCCR1A = 0xF0; // Set OC1A/OC1B on Compare Match //TCCR1C = 0xC0; // Force Output Compare A/B (Start Servo pulse) //TCCR1C = 0x00; //TCCR1A = 0xA0; // Clear OC1A/OC1B on Compare Match }else { // Rise edge if ((ICR1 - _ICR1_old) >= SYNC_GAP_LEN){ // SYNC pulse? _rx_ch = 1; // Channel = 1 _timeCaptures[0] = ICR1; }else { if(_rx_ch > 0 && _rx_ch < MAX_CHANNELS) _timeCaptures[_rx_ch++] = ICR1; } _ICR1_old = ICR1; } } int getChannelPulseWidth(uint8_t n){ // The pulse width = _timeCaptures[n] - _timeCaptures[n-1], shifted to move to microseconds (resolution is 0.5us) int result = (int)(_timeCaptures[n] - _timeCaptures[n-1]) >> 1; // Out of range?
  if ((result > MIN_IN_PULSE_WIDTH) && (result < MAX_IN_PULSE_WIDTH))
    return result;
  else
    return -1;
}

boolean isPulseDifferent(int pulseA, int pulseB){
  int delta = pulseA - pulseB;
  return delta < -20 || delta > 20;
}

This code does work, however I'm not too pleased with it as:

- it seems far too complicated. I'm not sure we need to care about falling edges! Just count the rising ones, as from the diagram it is the time elapsed between these that determines the pulses! (makes sense, the falling of the signal is there only to allow for a new rise later on… 🙂 )
- it uses Timer1, which is also used by Arduino's Servo library. This forces us to use another timer to control the ESCs, rather than the convenient out-of-the-box Arduino library. I plan on moving this to Timer2 (which is 8 bits as opposed to 16 for Timer1, but the resolution should still be enough!)
- it does some rather "dodgy" double reading of array values to allow concurrency (due to the Interrupt Service Routine that updates this same array)

That's it, let me know if anybody wants more details or has any feedback.

And finally, here's my latest version of the Arduino code to read the PPM:

- it is shorter
- it is clearer (or so I think 🙂 )
- it uses Timer2 rather than Timer1, counting with a resolution of "only" 16us (which is more than enough, as the FM receiver has comparable errors too, and we don't care about 16us when a pulse is between 1000 and 2000)
- it allows me to use the Servo library (based on Timer1) for controlling the ESCs

P.S. here's another page on the mikrokopter.de site, where a German guy briefly explains the same hack

why when i use this code the PWM 3 output no longer works?
Can't really remember, I may have to try at some point for my quadcopter and will let you know, but there's no guarantee…

Dan

thank you for the reply 🙂 ok i'll explain my problem, i'm using a graupner r700 hacked in your way, and i want to use the multiwii software, but when it's all connected and working some RC input signals glitch from a value of 50 to 200 randomly! so i've started to use your code, mixing in the method that you use:

TCCR2A = 0x00;
TCCR2B = 1 << CS22 | 1 << CS21; // prescaler 256 (=> 16usec)
TIMSK2 = 1 << TOIE2; // Timer2 Overflow Interrupt Enable

and it seems that now the RC signal is stable, but the motors on pins 3 and 11 don't synchronize. can you help me? thank you

Rob

another thing, i've tried also to use:

TCCR0A = 0x00;
TCCR0B = 1 << CS22 | 1 << CS21; // (=> 16usec)

because if i've understood it, that uses pins 5 and 6 instead of 3 and 11, but it doesn't work; I don't receive anything from the RC.

Rob

Sorry, I'm not sure I understand your setup or what you want to do, and I'm not familiar with the multiwii.

Dan

I'm trying to implement your PPM decoder above on my own R700. I'm having some trouble getting a reliable signal out though. I opened my R700 and soldered a jumper wire to the transistor that you pointed out in the picture above and plugged it into digital pin 2 on my arduino uno. The output is mostly invalid results "-1" and every now and then I get a seemingly real value, but it doesn't correspond with my Tx. I wrote a short script to just see what the pulses look like and I get mostly random looking pulse widths. Any advice?
Brian

My test code:

Serial.print("Pin was high: ");
Serial.println(pulseIn(2, HIGH));
Serial.print("Pin was low : ");
Serial.println(pulseIn(2, LOW));

Short example of my output:

Pin was low : 1082
Pin was high: 7466
Pin was low : 7381
Pin was high: 368
Pin was low : 23
Pin was high: 402
Pin was low : 52
Pin was high: 25
Pin was low : 7734
Pin was high: 407
Pin was low : 661

Hi Brian,

I'm not sure what to say… Have you also removed the 470 Ohm resistor as per this picture:

Have you actually tried my latest version of the code (the last code snippet in the post, which was actually badly copy/pasted and which I have just fixed), which uses INTERRUPTS (still on pin 2)? Try that and let me know how it goes?

Dan

Hey Dan, Thanks for the quick response. I didn't remove the resistor because I'm trying to avoid permanent damage to the receiver just yet. Instead I just soldered a jumper to the transistor and ran it directly to my arduino pin 2. I did try your code snippet above. I noticed the copy/paste mixup and straightened it out. My code looks the same as how you fixed it above, except I added a few things to make it run as an individual arduino sketch. I added this to view output immediately:

void loop() {
    Serial.print(rxGetChannelPulseWidth(0));
    Serial.print(":");
    Serial.print(rxGetChannelPulseWidth(1));
    Serial.print(":");
    Serial.print(rxGetChannelPulseWidth(2));
    // … you get the idea
    Serial.print(":");
    Serial.println(rxGetChannelPulseWidth(7));
}

I changed your "void setup_radio()" to "void setup()" and added a define for MAX_CHANNELS that it seems you left out. The response I get is a lot of "-1"s because of the out of range check, and every now and then I get a valid number that seems random and for random channels. Thanks for the help

Brian

Hey Dan, It works! I guess it was a problem with the reference voltage in my arduino. I had been powering the arduino from my +5V usb and powering the receiver with the usual 4.8V battery. Thanks for posting this article.
It saved me a lot of poking around my receiver to find the PPM.

Brian

I'm really glad you found the problem, and also that my blog was helpful! I must admit it's frustrating when things "are supposed to work" but they don't… so it's always good to hear that you found the problem! I think your problem was not the +5V, but simply that you did NOT have a COMMON GROUND. 4.8 and 5V are definitely close enough. However, all these voltages are relative to the reference GND, and if that's not common then all bets are off…

dan

Indeed. A few minutes after I posted that I realized that I was using two power sources with no common ground and that was most likely my problem. It's always the simple things that cause the biggest problems.

Brian

Hi, how do you plug the receiver into the FEZ Domino board? Thank you 🙂

Hi, I'm not sure I understand your question… do you want to know how to connect the PPM signal from the Graupner receiver to the FEZ Domino board? You simply use a 3 wire cable (like for any standard servo) and connect the ground, the VCC and the signal wire to any of the Domino's interrupt pins. Hope this helps,

dan

But where exactly on the receiver does the cable attach?

You connect the cable where the power / diagnostic cable was. The only hole that is perpendicular to all the others. Look at the picture in the post (), it shows you how you need to open the receiver, reconnect that pin and remove a resistor.

dan

Hi, I have this : 😦

No luck, then… 🙂 Then try to google the net to see if it's possible to get a PPM signal from that receiver. Some of them do by default, some others can be hacked (like my Graupner one), etc. … This whole post is about how to hack this specific Graupner receiver, and THEN how to read the PPM signal with a FEZ Domino or Arduino.

dan

Here's what I've found after a quick search: They don't apply to your specific receiver, but if you're lucky it might be similar enough…

Dan

thanks

Hi. Great explanation. Thanks a lot. I have a question.
If I want to use it with an arduino at 8MHz, what do I have to change in the timer? I'm not very good with timers.

The CSxx bits in TCCR2B need changing so that the prescaler becomes 128 rather than 256. This way the rest of the code should remain identical. I don't know exactly which bits to set for prescaler 128, but try doing a Google search on "arduino timer 2 prescaler 128"… Or look in the atmega328 doc… Hope this helps,

Dan
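To make the prescaler arithmetic in this reply concrete, here is a small stand-alone C sketch (my own illustration, not code from the post; the function names are made up): the timer tick length is prescaler / F_CPU, so 16 MHz with prescaler 256 and 8 MHz with prescaler 128 both give the same 16 us tick, and a captured tick count converts to a pulse width with a single multiply.

```c
/* Tick length in nanoseconds for a given CPU clock and prescaler.
 * 64-bit math so that 256 * 1e9 does not overflow on 32-bit platforms. */
long long tick_ns(long long f_cpu_hz, long long prescaler)
{
    return prescaler * 1000000000LL / f_cpu_hz;
}

/* Convert a Timer2-style tick count (16 us per tick) to microseconds and
 * validate it against the usual 750..2250 us servo-pulse window; returns
 * -1 when the pulse is out of range, mirroring the sketch in the post. */
int pulse_us_from_ticks(unsigned int ticks)
{
    int us = (int)ticks * 16;
    return (us > 750 && us < 2250) ? us : -1;
}
```

With 16 us ticks, a 1500 us centre pulse is about 94 ticks, so the quantisation error (at most 16 us, around 1%) is indeed negligible, as the post argues.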
https://trandi.wordpress.com/2011/04/12/graupner-r700-ppm-signal/
Examples: Generate 1 million random integers. Report the largest one we see.

import System.Random.Mersenne
import qualified Data.Judy as J
import Control.Monad

main = do
    g  <- getStdGen
    rs <- randoms g
    j  <- J.new :: IO (J.JudyL Int)
    forM_ (take 1000000 rs) $ \n -> J.insert n 1 j
    v  <- J.findMax j
    case v of
        Nothing    -> print "Done."
        Just (k,_) -> print k

Compile it:

$ ghc -O2 --make Test.hs

Running it:

$ time ./Test
18446712059962695226
./Test 0.65s user 0.03s system 99% cpu 0.680 total

Notes:

- By default this library is threadsafe.
- Multiple Haskell threads may operate on the arrays simultaneously. You can compile without locks if you know you're running in a single threaded fashion with: cabal install -funsafe

Sun Sep 27 17:12:24 PDT 2009: The library has only been lightly tested.

Synopsis

- data JudyL a
- type Key = Word
- new :: JE a => IO (JudyL a)
- null :: JudyL a -> IO Bool
- size :: JudyL a -> IO Int
- member :: Key -> JudyL a -> IO Bool
- lookup :: JE a => Key -> JudyL a -> IO (Maybe a)
- insert :: JE a => Key -> a -> JudyL a -> IO ()
- delete :: Key -> JudyL a -> IO ()
- adjust :: JE a => (a -> a) -> Key -> JudyL a -> IO ()
- findMin :: JE a => JudyL a -> IO (Maybe (Key, a))
- findMax :: JE a => JudyL a -> IO (Maybe (Key, a))
- keys :: JudyL a -> IO [Key]
- elems :: JE a => JudyL a -> IO [a]
- class JE a where

Basic types

A JudyL array is a mutable, finite map from Word to Word values. It is threadsafe by default. A value is addressed by a key. The array may be sparse, and the key may be any word-sized value. There are no duplicate keys. Values may be any instance of the JE class.

Instances

Construction

new :: JE a => IO (JudyL a)

Allocate a new empty JudyL array. A finalizer is associated with the JudyL array, that will cause the garbage collector to free it automatically once the last reference has been dropped on the Haskell side.
Note: The Haskell GC will track references to the foreign resource, but the foreign resource won't exert any heap pressure on the GC, meaning that finalizers will be run much later than you expect. An explicit 'performGC' can help with this.

Note that if you store pointers in the Judy array, we have no way of deallocating those -- you'll need to track those yourself (e.g. via StableName or ForeignPtr).

Queries

lookup :: JE a => Key -> JudyL a -> IO (Maybe a)

Lookup a value associated with a key in the JudyL array. Return Nothing if no value is found.

Insertion and removal

insert :: JE a => Key -> a -> JudyL a -> IO ()

Insert a key and value pair into the JudyL array. Any existing key will be overwritten.

adjust :: JE a => (a -> a) -> Key -> JudyL a -> IO ()

Update a value at a specific key with the result of the provided function. When the key is not a member of the map, no change is made.

Min/Max

findMin :: JE a => JudyL a -> IO (Maybe (Key, a))

findMin. Find the minimal key, and its associated value, in the map. Nothing if the map is empty.

findMax :: JE a => JudyL a -> IO (Maybe (Key, a))

findMax. Find the maximal key, and its associated value, in the map. Nothing if the map is empty.

Conversion

Judy-storable types

Class of things that can be stored in the JudyL array. You need to be able to convert the structure to a Word value, or a word-sized pointer.

Note that it is possible to convert any Haskell value into a JE-type, via a StablePtr. This allocates an entry in the runtime's stable pointer table, giving you a pointer that may be passed to C, and that when dereferenced in Haskell will yield the original Haskell value. See the source for an example of this with strict bytestrings.

Methods

toWord :: a -> IO Word

Convert the Haskell value to a word-sized type that may be stored in a JudyL.

fromWord :: Word -> IO a

Reconstruct the Haskell value from the word-sized type.

Instances
https://hackage.haskell.org/package/judy-0.2.2/docs/Data-Judy.html
30 April 2013 23:17 [Source: ICIS news]

MEDELLIN, Colombia (ICIS)--Mexichem's Q1 net income fell by 54% to Mexican pesos (Ps) 836m ($67m, €51m) as a result of higher production costs and a currency exchange loss, the Mexico-based chemical conglomerate said on Tuesday.

Quarterly revenues were up by 14% to Ps15.5bn, driven by higher prices in the chlorine-vinyl chain and the integration of Wavin in the consolidated results. Mexichem acquired the PVC pipe maker in June 2012. Increased revenue was partially offset by reduced sales in the company's chlorine-vinyl and fluorine chains, which were down by 20% and 24%, respectively.

The company's earnings before interest, taxes, depreciation and amortization (EBITDA) stood at Ps2.7bn, down by about 7% from Ps2.9bn in the prior-year quarter. Mexichem is the largest producer of polyvinyl chloride (PVC), vinyl resins and compounds in

The company confirmed the acquisition of the PVC resin operations of US firm PolyOne, adding that the sale was awaiting approval by US regulatory authorities. The company also said that it was awaiting the conclusion of a feasibility study for the construction of an ethane cracker in the

In August, Occidental Chemical (OxyChem) and Mexichem signed a memorandum of understanding to evaluate the creation of a joint venture to build a 500,000 tonne/year ethane cracker. OxyChem would use nearly all of the ethylene as feedstock to produce 1m tonnes of vinyl chloride monomer (VCM) at its complex in

OxyChem would then sell the VCM to Mexichem under a long-term contract. The cracker could begin operations in 2016, Mexichem said.

($1 = €0.76; $1 = Ps12.13)
http://www.icis.com/Articles/2013/04/30/9663987/mexicos-mexichem-q1-income-halved-on-higher-costs-strong.html
neopixel - NeoPixel strip driver

- Author(s): Damien P. George & Scott Shawcroft

class neopixel.NeoPixel(pin, n, *, bpp=3, brightness=1.0, auto_write=True, pixel_order=None)

A sequence of neopixels.

Example for Circuit Playground Express:

import neopixel
from board import *

RED = 0x100000  # (0x10, 0, 0) also works

pixels = neopixel.NeoPixel(NEOPIXEL, 10)
for i in range(len(pixels)):
    pixels[i] = RED

show()

Shows the new colors on the pixels themselves if they haven't already been autowritten. The colors may or may not be showing after this function returns because it may be done asynchronously.
https://circuitpython.readthedocs.io/projects/neopixel/en/latest/api.html
CS 229: Foundations of Computation, Spring 2018

Homework 6: Regular Expressions Lab and Assignment

The class meets on Friday, March 2, in Rosenberg 009 for some work on using regular expressions on the command line. A programming assignment on using regular expressions in Java is due one week after the lab. The assignment consists of two short programs. The Java programs can be submitted to your folder in /classes/cs229/homework by 3:00 PM on the following Friday, March 9. I will print out any .java files in that folder.

There are a number of exercises in the lab that ask you for a command that can be used to accomplish some task. You can copy-and-paste your commands from the Terminal window into a text document. You can hand-write them if you prefer, but please write them up carefully and clearly. You should also answer any other questions that are asked, such as the output of the command. Turn in your hand-written responses or a printout of your file. If you finish all the exercises in class, you can turn in your responses at the end of class. Otherwise, you should turn them in next Friday.

About UNIX Utilities and the Bash Shell

Linux (like Mac OS X) is a "UNIX-like operating system"; effectively, both are, at heart, versions of UNIX. What that means for us here is that Linux comes with a number of standard command line utilities: small programs that can be run on the command line. The program that implements the command line itself is called a "shell" or "command shell." On Mac OS and most versions of Linux, the command shell program is bash. You run bash when you open a terminal window or when you log on remotely using a program such as ssh. Some of the commands that you are used to, such as cd, are built into bash, but many are actually small programs. Bash supports a basic programming language for scripting. The language has variables, assignment statements, if statements, loops, and functions. But here, we are interested in some of the more basic syntax.
First of all, note that many command-line programs are designed for processing text. These programs read from standard input and write to standard output. Typically, when a command is used to operate on a file, standard input comes from the file and standard output goes to the command line. For example, the command

head MyProgram.java

will read the first 10 lines from the file MyProgram.java as input, and it will output those lines to the terminal window where the command was given. However, the shell can use input/output redirection. You can redirect the output from a command to a file by adding > filename to the end of the command. For example, the command

head MyProgram.java > out.txt

copies the first 10 lines of MyProgram.java to a file named out.txt. (Warning: An existing file will be overwritten without warning!) For redirecting input, use < in place of >. But for this lab, we need the fact that you can pipe the output from one program into another program. This means that the standard output from the first program is sent to the standard input of the second program. The symbol for piping is the vertical bar, |. For example,

ls | head

runs the ls command, which lists the contents of a directory, and sends the output from that command into the head program. The head program then displays just the first ten lines of the output from ls.

Another feature of UNIX commands is that they often have many features that can be enabled by adding an "option" to the command. Options have names that begin with "-" or "--". For example, the ls command options include -l for showing extra information for each directory item, -R for listing the contents of directories recursively, and -h, used along with -l, for showing file sizes in more "human-readable" form. Single-letter options that begin with "-" can be combined; for example: ls -lh.

Here, then, are a few of the command-line utilities that can be used in a UNIX-like operating system, along with some of their options.
For most of these commands, standard input can be taken from a file or can be piped from a previous program. If neither a file nor a pipe is provided, they might expect the user to type in the input.

- cat <files> — copies the contents of all the named files, one after the other, to standard output.
- head and tail — display the first or last ten lines of their input. You can specify a different number of lines as an option. For example, head -100 file shows the first 100 lines of the file, and tail -3 shows just the last three lines of whatever input it is given.
- wc — outputs the numbers of lines, words, and characters in the input. To get just the number of lines, use wc -l
- sort — sorts lines of input alphabetically and sends the result to output. The option -u causes duplicate lines to be omitted from the output; the "u" stands for "unique". The option -f causes the sort to be case-insensitive (it "folds" lower case to upper case). The option -n causes sort to do a numeric instead of an alphabetic sort.
- cut — selects just parts of the input line for output, where the parts of the line are divided by a specified delimiter. The default delimiter is tab. To use a space as the delimiter, add the option -d " ". To use a colon as the delimiter, add the option -d ":". To specify which fields you want, use the -f option, which takes a field number or a range of field numbers or a list of field numbers, separated by commas. For example, use -f 1 to print the first field from each line (that is, everything on the line up to the first occurrence of the delimiter). For example, cut -d ":" -f 3 outputs everything between the second and third colon on each line.
- perl -pe 's/regex/replacement/' — applies a "search-and-replace" command to each line of the input, and prints the result. This is discussed in the handout on regular expressions.
- grep and egrep — do regular expression matching. These are the main commands for this lab. More on them below.

Most commands can take multiple file names on a line.
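As a tiny illustration of how these pieces chain together (the data file here is invented for the example, not one of the lab's files):

```shell
# Build a toy colon-delimited file, then extract the third field,
# sort it numerically, and keep only the smallest value.
printf 'alice:x:1004\nbob:x:1001\ncarol:x:1003\n' > toy.txt
cut -d ":" -f 3 toy.txt | sort -n | head -1
```

The pipeline prints 1001: cut keeps only the third field of each line, sort -n orders the numbers, and head -1 discards all but the first.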
Note that file names use "globbing", which means that * and ? are treated as wildcard characters, with ? matching exactly one character and * matching any number of characters. For example, javac *.java will compile all .java files in the current directory. Note that *.java is not a regular expression here; the * does not mean repetition. grep and egrep The grep command takes a regular expression as its argument, and it prints every line from its input that contains a substring that matches the regular expression. grep uses a somewhat more limited syntax for regular expressions than we have studied. For the full set of features, use egrep instead of grep. For this lab, you can use egrep to avoid confusion about what features are and are not supported. When using egrep, enclose the regular expression in single quotes (but the quotes are only really necessary if the expression contains characters that are special in the bash shell). For example, egrep '".*"' MyProgram.java will print every line from MyProgram.java that contains a string literal, using the regular expression ".*" to match the strings. (Grep would also work here, but the single quotes are needed because both " and * are special characters for the bash shell.) And, using the pipe syntax discussed above, egrep '".*"' MyProgram.java | wc -l will just output the number of such lines that were found. The grep and egrep commands have several useful options, including - -i — does a case-insensitive match. - -v — prints out lines that do NOT match the regular expression. - -o — prints out just the part of the line that matches the expression. If there are several matching parts in one line, then they are all output, each one on a separate line. - -r — does recursive searching when applied to a directory; that is, all the files in the directory and in its sub-directories are searched. Part 1: Investigate User accounts. 
(If you are working on your own Mac computer, you will need to ssh to math.hws.edu or to one of our lab computers. If you don't know how to do that, ask. You can only do the lab on a Windows computer if you have an ssh client installed on it.)

On the Linux computers in our labs, the command

ypcat passwd

prints out account information for all the networked user accounts on our system. (Local, non-networked accounts are in the file /etc/passwd.) Try it. If you want to know how many such accounts there are, try the command

ypcat passwd | wc -l

Each output line from ypcat passwd contains seven fields, separated by colons. The first field is the user name. A student user name, in particular, is one that matches the regular expression [a-z]{2}[0-9]{4}.

Exercise 1. Write a command that will output the number of student accounts. You will need three individual commands, separated by two pipes. (Be sure to use egrep, since grep doesn't support the {n} syntax.) How many student accounts are there? (Your response to this and all exercises should include the exact command that you used, plus your answer to any other question that was asked in the exercise.)

Exercise 2. The third field in the output of ypcat passwd is the user account number. Write a single command that will print the smallest student account number in the file, with no other output. (Hint: Use sort -n as one of your commands. And you will need some of the other utilities discussed above.) What's the smallest student account number?

Exercise 3. The fifth field in the output of ypcat passwd is the user's full name. For student accounts, the name should consist of a first name and a last name, separated by a space. Write a command that will print out all student names in a form similar to "Smith, John", including the quotation marks. Note that the order of the names is reversed, to give the last name first, followed by a comma, and then the first name. You can use perl -pe to rearrange the data into the required format.
Exercise 4. Now, write a command that will print out the student user name, followed by a space, followed by the name in the same format as the previous exercise. For example: zz9999 "Smith, John".

Exercise 5. Finally, write a command that will output the same information in the same format as the previous example, but with the names in alphabetical order by last name. This is harder because you will have to put the information in one format for sorting, then rearrange the information into its final format. (Note: I frequently do things like this with files. I might use grep and cut to filter some information from the file, but I am more likely to do the actual transformations in a text editor that supports regular expression find-and-replace.)

Part 2: Investigate a Web Access Log

The 124-megabyte file /classes/cs229/access.log contains the access log for the web server on math.hws.edu for a recent seven-day period. It has one line for each time someone on the Internet sent a request to our web server during that period. In this part of the lab, you will work with that file. Since it is so large, don't copy it! You should cd into the directory /classes/cs229 and work there.

The file has 516883 lines. (You could check this with the command wc -l access.log.) As you develop commands to operate on this file, you will sometimes need to see what the output from a command looks like, but you don't want to see 124 megabytes of output. Piping the output into head is a way to see just the first ten lines of output.
The IP address is everything from the start of the line up to the first space. How many different IP addresses did you find? Exercise 7. Lines that represent requests for specific files will contain a string of the form "GET followed by a space (a double quote, followed by GET, followed by a space). This is followed by the path name of the file. The path name cannot contain a space. Write a command that determines how many requests were made for files beginning with /javanotes. Such a request will start with the string: "GET /javanotes. How many requests did you find? Now, write a command that will find out how many different IP addresses sent requests for such files. How many were there? Exercise 8. One of the fields on each line is the "referrer." If the request was generated by a user clicking on a link, the referrer is the web address of the page that contained that link. If the referrer field starts with " or ", then the request was the result of a Google search. You can assume that any line that contains such a string represents a request generated from a Google search. Write a command to determine the number of such requests. (This is easy.) How many were there? Exercise 9. The web site in the referrer field is terminated by the first slash (/) or a double quote ("). For example: "", ", ". The last two are Google's Indian and Singapore sites. Write a command to determine how many different google sites occur in the referrer field. How many were there? (Note: The -o option for egrep will be useful.) For your own interest, you might be curious about some of the other countries where the Google sites were located. If so, see this list of country codes. Exercise 10. Finally, write a command to determine how many google searches led to a page whose name started with /javanotes. How many were there? Program 1: Use Regular Expressions in Java You will write two short Java programs that use regular expressions. 
The Java support for regular expressions uses the classes Pattern and Matcher, which are in the package java.util.regex. So at the beginning of each program, you should say import java.util.regex.*; The first program is a simple exercise to make sure that you can use the two classes. (You can check the documentation for Pattern and Matcher. In particular, look at the example at the top of the Pattern documentation. Also, see the regular expression handout from class.) Your program should ask the user to type in a regular expression. Read a line of input from the user, and pass it to the Pattern.compile method to create an object of type Pattern. Then ask the user type in lines of text to be matched against the pattern. For each line of text that is input, create a Matcher for the text, using the matcher method in the Pattern object. Call the matcher's matcher.find() method to test whether the string contains a substring matches the regular expression. This returns a boolean value to tell you whether a matching substring was found. Tell the user the result. If the regular expression included parentheses, then matcher.groupCount() is the number of left parentheses, and matcher.group(n) is the substring that matched group n for n between 1 and matcher.groupCount(). Also, matcher.group(0) is in any case the entire matching substring. You should print out the substring for each group. You can end the program when the user inputs an empty string. Here is an example of a session with a sample program: Input a regular expression: ([a-zA-Z]*), ([a-zA-Z]*) Input a string: Doe, John That string matches Group 0 matched Doe, John Group 1 matched Doe Group 2 matched John Input a string: Bond, 007 That string does not match Input a string: Fortunately, the reactor did not explode That string matches Group 0 matched Fortunately, the Group 1 matched Fortunately Group 2 matched the Input a string: Program 2: Extract Information from a Web Page. 
For the second program, you will use Pattern and Matcher in a more practical way. The program will find links on a web page. The program GetPage.java reads lines from a web page and prints them out. You will modify that program so that instead of printing out the lines from the file, it prints out the web address from any links that appear on the page. You will use regular expressions to find the links and extract the addresses. (There is a copy of GetPage.java in /classes/cs229.)

A web page is usually an HTML file, that is, a text file that contains the content of the page as well as special "mark-up" code. In the file, a link can look, for example, like one of these strings:

<a href="">
<A target="mainframe" href='data321.html'>
<a id='link23'
<a href="images/myhouse.png">

The link must start with <a and contain href=. (These are case-insensitive.) There can be spaces around the "=", but not after the "<". The web site is in single or double quotes after href=. For the examples above, the web addresses that your program would extract are

data321.html
file23.html
images/myhouse.png

Your program should use a Pattern that will match such links and will extract the web site as a group. Use a Matcher for each line of input, to test whether it contains a link to a web site and, if so, to extract the web site. Print out all the web sites, one to a line. Note that to make a case-insensitive Pattern, you can use

pattern = Pattern.compile(regex, Pattern.CASE_INSENSITIVE);

The basic operation for this assignment—extract a list of web sites that a web page links to—is an important one, used, for example, by Google for building its map and index of the Web.
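One possible shape for such a pattern (my own sketch; the grouping, alternation, and character classes here are one choice among many, not the official solution):

```java
import java.util.regex.*;

public class LinkFinder {
    // Matches <a ... href="..."> or <a ... href='...'>, case-insensitively.
    // Group 1 holds a double-quoted address, group 2 a single-quoted one.
    static final Pattern LINK = Pattern.compile(
        "<a\\s[^>]*href\\s*=\\s*(?:\"([^\"]*)\"|'([^']*)')",
        Pattern.CASE_INSENSITIVE);

    // Returns the first link address on the line, or null if there is none.
    static String extract(String line) {
        Matcher m = LINK.matcher(line);
        if (!m.find()) return null;
        return m.group(1) != null ? m.group(1) : m.group(2);
    }
}
```

Calling m.find() in a loop instead of just once would report every link on the line, which is what the full program needs.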
http://math.hws.edu/eck/cs229/s18/regex-lab/index.html
-25-2010 05:07 PM

Hi, since I'm on a deadline (university) I would REALLY appreciate some help with reducing my program size and making it faster. I'm doing some complicated matrix computations on a MicroBlaze (XUPV5-LX110T board) like singular value decomposition, etc. on matrices with several hundred float values. It took me quite some time to produce working code in C, so I am under pressure to get it working on a MicroBlaze (the next guy will then build some accelerators in VHDL) to speed things up. The program looks somewhat like this:

#include "xparameters.h"
#include "stdlib.h"
#include "stdio.h"
#include "xutil.h"

extern void mbMatMul(float* Mat1, float* Mat2, float* ResMat, int m1, int n1, int n2);
extern void mbprintfloatMat(float* Mat, int mRow, int nCol, char titel[32], int target, int limit);

void mbprintfloatMat(float* Mat, int mRow, int nCol, char titel[32], int target, int limit)
{
    target = 0;
    int Rowlim = mRow;
    int Collim = nCol;
    if(limit != 0) {
        if(limit < mRow) Rowlim = limit;
        if(limit < nCol) Collim = limit;
    } // else no limit
    xil_printf("\r\nPrint: %s\r\n", titel);
    int i, j, prepoint, postpoint;
    float floatvalue;
    for (int i = 0; i < Rowlim; i++) {
        for (int j = 0; j < Collim; j++) {
            floatvalue = Mat[i*nCol + j];
            prepoint = floatvalue;
            postpoint = (floatvalue - prepoint)*1000000;
            if(prepoint >= 0) xil_printf(" ");
            if(postpoint != 0) {
                xil_printf("%d.%6d ", prepoint, postpoint);
            } else {
                xil_printf("%d.000000 ", prepoint, postpoint);
            }
        }
        xil_printf("\r\n");
    }
}

void mbMatMul(float* Mat1, float* Mat2, float* ResMat, int m1, int n1, int n2)
{
    int m2 = n1;
    float* temp_Mat1 = (float*)malloc(m1 * n1 * sizeof(float));
    float* temp_Mat2 = (float*)malloc(m2 * n2 * sizeof(float));
    float akku;
    int i, j;
    for(i = 0; i < m1*n1; i++) temp_Mat1[i] = Mat1[i];
    for(j = 0; j < m2*n2; j++) temp_Mat2[j] = Mat2[j];
    int n, m, index;
    for(n = 0; n < n2; n++) {
        for(m = 0; m < m1; m++) {
            akku = 0.0f;
            for(index = 0; index < n1; index++) {
                akku = akku + temp_Mat1[m*n1 +
index]*temp_Mat2[index*n2 + n]; } ResMat[m*n2 + n] = akku; } } free(temp_Mat1); free(temp_Mat2); } int main (void) { print("-- Entering Main --\r\n"); int m1 = 12; int n1 = 3; int m2 = n1; int n2 = 4; float *Mat1 = (float*)malloc(m1 * n1 * sizeof(float)); if(Mat1) print("Mat1 created\n\r"); float *Mat2 = (float*)malloc(m2 * n2 * sizeof(float)); if(Mat2) print("Mat2 created\n\r"); int i; for(i = 0; i < m1*n1; i++) { Mat1[i] = i / 5.0f; } for(i = 0; i < m1*n1; i++) { Mat2[i] = i * 1.4f; } mbprintfloatMat(Mat1, m1, n1, "Mat1", 0, 0); mbprintfloatMat(Mat2, m2, n2, "Mat2", 0, 0); float *ResMat = (float*)malloc(m1 * n2 * sizeof(float)); if(ResMat) print("ResMat created\n\r"); mbMatMul(Mat1, Mat2, ResMat, m1, n1, n2); mbprintfloatMat(ResMat, m1, n2, "ResMat", 0, 0); print("-- Leaving Main --\r\n"); free(Mat1); free(Mat2); free(ResMat); return 0; } There is a lot more where this came from :-) My memory usage looks like this: text data bss dec hex filename 49260 1340 12388 62988 f60c TestApp_Memory/executable.elf That is with Compiler Optimisation for size. There is almost no space left in my 64K of BRAM, but I need a lot more stack and heap to perform the matrix calculations (about 7K stack and 27K heap). I suspect the text memory usage is huge because I do everything in C instead of using more of the microblaze stuff. I need urgent help getting my program to run on this microblaze. Some possibly viable options: 1) Increasing BRAM. I saw sth about adding an extra controller and BRAM. How to? Possible on this board? 2) Reducing text - size. Examples please! 3) Also you can tell me everything else I could do better. 4) In my real code I use math.h functions fabs, powf, logf and sqrtf. I know thats bad and maybe I will use e.g. lookup-tables instead. Could this make a crucial difference regarding text - size? Please someone reply and help me with some advice. Thank you!!!? 
04-25-2010 08:03 PM - edited 04-25-2010 08:04 PM

Ok, thanks to the "HOWTO increase BRAM" thread my program is now running :-) But even 64K of heap seems to be too little to run at full specs, although my raw data should never be more than 30K. And I can run my program only once. Strange! Is it possible that free() doesn't work properly? Do I have to set back the heap pointer in a different way? And yes, help regarding how to improve my code is still very welcome!

04-25-2010 11:32 PM

Just a few quickies: instead of writing something like

    for i
        for j
            res = input[i*c + j]

write

    tmp = 0;
    for i
        tmp2 = tmp;
        for j
            res = input[tmp2]
            tmp2++;
        tmp += c;

printf and float are both very good at increasing code size. Maybe you can dump the results as binary or hex of the raw memory contents? Assuming you have the print attached to a UART or something, use code on the host PC to visualize the results.

Cheers, Johan

04-26-2010 01:46 AM

The quickest way is to add more BRAM (of course). I assign code and data to two different BRAM blocks. Just add another BRAM block to your hardware design from EDK, and click "Generate custom linker script" from SDK; then you can assign all code to one BRAM block and all data to another.

Have you enabled the hardware floating-point unit on the MicroBlaze? That will both make your code faster and save code space, since the floating-point library is needed much less. AFAIR, if you set it to extended, it also includes fsqrt and such.

You could try the compiler option -ffast-math. It relaxes the IEEE floating-point rules a bit, but makes the code faster and smaller.

Depending on the amount of hardware you use, you could rewrite some of the Xilinx drivers to only use the features you need. The Xilinx drivers were too bloated for my application, so I rewrote them. Last, you could recompile the standard C libraries (newlib) that Xilinx uses, but only if you really need the space.

04-26-2010 07:21 AM

Thanks a lot for the help woutersj and nfogh!
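Johan's indexing suggestion written out in C might look like the sketch below (a hedged illustration, not code from the thread; the function name and the scaling operation are made up). The per-element multiply i*cols + j is replaced by a running index that is only incremented, plus a base offset bumped by cols once per row:

```c
/* Walk a rows x cols matrix without computing i*cols + j on every access:
 * 'base' advances by cols once per row, 'idx' just increments inside the row. */
void scale_matrix(const float *in, float *out, int rows, int cols, float s)
{
    int base = 0;
    for (int i = 0; i < rows; i++) {
        int idx = base;
        for (int j = 0; j < cols; j++) {
            out[idx] = in[idx] * s;
            idx++;
        }
        base += cols;
    }
}
```

On a core without a hardware multiplier configured, removing a multiply from the inner loop like this can matter; with one, the compiler may already perform this strength reduction at -O2/-Os.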
- I tried different compiler optimisations and I get the smallest code size for -Os (no surprise).
- I copy the matrix at the beginning of mbMatMul because I actually have cases like mbMatMul(Mat1, Mat2, Mat1, 3, 3, 3); but I can of course create a temporary matrix outside the function for these cases.
- Thanks for the example. Keeping matrix index calculations outside the loop could speed things up :-) I will do that.
- The xil_printf() calls are only for debugging. The result of the calculations will be written to DDR.
- I attached another BRAM block and added 2 controllers. I increased ilmb and dlmb to the full range. Would a different configuration (e.g. only one extra controller, for data) make more sense speed-wise?
- I enabled the FPU, but I have to check if the sqrt engine is also on. Do you know how many resources the FPU takes, regarding BRAM and multipliers, with and without the sqrt extensions?
- I will try the compiler option -ffast-math and see if precision stays acceptable.
- Rewriting the libraries sounds like a good idea, but probably takes time that I don't have!

My biggest problem right now is the huge memory usage of the system. I calculated the memory needs by hand: e.g. for a 300 x 9 x sizeof(float) matrix I calculated 10.8 KByte, and with the current specs I should have less than 20 KByte of heap usage. But in reality it is close to 60 KB, and if I try to run more than once (putting the function call in a for loop) the system crashes. I strongly suspect I'm doing something wrong when using free(). Or do you think that shouldn't be a problem? If I don't find a solution soon I will resort to using arrays of predefined sizes and see how far I can go with that. I'm also looking for a way to approximate log x / log y.

Thank you, Chris

04-26-2010 07:47 AM

I'm not sure about what your application is, but if you are afraid of malloc and free, you could just declare the variables in the code, like

    float Mat1[m1][n1];
    float Mat2[m2][n2];

    void main (void)
    {
        ...

and the same for the function:

    float temp_mat[max_entries];

    void mbMatMul(...

Just ensure that temp_mat is big enough to hold what you throw at the function. It's not pretty, but it works :)

The reason I declared them outside the function itself is that inside the function they would be allocated on the stack, and for large max_entries that will probably cause stack overflow problems.

Many of the libraries are pretty easy to rewrite. But if you don't use many peripherals, it is probably not worth the effort. I have some code for uart16550, mbox, spi and timers if you are using any of them.

I have a version of the GNU toolchain which has been compiled with newlib at the '-Os' optimization setting. It scraped a few KBs off my code size. It is for Ubuntu Linux though, and you cannot debug with it at all (gdb doesn't work).

I think the best solution would be to make a separate BRAM block for your heap, and then generate a linker script to put your heap into that BRAM block. If you have enough memory for it, it is by far the easiest solution.

04-26-2010 10:18 AM

Of course my memory usage will be even larger if I go for arrays of pre-defined sizes. My real concern is that I cannot call my main function twice, because it runs out of memory the second time. Since all the malloc()s happen inside the main function, the reason could be either leaks (I checked for those), or that free is not working properly, or some other problem I haven't thought of yet. If someone could tell me: "I use malloc on MicroBlaze the same way you do:

    float* Mat = (float*)malloc(m * n * sizeof(float));
    //do sth
    free(Mat);

and it works for me!" then I could exclude one possible reason for running out of memory.

04-26-2010 02:16 PM

Did you have a look at the linker scripts? Maybe the heap/stack settings are incorrect. I guess you can be pretty sure that malloc/free work if you have set them up correctly.
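One quick way to separate "free() is broken" from "heap is undersized or fragmented" is a soak test that allocates and frees in a loop and reports the first failing iteration. A minimal sketch (heap_soak is a made-up name; on the board the return value would go to xil_printf rather than a host assertion):

```c
#include <stdlib.h>

/* Allocate and free 'bytes' repeatedly. If free() really returns memory to
 * the allocator, this succeeds for any number of iterations; if memory is
 * being lost, malloc() starts returning NULL at some iteration. */
int heap_soak(int iterations, size_t bytes)
{
    for (int k = 0; k < iterations; k++) {
        float *p = (float *)malloc(bytes);
        if (p == NULL)
            return k;       /* heap exhausted on iteration k */
        p[0] = 1.0f;        /* touch the block so it isn't optimized away */
        free(p);
    }
    return -1;              /* survived every iteration */
}
```

If this passes on the target but the real program still grows, the leak is in the program's allocation pattern (e.g. an early return skipping a free) rather than in the allocator.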
Cheers, Johan

04-26-2010 02:18 PM

Forgot to mention: use objdump to inspect your object for large data objects and big functions.

04-27-2010 04:06 AM

I don't even get that far! I rewrote the whole program using non-dynamic fixed-size arrays and now I get:

    region ilmb_cntlr_dlmb_cntlr is full (TestApp_Memory/executable.elf section .stack)

So I reduced the size of my arrays like this:

    #define MAXPOINTS 100

    int main()
    {
        float array[4*MAXPOINTS]; //size 4*100*4 Byte = 1600 Bytes
        //do sth
        return 0;
    }

Calculated like this, I should be using about 20 KBytes of memory now. I have increased the stack size to 64 KBytes:

    _STACK_SIZE = DEFINED(_STACK_SIZE) ? _STACK_SIZE : 0xFFFF;
    _HEAP_SIZE = DEFINED(_HEAP_SIZE) ? _HEAP_SIZE : 0x1000;

    /* Define Memories in the system */
    MEMORY
    {
        ilmb_cntlr_dlmb_cntlr : ORIGIN = 0x00000050, LENGTH = 0x0001FFB0
    }

but it still refuses to build my .elf:

    tools/xilinx/ISE_EDK/10.1/EDK/gnu/microblaze/lin64/bin/../lib/gcc/microblaze-xilinx-elf/4.1.1/../../../../microblaze-xilinx-elf/bin/ld.real: region ilmb_cntlr_dlmb_cntlr is full (TestApp_Memory/executable.elf section .stack)

What am I doing wrong?

04-27-2010 04:20 AM

EDK is complaining that it doesn't have enough BRAM to allocate the stack. You use 64KB of BRAM for your stack, which is a whole block. If you decrease your stack size, you might make it fit. Could you try the following:

1. Attach 2 BRAM blocks to your MicroBlaze, 64KB each
2. Go to SDK, synchronize with hardware and click "Generate linker script"
3. Under "assign all code sections to", select the first BRAM block
4. Under "assign all data sections to", select the second BRAM block
5. Set the stack size and heap size to a suitable amount (enough to hold your data)

This should give you 64K for just data, and 64K for code (.text). Ensure that you have enough heap/stack space for your variables if you allocate them dynamically.
05-01-2010 11:01 PM - edited 05-01-2010 11:03 PM

After spending way too much time trying to figure out all the steps needed to add an xps_timer for profiling, here is a mini-tutorial (I'm using EDK 10.1.03). I don't know if it is all necessary, but it works:

- Add the xps_timer by double-clicking XPS Timer/Counter under IP Catalog > DMA and Timer
- System Assembly View > Bus Interfaces > xps_timer_0 > SPLB = mb_plb
- System Assembly View > Addresses > xps_timer_0 > Size 64KB mb_plb Size U
- System Assembly View > Addresses > lock all the addresses that must not change
- Back up the linker script
- System Assembly View > Addresses > Generate Addresses
- Generate Linker Script, assign memory blocks; if necessary adapt the linker file manually, see backup
- Software Platform Settings > OS and Libs > enable_software_intrusive_profiling = true + profile_timer = xps_timer_0
- Software Platform Settings > Drivers > xps_timer = tmrctr
- Project > double-click the MHS file, add the missing parameters to xps_timer (Interrupt, C_COUNT_WIDTH, etc.)

Make it look like this:

    BEGIN xps_timer
     PARAMETER INSTANCE = xps_timer_0
     PARAMETER C_FAMILY = virtex5
     PARAMETER C_COUNT_WIDTH = 32
     PARAMETER HW_VER = 1.00.a
     PARAMETER C_BASEADDR = 0x83c00000
     PARAMETER C_HIGHADDR = 0x83c0ffff
     BUS_INTERFACE SPLB = mb_plb
     PORT Interrupt = xps_timer_0_Interrupt
    END

Add the interrupt to the MicroBlaze:

    BEGIN microblaze
     PARAMETER INSTANCE = microblaze_0
     PARAMETER HW_VER = 7.10.d
     PARAMETER C_USE_FPU = 2
     PARAMETER C_DEBUG_ENABLED = 1
     PARAMETER C_FAMILY = virtex5
     PARAMETER C_INSTANCE = microblaze_0
     BUS_INTERFACE DPLB = mb_plb
     BUS_INTERFACE IPLB = mb_plb
     BUS_INTERFACE DEBUG = microblaze_0_dbg
     BUS_INTERFACE DLMB = dlmb
     BUS_INTERFACE ILMB = ilmb
     PORT MB_RESET = mb_reset
     PORT INTERRUPT = xps_timer_0_Interrupt
    END

- Applications > Set Compiler Options > Paths and Options > Other Compiler Options to Append: add -pg
- Add to main.c:

    #include "xtmrctr_l.h"

- Add to the function you would like to profile:

    XTmrCtr_mEnable(XPAR_TMRCTR_0_BASEADDR, 1);
    unsigned long tic1, tic2, dur;
    //Measure
    tic1 = XTmrCtr_mGetTimerCounterReg(XPAR_TMRCTR_0_BASEADDR, 1);
    // do sth
    tic2 = XTmrCtr_mGetTimerCounterReg(XPAR_TMRCTR_0_BASEADDR, 1);
    dur = (tic1 - tic2) * 1000 / XPAR_CPU_CORE_CLOCK_FREQ_HZ; //result in ms

- Synthesize (BRAM INIT)

Actually I don't know if you have to divide by XPAR_CPU_CORE_CLOCK_FREQ_HZ or XPAR_MICROBLAZE_CORE_CLOCK_FREQ_HZ. Maybe one of the gurus can explain which one is applicable.
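One caveat with the dur computation above: multiplying a 32-bit tick count by 1000 before dividing can overflow for longer intervals, and the subtraction direction matters if the counter wraps. A wrap-safe sketch (ticks_to_ms is an illustrative helper, not part of the Xilinx API; it assumes a free-running 32-bit up-counter, so the delta is end minus start):

```c
#include <stdint.h>

/* Convert a delta between two readings of a free-running 32-bit up-counter
 * to milliseconds. Unsigned subtraction gives the correct delta across a
 * single counter wrap; dividing the clock by 1000 first keeps the result
 * inside 32 bits instead of multiplying the ticks by 1000. */
uint32_t ticks_to_ms(uint32_t start, uint32_t end, uint32_t clk_hz)
{
    uint32_t ticks = end - start;   /* correct modulo 2^32 across one wrap */
    return ticks / (clk_hz / 1000u);
}
```

The trade-off of dividing first is a granularity of one clk_hz/1000 tick, which at 100 MHz is still 0.001 ms.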
https://forums.xilinx.com/t5/Embedded-Processor-System-Design/Please-help-me-reduce-my-program-size-so-I-can-get-more-bss/m-p/68239
Error #2044 & Error #2048
Iteryx, Jul 7, 2009 11:46 AM

Hello, I am running the Flex 4 SDK and a beta of Flash Builder 4 on an Eclipse IDE. I have two files: one is a small Flex program that establishes an XML socket on the localhost, the other is a Java program that writes a simple string to the port that the Flex program listens to. The purpose of this program is to get used to TCP with Flex. Both programs run inside of the Eclipse IDE. However, when I run the program, I get the following error message:

    Error #2044: Unhandled securityError:. text=Error #2048: Security sandbox violation

I have searched quite a bit for a solution to this problem to no avail. Any help would be greatly appreciated. Thank you.

1. Re: Error #2044 & Error #2048
Flex harUI, Jul 7, 2009 1:31 PM (in response to Iteryx; 1 person found this helpful)

There are "Security WhitePapers" on the Adobe site. They explain the rules for security.

Alex Harui
Flex SDK Developer
Adobe Systems Inc.

2. Re: Error #2044 & Error #2048
Iteryx, Jul 8, 2009 6:07 AM (in response to Flex harUI)

Thank you Alex for your reply. I have checked into the security whitepapers I could find, and they did have some thoughts that I had not tried. However, it's still not working. I have put cross-domain policy files in the bin directories of both programs in Eclipse that I am trying to make talk to each other. I have set my compiler option -use-network=false because they are on the same local machine. I will state again that both programs are running from the same Eclipse IDE window (I start one, then start the other). I am trying to use an XMLSocket for the Flex side of things, and the other is a simple Java program that puts a string in the socket the XMLSocket is bound to.

Questions:
- Do you, Alex, or anyone else, have a specific list of security whitepapers (or other resources) I should look at?
- Does anyone know if there is a setting in Eclipse to point to the cross-domain policy file for Flash Builder 4?
- Is the XMLSocket class capable of doing this type of connection in the first place? Should I just use Socket?

Again, any assistance would be most appreciated.

3. Re: Error #2044 & Error #2048
CoreyRLucier, Jul 8, 2009 8:32 AM (in response to Iteryx)

For what it's worth, I just put together something simple and it seems to work. Both are run from a common folder. Note that (AFAIK) you can't create a socket server in Flash (only clients), so the Java socket server needs to be started first.

Keep in mind that things won't work out of the box (due to security issues); an easy way to get things up and running (sans crossdomain file returned from your socket server) is to ensure your app is running in the local-trusted sandbox. To do this you need to add the path (where your Client.swf resides) to a text file living in either your global "FlashPlayerTrust" folder or your user-level FlashPlayerTrust folder. For example, on Mac I've added a file called 'trust' containing one line:

...

to my "global" FlashPlayerTrust folder located at /Library/Application Support/Macromedia/FlashPlayerTrust.

Pardon the verbose post; attachments don't work too well for me lately, so I've included the source inline...

SimpleServer.java:

    import java.io.*;
    import java.net.*;

    public class SimpleServer {
        public static void main(String args[]) {
            // Message terminator
            char EOF = (char)0x00;
            try {
                // create a serverSocket connection on port 9999
                ServerSocket s = new ServerSocket(9999);
                System.out.println("Server started. Waiting for connections...");
                // wait for incoming connections
                Socket incoming = s.accept();
                BufferedReader data_in = new BufferedReader(
                    new InputStreamReader(incoming.getInputStream()));
                PrintWriter data_out = new PrintWriter(incoming.getOutputStream());
                data_out.println("Welcome! type EXIT to quit." + EOF);
                data_out.flush();
                boolean quit = false;
                // Waits for the EXIT command
                while (!quit) {
                    String msg = data_in.readLine();
                    if (msg == null) {
                        quit = true;        // client disconnected
                    } else if (!msg.trim().equals("EXIT")) {
                        data_out.println("You sayed: <b>" + msg.trim() + "</b>" + EOF);
                        data_out.flush();
                    } else {
                        quit = true;
                    }
                }
            } catch (Exception e) {
                System.out.println("Connection lost");
            }
        }
    }

Client.mxml (Flex 4):

    <?xml version="1.0" encoding="utf-8"?>
    <Application xmlns:fx="" xmlns="library://ns.adobe.com/flex/spark">
        <!-- (additional xmlns declarations were stripped when this post was archived) -->
        <!-- Compiled FXG placed on the left -->
        <layout>
            <VerticalLayout/>
        </layout>
        <fx:Script>
            <![CDATA[
            import flash.net.XMLSocket;
            import flash.events.*;

            private var hostName:String = "localhost";
            private var port:uint = 9999;
            private var socket:XMLSocket;

            public function connect():void {
                socket = new XMLSocket();
                configureListeners(socket);
                // (the rest of connect() and the configureListeners() body
                //  were lost when this post was archived)
            }

            private function closeHandler(event:Event):void {
                ta.text += String("closeHandler: " + event);
            }

            private function connectHandler(event:Event):void {
                ta.text += String("connectHandler: " + event);
            }

            private function dataHandler(event:DataEvent):void {
                ta.text += String("dataHandler: " + event);
            }

            private function ioErrorHandler(event:IOErrorEvent):void {
                ta.text += String("ioErrorHandler: " + event);
            }

            private function progressHandler(event:ProgressEvent):void {
                ta.text += String("progressHandler loaded:" + event.bytesLoaded
                    + " total: " + event.bytesTotal);
            }

            private function securityErrorHandler(event:SecurityErrorEvent):void {
                ta.text += String("securityErrorHandler: " + event);
            }
            ]]>
        </fx:Script>
        <Button label="connect" click="connect()"/>
        <TextArea id="ta" width="300" height="200"/>
    </Application>

4. Re: Error #2044 & Error #2048
CoreyRLucier, Jul 8, 2009 8:48 AM (in response to CoreyRLucier)

This post was also helpful and worked for me... it shows how you can easily set up your socket server to also serve up the cross-domain policy file. Note the call to System.security.loadPolicyFile prior to making the client connection.

Regards, C

5.
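For reference, the "serve the policy from the same socket server" approach Corey mentions comes down to answering the player's <policy-file-request/> probe before any normal traffic. The sketch below is my own illustration (the class and method names are made up, not from the linked post); the essential detail it relies on is that XMLSocket frames every message, in both directions, with a terminating NUL byte:

```java
import java.io.*;
import java.net.*;

public class PolicyAwareServer {
    static final String POLICY =
          "<?xml version=\"1.0\"?>"
        + "<cross-domain-policy>"
        + "<allow-access-from domain=\"*\" to-ports=\"9999\"/>"
        + "</cross-domain-policy>\0";

    // XMLSocket messages are NUL-terminated, so read up to the NUL byte.
    static String readNullTerminated(Reader in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = in.read()) > 0) {
            sb.append((char) c);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(9999);
        while (true) {
            try (Socket client = server.accept()) {
                Reader in = new InputStreamReader(client.getInputStream());
                Writer out = new OutputStreamWriter(client.getOutputStream());
                String request = readNullTerminated(in);
                if ("<policy-file-request/>".equals(request)) {
                    out.write(POLICY);  // player closes and reconnects afterwards
                } else {
                    out.write("Echo: " + request + "\0");
                }
                out.flush();
            }
        }
    }
}
```

On the client side this pairs with a Security.loadPolicyFile call pointing at the same host and port before socket.connect.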
Re: Error #2044 & Error #2048
Iteryx, Jul 8, 2009 11:39 AM (in response to CoreyRLucier)

Corey, thank you for your replies. I tried doing what you suggested, creating a text file with the path to my .swf location. In Vista, the FlashPlayerTrust folder is not where it should be, inside of C:/Windows/System32/Macromed/Flash. I created a FlashPlayerTrust folder and put a text document in it. It didn't work. I also dug around a bit on the internet and tried editing my global security settings by right-clicking on my Flash application inside of my web browser, and that didn't work either.

I also looked at your code samples, and they look about the same as what I am trying to do, so it was good to see you solved the same problem I have. Unfortunately, I cannot get it to work at all. It's the same error every time. I have looked at your link to blog.pettomato.com and I'm still trying to figure out how to do that inside of Java. Little rusty on ServerSockets. However, inside of the ActionScript, the "System.security.loadPolicyFile" method gave me a compiler error, so I couldn't use it or I am using it wrong.

Questions:
- The compiler does let me use the "Security.loadPolicyFile" method; would that do the same thing as "System.security.loadPolicyFile"?
- Do you, or anyone else, know about any settings I need to change in Eclipse to make this work?

Any help would be appreciated. Thanks.

6. Re: Error #2044 & Error #2048
CoreyRLucier, Jul 8, 2009 11:46 AM (in response to Iteryx)

    import flash.system.Security;
    Security.loadPolicyFile(...)

should work fine. Sorry to hear you are on Vista... will try to dig up where the global FlashPlayerTrust folder is.

-C

7. Re: Error #2044 & Error #2048
Iteryx, Jul 9, 2009 10:37 AM (in response to CoreyRLucier)

Thank you Corey. Your help is very appreciated. I will continue to try and find a solution to this problem as well.

8. Re: Error #2044 & Error #2048
CoreyRLucier, Jul 9, 2009 10:51 AM (in response to Iteryx)

This may help: scroll down to the FlashPlayerTrust section; they mention a Vista-specific issue with creation of the folder and file... perhaps that's what you ran into. In any event, the trust file they set up in those steps is all-inclusive (of C:\)... curious if that liberal policy helps your case.

-C

9. Re: Error #2044 & Error #2048
Iteryx, Jul 29, 2009 7:08 AM (in response to CoreyRLucier)

Little update: our company has purchased Flex 3 licenses, so that is what I am working with now. However, the same problem still exists there as well. It all has to do with the security sandbox, crossdomain.xml, and other security issues. I have posted a new post over in the Flex 3 forums, to little avail. I am going to be updating it with what I have been doing to try and figure this stuff out.

Link to the Flex 3 forum post:

I will be reviewing that prior post as well, Corey. Thank you.

EDIT: Just finished trying out the "Configure Flash Player" section in the link from your last post, Corey. It didn't work either, but thanks for the suggestion anyway.
https://forums.adobe.com/thread/458806?start=0&tstart=0
Everybody has used the GNU or UNIX cat program on the command line. It is used to concatenate files and dump them to the standard output, which can also simply be redirected to another file. Long ago I started to write my own version of the cat program. I have implemented each and every function which cat supports, and also made it look identical, except for some messages. It is not a cat clone, though, and has no connection with the source code of GNU cat: this code was made by inspecting the output behaviour of GNU cat. It is named wcat.

I started this because it was the most simple code to write, and it was intended for the Whitix OS project run by Mathew. This is a very good OS development project for beginners to start with. I could not at present actively participate in this project because of time limitations at my end.

The preliminary code was made very fast, but it took time to make it perfect and replicate the behaviour of GNU cat. The most interesting part was implementing the different options: line numbering for the -n and -b options, printing the special characters with the -v, -I options, etc. The line number generation was implemented with the method described in the r-Permutations With Repetitions post. The line number counter is 20 digits long and counts in decimal, so there is no worry of overflowing line numbers. You will notice the line_number array is initialized with blank spaces, and in some locations it is initialized with '\r', '\t', and '0'. This was done to keep the line_number array pre-formatted, so that it can be written directly into the output buffer without any further processing.

Have a look at the code; I have tried to keep it as clean as possible. After a good amount of testing I found no bugs, and finally made a final release, which I am presenting here. I will try to keep this code updated on this page (if it undergoes any changes).

Sourcecode

The sourcecode is presented below.
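To make the pre-formatted counter idea concrete before the full listing, here is a minimal standalone sketch (shortened to 6 digits; the names are mine, not the ones used in wcat). The count lives in the buffer as ready-to-print characters, so emitting a line number is just a straight copy into the output stream, with no sprintf work per line:

```c
#define DIGITS 6

/* Blank-padded character counter, starting at 0. Incrementing mutates the
 * digits in place; a ' ' column turning into '1' is how the count grows
 * into a new digit position. */
static char counter[DIGITS + 1] = "     0";

static void next_number(void)
{
    for (int i = DIGITS - 1; i >= 0; i--) {
        if (counter[i] == '9') {
            counter[i] = '0';   /* carry into the next column */
            continue;
        }
        counter[i] = (counter[i] == ' ') ? '1' : counter[i] + 1;
        return;
    }
    /* all DIGITS columns were '9': counter has saturated */
}
```

wcat's real array adds '\r' and '\t' bytes so the copied region also carries its own formatting.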
Find a link at the bottom of the page to download a zipped file of the code.

/*
 * Program  : wcat
 * Version  : 1.0
 * Revision : 1
 * Status   : Stable
 */

/*
 * Version Update 1.0:
 * Now accepts input from stdin.
 */

/* Version Update 0.6:
 * change in 'wcat()' function parameter
 * using bit fields for flags
 * comments added
 * Decimal counter extended from 18 digits to 20 digits
 */

/* Features:
 * -E   Show end of line with '$'
 * -T   Show tab character as ^I
 * -n   Number all lines
 * -b   Number only nonempty lines
 * -v   Show non-printing characters with M- or ^ prefix
 * -s   Squeeze consecutive empty lines into one
 * -e   Same as -vE
 * -t   Same as -vT
 * -A   Same as -vET
 * -h   Help
 */

/* TODO:
 * Primary:
 *   Testing [DONE] and feedback
 *   accept input from stdin [DONE (v1.0)]
 *   cleanup code, and make the main output loop better
 * Secondary:
 *   write long options
 *   brief comment the option flags and the operation location
 */

/*
 * Author: Arjun Pakrashi (phoxis)
 */

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <ctype.h>
#include <error.h>
#include <errno.h>

#define VERSION "1.0"
#define REVISION "1"
#define STATUS "Stable"

#define TRUE 1
#define FALSE 0

#ifndef NUL
#define NUL '\0'
#endif

#define NEW_LINE '\n'
#define TAB '\t'

#define SUCCESS 1
#define FAIL -1
#define READ_ERROR 2
#define WRITE_ERROR 3

#define STDOUT_FILE 1
#define STDIN_FILE 0

#define BLK_SIZE BUFSIZ
#define LSB 20                  /* pre-calculated value of (22 - 2) */
#define ARRAY_LENGTH 22

typedef struct flag_bits
{
  char showend:1;
  char showtab:1;
  char linenum_all:1;
  char linenum_nonempty:1;
  char sqeeze_bl:1;
  char showspchar:1;
  char help:1;
} option_flags;

int wcat (const char *file, option_flags flag);
void print_help (void);
void generate_line_number (void);

/* line_number is used to store decimal counts. The one-but-last location is
 * initialized with '0' to start the count with 0. Others are initialized with
 * blank spaces and carriage returns, to suppress printing of unused digits and
 * give equal formatting to each count. As the count grows and spans multiple
 * digits, the lower positions are used. This array is updated by
 * 'generate_line_number ()'; each call of that function generates the next
 * count. 'line_number' is consumed in 'wcat ()'.
 */
/* array length is 22: 20 valid decimal digits, last two positions for '0' and '\t' */
char line_number[ARRAY_LENGTH] =
  { ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', '\r',
    ' ', ' ', ' ', '\r', ' ', ' ', ' ', ' ', ' ', '0', '\t' };

/* Function Name : main
 * Parameters    :
 *  @ (int) argc
 *  @ (char *) argv
 * Return Value  : (int)
 * Globals       : None
 * Description   : Parses the command line options and calls 'wcat()'
 */
int
main (int argc, char *argv[])
{
  int current_file;
  int status;
  int opt;
  option_flags flag = { FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE };

  while ((opt = getopt (argc, argv, "ETnbsveAh")) != -1)
    {
      switch (opt)
        {
        case 'E':
          flag.showend = TRUE;
          break;
        case 'T':
          flag.showtab = TRUE;
          break;
        case 'n':
          flag.linenum_all = TRUE;
          break;
        case 'b':
          flag.linenum_nonempty = TRUE;
          break;
        case 's':
          flag.sqeeze_bl = TRUE;
          break;
        case 'v':
          flag.showspchar = TRUE;
          break;
        case 'e':
          flag.showspchar = TRUE;
          flag.showend = TRUE;
          break;
        case 't':
          flag.showspchar = TRUE;
          flag.showtab = TRUE;
          break;
        case 'A':
          flag.showspchar = TRUE;
          flag.showend = TRUE;
          flag.showtab = TRUE;
          break;
        case 'h':
          flag.help = TRUE;
          break;
        default:
          error (0, 0, "Execute %s -h for help.\n", argv[0]);
          return 0;
        }
    }

  if (flag.help)
    {
      print_help ();
      exit (0);
    }

  /* If both -b and -n are given, -b overrides -n */
  if (flag.linenum_nonempty && flag.linenum_all)
    {
      flag.linenum_all = FALSE;
    }

  /* If no file is supplied, then take stdin as input */
  if (argc == 1)
    {
      status = wcat ("-", flag);
    }

  for (current_file = optind; current_file < argc; current_file++)
    {
      /* here we also pass the '-' parameter, which stands for stdin */
      status = wcat (argv[current_file], flag);
    }

  return 0;
}

/* Function Name : wcat
 * Parameters    :
 *  @ (const char *) file : File path
 *  @ (option_flags) flag : Flag bits
 * Return Value  : (int) Success or Failure
 * Globals       : (char []) line_number
 * Description   : Dumps the contents of the file described by 'file' to
 *                 stdout. Output can be modified by flags.
 */
/* FIXME: Should we also think about Windows \r\n new lines here? */
int
wcat (const char *file, option_flags flag)
{
  int fd;
  char inbuf[BLK_SIZE], outbuf[BLK_SIZE * 4], *outpt, *currch,
    prevch = NEW_LINE, *endbuf;
  long int bytes_read = 0, bytes_to_write = 0, bytes_written = 0;
  int status, nl_lock = FALSE;

  /* NOTE: A non-printing character can be represented by at most 4 characters
   * when -v is given. To be safe, 'outbuf[]' is declared with a size of
   * BLK_SIZE * 4.
   */

  if (file[0] == '-' && file[1] == NUL)
    {
      /* read from standard input if file is "-" */
      fd = STDIN_FILE;
    }
  else
    {
      /* else open the supplied file */
      fd = open (file, O_RDONLY);
      if (fd < 0)
        {
          error (0, errno, "Cannot Open File \"%s\"", file);
          return 0;
        }
    }

  while (TRUE)
    {
      bytes_read = read (fd, inbuf, BLK_SIZE);
      if (bytes_read == -1)
        {
          status = READ_ERROR;
          break;
        }
      if (bytes_read == 0)
        {
          status = SUCCESS;
          break;
        }

      bytes_to_write = 0;
      currch = inbuf;
      endbuf = inbuf + bytes_read;
      outpt = outbuf;

      while (currch < endbuf)
        {
          if (prevch == NEW_LINE)
            {
              if (flag.linenum_all)
                {
                  generate_line_number ();
                  memcpy (outpt, line_number, ARRAY_LENGTH);
                  outpt += ARRAY_LENGTH;
                }
              if ((flag.linenum_nonempty) && (*currch != NEW_LINE))
                {
                  generate_line_number ();
                  memcpy (outpt, line_number, ARRAY_LENGTH);
                  outpt += ARRAY_LENGTH;
                }
            }

          if (*currch == NEW_LINE)
            {
              if (flag.sqeeze_bl && prevch == NEW_LINE)
                {
                  if (nl_lock)
                    {
                      currch++;
                      continue;
                    }
                  else
                    {
                      nl_lock = TRUE;
                    }
                }
              else
                {
                  nl_lock = FALSE;
                }
              if (flag.showend)
                {
                  *outpt++ = '$';
                }
              *outpt++ = NEW_LINE;
            }
          else if ((*currch == TAB) && (flag.showtab))
            {
              *outpt++ = '^';
              *outpt++ = 'I';
            }
          else if ((flag.showspchar) && (!isprint (*currch)))
            {
              /* NOTE: This condition takes longer to check;
               * fix the if-else nesting, avoid unnecessary checks.
               */
              if (*currch == NEW_LINE || *currch == TAB)
                {
                  *outpt++ = *currch;
                }
              else
                {
                  unsigned char c = *currch;
                  if (c > 127)
                    {
                      *outpt++ = 'M';
                      *outpt++ = '-';
                      c = c - 128;
                    }
                  if (c == 127)
                    {
                      *outpt++ = '^';
                      c = c - 64;
                    }
                  if (c <= 32)
                    {
                      *outpt++ = '^';
                      c = c + 64;
                    }
                  *outpt++ = c;
                }
            }
          else
            {
              *outpt++ = *currch;
            }
          prevch = *currch++;
        }

      bytes_to_write = outpt - outbuf;
      bytes_written = write (STDOUT_FILE, outbuf, bytes_to_write);
      if (bytes_written == -1)
        {
          status = WRITE_ERROR;
          break;
        }
      if (bytes_written != bytes_to_write)
        {
          status = WRITE_ERROR;
          break;
        }
    }

  /* close the file only if it is not stdin */
  /* NOTE: Closing stdin would mean no more input is accepted by the current
   * running program, so it would break the next read from stdin if other
   * '-' parameters were supplied.
   */
  if (fd != STDIN_FILE)
    close (fd);

  return status;
}

/* Function Name : generate_line_number
 * Parameters    : (void)
 * Return Value  : (void)
 * Globals       :
 *  @ (char []) line_number
 * Description   : Each call of this function generates the next decimal count
 *                 described by the 'line_number' char array.
 */
void
generate_line_number (void)
{
  int i;

  line_number[LSB]++;
  if (line_number[LSB] == ':')
    {
      for (i = LSB; i >= 0; i--)
        {
          if (line_number[i] == ':')
            {
              line_number[i] = '0';
              if (line_number[i - 1] <= ' ')
                line_number[i - 1] = '1';
              else
                line_number[i - 1]++;
            }
          else
            break;
        }
    }
}

/* Function Name : print_help
 * Parameters    : (void)
 * Return Value  : (void)
 * Description   : Prints help for this program to stdout
 */
void
print_help (void)
{
  fprintf (stdout, "Usage: wcat [OPTION] FILE_1 [FILE_2] ... [FILE_n]\n");
  fprintf (stdout, "Concatenate FILE(s) to standard output\n");
  fprintf (stdout, "\nOptions:\n");
  fprintf (stdout, "\t-A\t\tSame as -vET\n");
  fprintf (stdout, "\t-b\t\tNumber only nonempty lines\n");
  fprintf (stdout, "\t-e\t\tSame as -vE\n");
  fprintf (stdout, "\t-E\t\tMark end of line with \'$\'\n");
  fprintf (stdout, "\t-n\t\tNumber all lines\n");
  fprintf (stdout, "\t-s\t\tSqueeze consecutive empty lines into one\n");
  fprintf (stdout, "\t-t\t\tSame as -vT\n");
  fprintf (stdout, "\t-T\t\tShow tab character as ^I\n");
  fprintf (stdout, "\t-v\t\tShow non-printing characters with M- or ^ prefix except TAB and LFD (new line)\n");
  fprintf (stdout, "\t-h\t\tShow this help\n");
  fprintf (stdout, "\n\nWith no FILE given, or when FILE is - (a hyphen), reads from standard input");
  fprintf (stdout, "\n\nExamples with standard input:\n\twcat file1 - file2 : Output file1's content, then standard input, then file2's content\n\twcat \t\t : Copy standard input to standard output");
  fprintf (stdout, "\n\nVersion: %s\tRevision: %s\tStatus: %s\n", VERSION, REVISION, STATUS);
}

Download this code here: Download wcat_v1_0.c.zip

To compile the code use the following command:

    gcc wcat.c -o wcat

3 thoughts on “wcat : A GNU cat implementation”

Hi Arjun, I know nothing of coding, it looks hard to do. How did you get the scrolling section in your page?

The code is not hard, and that's why I started with it. The main goal was to try to organize my coding practices. As for the scrolling portion of the page, it is easy: I have used div tags with inline CSS options. I have used it like shown below:

The style option defines the inline CSS options, which override the CSS options specified in the current blog theme's CSS file. The maximum height of the section is defined, and if the content exceeds it then a scrollbar is automatically added. You can similarly define a max-width, which may add a horizontal scroll bar. This is very useful for posting long code, and avoids a long main-page scroll.
Yet another document model?

Apache Axis2 1.1 has been released and offers exciting new features to fans of the long-running Apache series of web services frameworks. We'll cover Axis2 itself in a future article, but this article digs into the AXIs Object Model (AXIOM) XML document model that lies at the core of Axis2. AXIOM is one of the major innovations behind Axis2, and one of the reasons Axis2 offers the potential for substantially better performance than the original Axis. This article guides you through how AXIOM works, how various parts of Axis2 build on AXIOM, and then finishes with a look at how AXIOM performance compares to other Java™ document object models.

Document models are a commonly used approach to XML processing, and many different flavors are available for Java development, including various implementations of the original W3C DOM specification, JDOM, dom4j, XOM, and more. Each model claims some advantages over the others, whether performance, flexibility, or rigidly enforced adherence to the XML standard, and each has committed supporters. So why did Axis2 need a new model? The answer lies in how SOAP messages are structured, and especially in how extensions are added to the basic SOAP framework.

SOAP itself is really just a thin wrapper around an XML application payload. Listing 1 gives a sample, where the only parts that are actually defined by SOAP are those elements with the soapenv prefix. The bulk of the document is the application data that makes up the content of the soapenv:Body element.

Listing 1. SOAP sample

Despite the simplicity of the basic SOAP wrapper, it offers the potential for unlimited extensions by using an optional component called the header.
The header provides a place for all kinds of metadata to be added that will accompany the application data without being seen by the application (it is possible to include application data in the header, but there isn't a convincing case for why you'd want to do this rather than just using the body for application data). Extensions that build on SOAP (such as the whole WS-* family) can use the header for their own purposes without affecting the application. This allows the extensions to operate as add-ons, where the particular extended functions needed by an application can simply be selected at deployment time rather than baked into the code.

Listing 2 shows the same application data as the sample in Listing 1, but with WS-Addressing information included. While the original SOAP message would probably only be usable over an HTTP transport (because HTTP provides a two-way connection for an immediate response to be sent back to the client), the Listing 2 version could operate over other protocols because it includes the response metadata directly in the SOAP request message. It would even be easy to have a store-and-forward step involved in the processing of the message in Listing 2, since the metadata provides both request-target and response-target information.

Listing 2. SOAP sample with WS-Addressing

The document model dilemma

Since the whole point of the SOAP header is to allow arbitrary metadata to be added to a message, it's important that SOAP frameworks be able to accept anything that some extension decides to add. Generally speaking, the easiest way to work with arbitrary XML is by using a document model of one form or another. That's the whole point of a document model, after all -- to faithfully represent XML without any assumptions about the form of that XML. But document models aren't a very efficient way of working with XML that's used to exchange data between applications.
Application data normally has a predefined structure, and most developers prefer to work with that data in the form of data objects rather than as raw XML. The job of converting between data objects and XML is in the realm of what's called data binding. Data binding is not only more convenient for developers than working with a document model, it's also much more efficient in terms of both performance and memory use.

So most applications want to use data binding to work with the application payload of a SOAP message, but a document model approach is better for working with the metadata present in the header. The ideal approach would be to combine the two techniques within the SOAP framework, but normal document models aren't set up to allow this. They expect to work with an entire document, or at least an entire subtree of a document. They're not really set up to work with just selected portions of a document, as is best suited to SOAP.

In addition to AXIOM, there's another change between Axis and Axis2. While the original Axis used a standard push-style (SAX) parser for processing XML, Axis2 uses a pull-style (StAX) parser. With a push approach, the parser is in control of the parsing operation -- you give the parser a document to parse and a handler reference. It then uses the handler for callbacks into your code as it processes the input document. Your handler code can make use of the information passed by the callbacks, but cannot affect the parsing (except by throwing an exception). With a pull approach, on the other hand, the parser is effectively an iterator for going through the components of a document on demand. Both push and pull approaches have their uses, but the pull style has a major benefit when it comes to XML that contains logically separate components (such as SOAP).
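That benefit is easy to see with the StAX API itself, which ships with the JDK (javax.xml.stream); no AXIOM is required for the illustration. In this sketch (the element names are invented), one piece of code pulls events only until it reaches the element it cares about, then leaves the parser positioned there for whatever processes the rest:

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class PullHandoff {
    // Pull events until the named element is reached, recording each start
    // tag seen along the way; the parser is left positioned on that element,
    // ready to be handed off to the next stage of document processing.
    static List<String> advanceTo(XMLStreamReader reader, String name) throws Exception {
        List<String> seen = new ArrayList<>();
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                seen.add(reader.getLocalName());
                if (reader.getLocalName().equals(name)) {
                    break;
                }
            }
        }
        return seen;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<env><header><to>someURI</to></header><body>payload</body></env>";
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        // The "header handler" consumes only the part it needs...
        System.out.println(advanceTo(reader, "body"));
        // ...and a "body handler" could now continue from the same reader.
    }
}
```

With a push (SAX) parser there is no equivalent hand-off point: the parser drives all the way to the end of the document in one call.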
With a pull parser, the code that handles one part of the document can parse only as much as it needs and then hand off the parser to whatever comes next in the document processing. AXIOM is built around the StAX pull parser interface.

AXIOM provides a virtual document model that it expands on demand, building only as much of the tree-structured document model representation as has been requested by the client application. This virtual document model works at the level of elements in the XML document. An element representation is created when the parser reports the element start tag, but the initial form of that element is essentially just a shell that holds a reference to the parser. If the application needs the details of the element's content, it simply requests the information by calling a method of the interface (such as the org.apache.axiom.om.OMContainer.getChildren() method). The element then builds the child content from the parser in response to the method call.

Since a parser delivers data in document order (the same order items appear in the XML document text), the on-demand construction implemented by AXIOM requires some clever handling. For instance, it's normal to have multiple elements in an incomplete (under construction) state, but these elements all have to be in a direct line of descent. In terms of the standard sort of tree diagram of XML with the root element at the top, the incomplete elements will always be in the line down the right side of the tree. As more data is requested by the application, the tree grows further to the right, with the bottom elements completed first.

All the XML document models have a great deal in common when it comes to their APIs (no big surprise, since they're all working with the same underlying data), but each has some quirks that distinguish it from the others.
The original W3C Document Object Model (DOM) was designed for cross-language and cross-platform compatibility, so it builds on interfaces and avoids using Java-specific collections in favor of its own versions. JDOM uses concrete classes rather than interfaces and incorporates the standard Java collections classes, for an API that many Java developers find friendlier than DOM. dom4j combines DOM-like interfaces with Java collections classes for a very flexible API that offers a lot of power -- at the cost of some complexity.

AXIOM has a lot in common with these other document models. It also has a few significant differences related to its on-demand build process, along with some specialized features to support its use in web services. AXIOM's API is probably closest to DOM in overall feel, but it has its own quirks. For instance, the access methods are designed around using java.util.Iterator instances for access to components (as returned by org.apache.axiom.om.OMContainer.getChildren() and related methods), rather than any form of list. Instead of indexing into a list of components, navigation uses the org.apache.axiom.om.OMNode.getNextOMSibling() and org.apache.axiom.om.OMNode.getPreviousOMSibling() methods to move sequentially through nodes at a level of the document tree (similar to DOM in this respect). This structuring of access and navigation methods matches how on-demand tree construction works, since it means AXIOM can let you move to the first child of your starting element without having to first process all the child elements.

Like DOM and dom4j, AXIOM defines the API used to access and manipulate the tree representation using interfaces. The AXIOM distribution includes several different specialized implementations of these interfaces. One implementation (in package org.apache.axiom.om.impl.dom) is dual-headed, supporting both AXIOM and DOM interfaces with the same implementation classes.
This can be useful because of the number of web services add-ons that expect to work with a DOM view of the data. For more general use, the org.apache.axiom.om.impl.llom package provides an implementation based on linked lists of objects (the "ll" part of the package name). There are also extensions of both the basic org.apache.axiom.om interfaces and these implementations in the org.apache.axiom.soap package tree, customized for use with SOAP messages.

For a quick look at the AXIOM API in action, we'll look at some samples from the code used for performance-testing AXIOM against other document models. Listing 3 gives the first sample, based on the code used to build the AXIOM representation of an input document. As with DOM and dom4j, before doing anything with AXIOM you need to get a factory that builds the component objects of the model. The Listing 3 code selects the basic linked-list implementation of the AXIOM interfaces by using the org.apache.axiom.om.impl.llom.factory.OMLinkedListImplFactory implementation of the org.apache.axiom.om.OMFactory interface. The factory interface includes methods to build a document from various sources directly, and includes methods for creating individual components of the XML document representation. Listing 3 uses the method to build a document from an input stream. The object returned by the build() method is actually an instance of org.apache.axiom.om.OMDocument, though that's not specified by this code.

Listing 3. Parsing a document into AXIOM

Listing 3 uses the org.apache.axiom.om.impl.builder.StAXOMBuilder class to build the document representation by parsing an input stream. This just creates a StAX parser instance and the basic document structure before returning, leaving the parser positioned inside the root element of the document with the rest of the document representation to be built later, if needed. AXIOM doesn't have to be built using a StAX parser.
In fact, org.apache.axiom.om.impl.builder.SAXOMBuilder is a partial implementation of a builder based on the SAX push parser. But if you build it any other way, you won't get the benefits of on-demand construction.

Listing 4 shows the code used to "walk" through the elements in a document representation and accumulate summary information (the count of elements, the count and total length of attribute value text, and the count and total length of text content). The walk() method at the bottom takes a document to be summarized, along with the summary data structure, while the walkElement() method at the top processes one element (calling itself recursively to process child elements).

Listing 4. Navigating AXIOM

Finally, Listing 5 gives the code used to write the document representation to an output stream. AXIOM defines many output methods as part of the OMNode interface, including variations of destination (an output stream, a normal character writer, or a StAX stream writer), with and without formatting information, and with and without the ability to access the document representation after it's been written (which requires the full representation to be built, if not already constructed). The OMElement interface defines yet another way of accessing the document information: getting a StAX parser from the element. This ability to pull XML out of the representation using a parser offers an appealing symmetry, and it works nicely when the AXIOM representation is built on demand (since the parser being used to build the representation can then be returned directly).

Listing 5. Writing a document from AXIOM

AXIOM provides some basic methods for modifying an existing document component (such as OMElement.setText() to set the content of an element to a text value). If you're starting from scratch, you'll need to create new instances of components directly. Since the AXIOM API is based on interfaces, it uses factories to create actual implementations of the components.
For more details on the AXIOM API, see the links for the AXIOM tutorial and JavaDocs in the Resources section.

One of the most interesting features of AXIOM is its built-in support for the W3C XOP and MTOM standards used in the latest version of SOAP attachments. These two standards work together: XML-binary Optimized Packaging (XOP) provides a way for XML documents to logically include blobs of arbitrary binary data, and MTOM (SOAP Message Transmission Optimization Mechanism) applies the XOP technique to SOAP messages. XOP and MTOM are crucial features of the new generation of web services frameworks, since they finally provide interoperable attachment support and end the current problems in this area.

XOP works with base64-encoded character data content. Base64 encoding transforms arbitrary data values into printable ASCII characters by using one ASCII character to represent each six bits of the original data. Since binary data cannot generally be embedded inside XML (XML only works with characters, not raw bytes; even a number of character codes are not allowed in XML), base64 encoding is useful for embedding binary data into XML messages. XOP replaces the actual base64 text with a special "Include" element from the XOP namespace. The Include element gives a URI that identifies a separate entity (outside the XML document) which is the actual data to be included in the XML document. Normally this URI will identify a separate block within the same transmission as the XML document (though there's no requirement that it do so, which offers potential benefits for exchanging documents through intermediaries or storing documents). The benefits of replacing the base64 text with a reference to the raw data are somewhat reduced document size (up to 25 percent smaller for normal character encodings) and faster processing, without the overhead of encoding and decoding the base64 data.
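The 25 percent figure follows directly from base64's four-characters-per-three-bytes expansion: encoded text is one third larger than the raw bytes, so dropping the encoding saves 1 - 3/4 of the size. A quick check with the JDK's own java.util.Base64 (available since JDK 8; the byte count is arbitrary):

```java
import java.util.Base64;

public class Base64Overhead {
    // Ratio of base64-encoded length to raw length for a payload of n bytes.
    static double expansion(int n) {
        byte[] raw = new byte[n];   // the contents don't matter for the size
        String encoded = Base64.getEncoder().encodeToString(raw);
        return (double) encoded.length() / n;
    }

    public static void main(String[] args) {
        // 3000 raw bytes become 4000 base64 characters: a 4/3 expansion,
        // so referencing the raw data instead trims about a quarter of the size
        System.out.println(expansion(3000));
    }
}
```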
MTOM builds on XOP, first defining an abstract model of how XOP can be used for SOAP messages, then specializing that model for use with MIME Multipart/Related packaging, and finally applying it to HTTP transport. Altogether, this provides a standard way to apply XOP to SOAP messages using the widely used HTTP transport.

AXIOM supports XOP/MTOM through the org.apache.axiom.om.OMText interface and the implementations of this interface. OMText defines methods to support text items backed by binary data (in the form of a javax.activation.DataHandler, part of the Java Activation API widely used for attachment support in Java web services frameworks), along with an "optimize" flag that tells whether the item can be processed using XOP. The org.apache.axiom.om.impl.llom.OMTextImpl implementation adds an MTOM-compatible content ID that can be set when an instance of the class is created, or generated automatically if it is not set.

Listing 6 shows a sample of how to build a message using XOP/MTOM in AXIOM. This code is taken from a performance test example which uses Java serialization to convert a result data structure into an array of bytes, and then returns that array as an attachment.

Listing 6. Creating an XOP/MTOM message

Even though the code in Listing 6 generates a response that can be sent using XOP/MTOM, in the current version of Axis2 XOP/MTOM support is disabled by default. To turn it on, you need to include a parameter <parameter name="enableMTOM">true</parameter> in either the Axis2 axis2.xml file or the services.xml file for your service. We'll provide the full code for this example as part of upcoming performance comparisons, but for now we'll finish with a sample of XOP/MTOM in use. Listing 7 shows the structure of the response message generated by the Listing 6 service, with and without XOP/MTOM enabled (without the MIME headers and the actual binary attachment in the first case, and with most of the data left out in the second case).

Listing 7. Response message with and without XOP/MTOM

Most developers working with web services need to work with data in the form of Java objects, rather than XML documents (or even document models, such as AXIOM). Last-generation frameworks, such as Axis and JAX-RPC, implemented their own forms of data binding to convert between XML and Java objects, but this was a very limited solution. The data binding implementations in web services frameworks generally couldn't compete with specialized data binding frameworks, so users who wanted better control over the handling of XML had to "glue" the frameworks together with inefficient conversion code. Because of these issues, Axis2 was designed from the beginning to support "plug-in" data binding using a wide range of data binding frameworks.

The data binding support uses customized extensions to the WSDL2Java tool included in Axis2. This tool generates Axis2 linkage code based on a WSDL service description, in the form of a stub for the client side or a message receiver for the server side. The client-side stub functions as a proxy for making calls to the service, defining method calls that implement the service operations. The server-side message receiver functions as a proxy for the client, calling the actual user-defined service method. When data binding is requested on the WSDL2Java command line, the tool calls the specified data binding framework extension to generate code in the stub or message receiver that converts between OMElements and Java objects. In the case of the stub, the Java objects (or primitive values) are passed in the method call and the converted XML is sent to the server as a request. The returned XML is then converted back into a Java object, which is then returned as the result of the method call. The message receiver on the server side does the same conversions in reverse.

On the inbound or unmarshalling side (converting received XML to Java objects), the handling is easy.
The XML document payload to be converted into Java objects is available in the form of an OMElement, and the data binding framework just needs to process the data from that element. OMElement provides access to the element data in the form of a StAX javax.xml.stream.XMLStreamReader, which most current data binding frameworks can use directly as input.

The outbound or marshalling side (converting Java objects to transmitted XML) is a little more difficult. The whole point of AXIOM is to avoid building a full representation of XML data unless absolutely necessary. To support this principle when marshalling, there has to be a way for the data binding framework to be invoked only when needed. AXIOM handles this by using an org.apache.axiom.om.OMDataSource as a wrapper for the data binding conversion. OMDataSource defines methods for writing out the wrapped content using any of the methods supported by AXIOM (to a java.io.OutputStream, a java.io.Writer, or a StAX javax.xml.stream.XMLStreamWriter), along with another method that returns a parser instance for the wrapped content. An OMElement implementation can use an instance of OMDataSource to supply data on demand, and the OMFactory interface provides a method to create this type of element.

We'll wrap up coverage of AXIOM with a quick look at performance. At the time this article was written, AXIOM was available as a 1.1 release, which means that the interfaces described in this article should be stable. Performance, on the other hand, is subject to change over time as the actual implementation code is modified. We don't expect to see major changes in AXIOM's performance relative to the other document models, but it's possible that some of the details will be different.
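The OMDataSource idea of deferring marshalling until output is actually requested can be sketched with plain JDK StAX pieces. The Source interface and names below are invented stand-ins for illustration, not AXIOM's actual API:

```java
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;
import java.io.StringWriter;

public class DeferredSource {
    // Stand-in for the OMDataSource idea: the object holds only the Java
    // data, and produces XML events only when someone asks for output.
    interface Source {
        void serialize(XMLStreamWriter out) throws Exception;
    }

    static String write(Source src) throws Exception {
        StringWriter sw = new StringWriter();
        XMLStreamWriter out = XMLOutputFactory.newInstance().createXMLStreamWriter(sw);
        src.serialize(out);
        out.flush();
        return sw.toString();
    }

    public static void main(String[] args) throws Exception {
        int quantity = 3;   // the "bound" Java data
        Source src = out -> {
            out.writeStartElement("quantity");
            out.writeCharacters(Integer.toString(quantity));
            out.writeEndElement();
        };
        // No XML text exists until this call: marshalling happens on demand
        System.out.println(write(src));
    }
}
```

A message tree can hold many such deferred sources, and only the ones actually serialized pay any conversion cost.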
To compare AXIOM with other document models, we updated code from an earlier study of document models (see Resources), adding AXIOM and one other new document model (XOM), converting the code to use the StAX parser standard rather than the older SAX standard, and switching to the improved System.nanoTime() timing method introduced in Java 5.

The performance test code first reads the documents to be used in the test into memory, then runs through a sequence of operations on the documents. First, some number of copies of the document representations are built from parsers, with each resulting document object retained. Next, each document object is "walked," meaning the code scans the entire document representation (including all attributes, values, and text content). Finally, each document object is written to an output stream. The time is recorded for each individual operation and averaged at the end of the test run.

Since the main focus of AXIOM (especially its use in Axis2) is on SOAP message handling, we used three different SOAP message test cases to check the performance. The first one is a sample response from a web service performance test that gives information about earthquakes within a particular time and latitude/longitude range ("quakes," 18 KB). This document contains many repeated elements with some nesting and heavy use of attributes. The second one is a larger sample from a Microsoft WCF interoperability test, consisting of a single structure repeated with minor variations in values ("persons," 202 KB). This document has child elements with text content but no use of attributes. The third test case is a collection of 30 small SOAP message documents taken from some older interoperability tests (total size 19 KB). All formatting white space was removed from the test documents to make them representative of XML exchanged by real production services (which generally turn formatting off to keep message sizes down).
The charts below show the average time required for 50 passes on each document. The test environment was a Compaq notebook system with a 1600 MHz ML-30 AMD Turion processor and 1.5 GB of RAM, running Sun's 1.5.0_07-b03 JVM on Mandriva 2006 Linux. We tested AXIOM 1.0, dom4j 1.6.1, JDOM 1.0, Xerces2 2.7.0, and XOM from the Nux 1.6 distribution. Custom builders from StAX parsers were used for dom4j, JDOM, and Xerces2, and the Nux StAX parser builder was used for XOM. All tests used the Woodstox StAX parser 2.9.3.

Figure 1 shows the sums of the average times required for the first two steps in the test sequence: building a document from the parser, and walking the document representation to check the content. If you look at the first step in isolation (times not shown here), AXIOM does much better than the other document models, at least for the first two documents. However, this just shows that AXIOM is working as expected and is not actually building the full document until needed. We wanted to use the time taken to actually construct the full representation in memory for a fair comparison, which is why the two times in this chart are summed.

Figure 1. Times to build and expand documents (in milliseconds)

As seen in Figure 1, AXIOM is slower overall than any of the other document models tested except Xerces2. Several of the document models showed performance problems when it comes to the collection of small documents ("soaps"). Xerces2 was especially bad in this case, but AXIOM also showed a lot of overhead, which is probably the most troubling issue this chart shows. Small messages are common in many web services, and AXIOM should be able to process them efficiently. Given that AXIOM is really designed around the on-demand expansion of the tree, the timings for the two larger documents are not a big concern, since they're at least close to the other document models.

Figure 2. Times to write documents (in milliseconds)

Figure 2 shows the average times for writing the documents to an output stream using each model. Here Xerces2 actually gives the best times by a substantial margin (but not enough to make up for its poor performance on the build step; the scales of the two charts are different), while AXIOM is the worst. Here again, AXIOM seems to do an especially poor job with small documents.

Figure 3. Document memory sizes (in KB)

Finally, Figure 3 shows the memory used by each of the frameworks to represent the documents. dom4j is again the clear best, and AXIOM the worst by a considerable margin. Part of AXIOM's poor performance in memory use is due to the parser being referenced by the constructed document, so that the parser instance will be kept around as long as the document instance is in use. That's likely part of why AXIOM again does especially poorly with small documents. However, the objects used by AXIOM as components of the document are also considerably larger than their counterparts in the other document models, and this difference is probably why AXIOM uses much more space even for the two larger test documents (where the fixed-size overhead of the parser and other data structures is smaller in proportion to the total memory usage).

If you sum the times from the first two charts, the overall performance leader is dom4j, while the performance laggard is Xerces2 (with AXIOM just a smidgen ahead of the latter). On memory use, dom4j is again best, but in this contest AXIOM is the undisputed loser. Sound grim for AXIOM? It would be grim if the full tree representation were always built, but remember that the whole point of AXIOM is that often this full representation is not needed. Figure 4 shows the time for just the initial document construction in AXIOM, compared with the corresponding time for the other document models to build the document representation.
Here AXIOM comes out much faster than the others (too fast to even register, in the case of the two larger documents). The same sort of comparison applies on the memory side. The net result is that if you need to work with only part of the document model (the "first" part, in document-order terms), AXIOM delivers great performance.

Figure 4. Initial document construction

This article has provided an inside view of the AXIOM document object model at the core of Axis2. AXIOM embodies some interesting innovations, especially in terms of its build-on-demand approach to constructing the full representation. It's not quite on a par with other Java document models when you need a fully expanded representation. Its performance with small documents is especially troubling, but the flexibility it provides for web service processing balances out many of these concerns. You now know how Axis2 handles SOAP message representations using AXIOM, including how it passes XML to and from data binding frameworks. The next article looks at how Axis2 support for different data binding frameworks works from a user perspective, including code samples using three of the frameworks.

Resources

Learn

- For an older and more detailed comparison of Java document object models, see Dennis Sosnoski's "Document models, Part 1: Performance" and "Java document model usage."
- View the developerWorks web services standards library for information on a wide range of WS-* technologies.
- Get a great overview of Axis2 with Eran Chinthaka's "Web services and Axis2 architecture."

Get products and technologies

- Build your next development project with IBM trial software, available for download directly from developerWorks.
- Get more information and try AXIOM for yourself by downloading it at the Apache AXIOM Project.
- The dom4j document object model is fast, memory-efficient, and also offers great extensibility.
- The JDOM document object model is known for ease of use.
- The XOM document object model protects users from common mistakes in the use of XML, while offering good performance and memory efficiency.
- Nux is an open source project geared toward high-throughput XML messaging middleware that aims to integrate best-of-breed components for XML processing into a single toolkit.
- Xerces2 is the Apache implementation of the W3C document object model standard.
- Get the latest Apache Axis2 information and downloads.
- Woodstox is an open source implementation of the StAX pull parser standard.

Dennis Sosnoski is a consultant specializing in Java-based SOA and web services. His professional software development experience spans over 30 years, with the last seven years focused on server-side XML and Java technologies. Dennis is the lead developer of the open source JiBX XML Data Binding framework built around Java classworking technology and the associated JiBXSoap web services framework, as well as a committer on the Apache Axis2 web services framework. He was also one of the expert group members for the JAX-WS 2.0 and JAXB 2.0 specifications.
Tim Daneliuk wrote:
> I am a bit confused. I was under the impression that:
>
>     class foo(object):
>         x = 0
>         y = 1
>
> means that x and y are variables shared by all instances of a class.

What it actually does is define names with the given values *in the class namespace*.

> But when I run this against two instances of foo, and set the values
> of x and y, they are indeed unique to the *instance* rather than the
> class.

I imagine here you are setting instance variables, which then *mask* the presence of class variables with the same name, because "self-relative" name resolution looks in the instance namespace before it looks in the class namespace.

> It is late and I am probably missing the obvious. Enlightenment
> appreciated ...

You can refer to class variables using the class name explicitly, both within methods and externally:

 >>> class X:
 ...     count = 0
 ...     def getCt(self):
 ...         return self.count
 ...     def inc(self):
 ...         self.count += 1
 ...
 >>> x1 = X()
 >>> x2 = X()
 >>> id(x1.count)
 168378284
 >>> x1.inc()
 >>> id(x1.count)
 168378272
 >>> id(x2.count)
 168378284
 >>> id(X.count)
 168378284
 >>> x1.getCt()
 1
 >>> x2.getCt()
 0
 >>>

regards
 Steve

--
Steve Holden
Python Web Programming
Holden Web LLC
+1 703 861 4237
+1 800 494 3119
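Steve's point about masking reduces to a minimal sketch (the class and names here are invented for illustration): the augmented assignment in inc() reads the class attribute but binds the result as a new instance attribute.

```python
class Counter:
    count = 0  # class attribute, found by lookup until an instance assigns it

    def inc(self):
        # reads Counter.count (0), then binds self.count = 1 as a NEW
        # instance attribute that masks the class attribute from now on
        self.count += 1

a, b = Counter(), Counter()
a.inc()
print(a.count, b.count, Counter.count)  # -> 1 0 0
```

Only `a` acquires its own `count`; `b` and the class itself still resolve to the shared class attribute.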
I am learning pointers and doing an exercise where I dynamically allocate memory, accept input from the user, store double the user's entry in the dynamically allocated memory on the heap, and print that heap buffer to the console. The problem I am having is that it is not printing double the user's entry. I have debugged it, and it looks like the t variable is not saving double the user entry, and I'm not sure how to resolve this. I have posted my code below and would greatly appreciate any tips or hints that would help me solve the issue.

The current output is:

Say something: hey
Size of char: 8
Size of s: 8
Size of t: 8
Doubling copy...
Original: hey
Double copy: hey
Counter: 8

The output I want is:

Say something: hey
Size of char: 8
Size of s: 8
Size of t: 8
Doubling copy...
Original: hey
Double copy: heyhey    (I would like this line to print double the word the user entered as input)
Counter: 8

#include <stdio.h>
#include <cs50.h>
#include <string.h>
#include <ctype.h>

int main(void)
{
    int scale_value = 2;
    int counter = 0;

    printf("Say something: ");
    char* s = GetString();
    if (s == NULL)
    {
        return 1;
    }

    string t = malloc((strlen(s) * scale_value + 1) * sizeof(char));
    if (t == NULL)
    {
        free(s);
        return 1;
    }

    printf("Size of char: %lu\n", sizeof(char*));
    printf("Size of s: %lu\n", sizeof(s));
    printf("Size of t: %lu\n", sizeof(t));

    for (int j = 0; j < scale_value; j++)
    {
        for (int i = 0, n = strlen(s); i <= n; i++)
        {
            t[counter] = s[i];
            counter++;
        }
    }

    printf("Doubling copy...\n");
    if (strlen(t) > 0)
    {
        printf("Original: %s\n", s);
        printf("Double copy: %s\n", t);
        printf("Counter: %d\n", counter);
    }
}
Try this instead of the copying part:

```c
for (int j = 0; j < scale_value; j++)
{
    /* change <= to < to exclude the terminating null-character from what is copied */
    for (int i = 0, n = strlen(s); i < n; i++)
    {
        t[counter] = s[i];
        counter++;
    }
}
/* terminate the string once, after all copies */
t[counter++] = '\0';
```

Also note that %lu is not the correct format specifier to print what is returned from sizeof, which has type size_t; %zu is correct. Using an incorrect format specifier will invoke undefined behavior.
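To see concretely why the early terminator truncates the output, here is a small Python sketch (an illustration, not part of the original thread) that emulates the C buffer and strlen's stop-at-first-NUL behaviour:

```python
def c_strlen(buf):
    """Emulate C strlen(): count characters up to the first NUL."""
    n = 0
    while buf[n] != "\0":
        n += 1
    return n

def doubled_copy(s, copy_nul_each_pass, scale_value=2):
    """Emulate the copy loop; the flag mirrors using <= (True) vs < (False)."""
    buf = []
    for _ in range(scale_value):
        limit = len(s) + 1 if copy_nul_each_pass else len(s)
        for i in range(limit):
            buf.append(s[i] if i < len(s) else "\0")
    buf.append("\0")  # the fixed code appends exactly one terminator at the end
    return buf

# Buggy version (<=): a NUL is copied at the end of the FIRST pass,
# so C's strlen()/printf() stop right after the first "hey".
buggy = doubled_copy("hey", copy_nul_each_pass=True)
assert "".join(buggy[:c_strlen(buggy)]) == "hey"

# Fixed version (<): no embedded NUL, so the full doubled string is visible.
fixed = doubled_copy("hey", copy_nul_each_pass=False)
assert "".join(fixed[:c_strlen(fixed)]) == "heyhey"
```

In the real C code the extra copied NUL also writes one byte past the end of the allocation, which the Python list hides; the truncation, however, is exactly what the sketch shows.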
https://codedump.io/share/wBvoGoXWejww/1/storing-double-the-user-entry-in-a-dynamic-pointer-in-c
I'm using the jquery.validate.pack.js plugin to validate my contact form. I want to block certain email address accounts such as Hotmail, Outlook and Live from being validated. How can I add this rule to the validation? I have this line of code which validates the email address:

```php
if (!eregi("^[A-Z0-9._%-]+@[A-Z0-9._%-]+\\.[A-Z]{2,4}$", trim($_POST['email'])))
```

Can I simply add the email addresses I want to disallow into the expression, and if so, where? I realise that eregi is now deprecated and I will update it.

Hi freakystreak. Are you saying you want Hotmail addresses to fail validation, or do you want them to pass validation without using the regex?

I want any Hotmail address to fail validation.

OK, so assuming that your email validation code has to return true or false, you could do it like this:

```php
$blacklist = array(
    'hotmail.com',
    'outlook.com',
    'live.com'
);

foreach ($blacklist as $domain) {
    // strpos() returns FALSE when the needle is absent, so compare strictly
    if (strpos($email, $domain) !== false) {
        return FALSE;
    }
}

return (bool) preg_match("/^[A-Z0-9._%-]+@[A-Z0-9._%-]+\\.[A-Z]{2,4}$/i", $email);
```

Basically you have a blacklist of the domains that you want to fail validation, and you loop through each one, checking whether it's present in the email address string.

Thanks for that, I'll give it a try. Is there a way to get custom error messages with jquery.validate.pack.js so I can let the user know that I don't accept Hotmail? Appreciate the help.
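The same idea in a runnable Python sketch (illustrative: the regex mirrors the one from the thread, and the blacklist values are examples). Comparing the domain part exactly avoids the substring pitfall, where an unrelated domain that merely contains "hotmail.com" would also be rejected:

```python
import re

BLACKLIST = {"hotmail.com", "outlook.com", "live.com"}
EMAIL_RE = re.compile(r"^[A-Z0-9._%-]+@[A-Z0-9._%-]+\.[A-Z]{2,4}$", re.IGNORECASE)

def validate_email(email):
    """Return True only for syntactically valid, non-blacklisted addresses."""
    email = email.strip()
    if not EMAIL_RE.match(email):
        return False
    # Compare the domain part exactly rather than substring-matching the
    # whole address, so e.g. "user@nothotmail.com" is not rejected by accident.
    domain = email.split("@", 1)[1].lower()
    return domain not in BLACKLIST

assert validate_email("someone@gmail.com") is True
assert validate_email("someone@hotmail.com") is False
assert validate_email("someone@nothotmail.com") is True
assert validate_email("not-an-email") is False
```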
http://community.sitepoint.com/t/validating-email-addresses/33321
Advanced Coloring

Coloring Molecules

Basic Coloring

Any molecule in PyMOL can be assigned a color using the small rightmost buttons in the object list (in the upper right part of the main GUI window). The Color command will do the same. PyMOL has a predefined set of colors that can be edited in the Settings->Colors menu. Alternatively, you can use the Set_Color command.

Coloring secondary structures

To assign helices, sheets and loops individual colors, do:

 color red, ss h
 color yellow, ss s
 color green, ss l+''

When the colour bleeds from the ends of helices and sheets into loops, do:

 set cartoon_discrete_colors, 1

or activate Cartoon -> Discrete Colors in the GUI menu.

Coloring by atom type

The util.cba* ("Color By Atom") commands color atoms according to type: oxygen in red, nitrogen in blue, hydrogen in white. Carbon gets a different color depending on the command. For instance:

 util.cbay three

will color the object three by atom type, with the carbon atoms in yellow.

CMYK-safe Colors

There are two distinct color spaces on computers: RGB (red-green-blue), which is for screens, and CMYK (cyan-magenta-yellow-black), which is for printing. Some RGB triplets do not have equivalents in CMYK space. As a result, a figure that looks great on a screen can come out with unpredictable colors when printed. Most applications do a good job with RGB-to-CMYK conversions for photos, but do not do such a good job with graphics that use pure primary colors. For example, reds are generally OK, but pure blues and greens do not translate very well. Here are some RGB values that are within the CMYK gamut (i.e.
are "CMYK-safe"):

 # optimized rgb values for cmyk output:
 set_color dblue= [0.05 , 0.19 , 0.57]
 set_color blue=  [0.02 , 0.50 , 0.72]
 set_color mblue= [0.5  , 0.7  , 0.9 ]
 set_color lblue= [0.86 , 1.00 , 1.00]
 set_color green= [0.00 , 0.53 , 0.22]
 set_color lgreen=[0.50 , 0.78 , 0.50]
 set_color yellow=[0.95 , 0.78 , 0.00]
 set_color orange=[1.00 , 0.40 , 0.0 ]
 # these are trivial
 set_color red=   [1.00 , 0.00 , 0.00]
 set_color mred=  [1.00 , 0.40 , 0.40]
 set_color lred=  [1.00 , 0.80 , 0.80]
 set_color vlred= [1.00 , 0.90 , 0.90]
 set_color white= [1.00 , 1.00 , 1.00]
 set_color vlgray=[0.95 , 0.95 , 0.95]
 set_color lgray= [0.90 , 0.90 , 0.90]
 set_color gray=  [0.70 , 0.70 , 0.70]
 set_color dgray= [0.50 , 0.50 , 0.50]
 set_color vdgray=[0.30 , 0.30 , 0.30]
 set_color black= [0.00 , 0.00 , 0.00]

Note that there are default atom colors such as "carbon", "nitrogen", "oxygen", "hydrogen", "sulfur", etc., which should also be redefined:

 set_color carbon= [0.00 , 0.53 , 0.22]
 etc.

Coloring with 'chainbows' from a script

The chainbow function can be invoked by:

 util.chainbow("object-name")

Assign color by B-factor

Robert Campbell has a color_b.py python script on his PyMOL web page that you can use. It has a number of options, including the selection, the minimum and maximum values to consider, several types of colouring schemes (a selection of gradients plus the ability to set the saturation and brightness levels) and two types of binning of the colours (an equal number of atoms in each colour, or equal spacing of colours along the B-factor range). See to download.

PyMol B-factor Coloring

This concept is also discussed in Coloring by BFactors and Spectrum, which has a great list of the colors in the spectrum. This just shows a quick, standard PyMol way to color your protein by b-factor.
It also sets the range of color for coloring:

 spectrum b, minimum=20, maximum=50

Or to color on a per-object basis:

 load myprotein.pdb
 spectrum b, selection=myprotein, minimum=20, maximum=50

See Also: Color, Coloring with Spectrum

Creating a Color bar

To show a vertical/horizontal color bar indicating the b-factor variation, use the script pseudobar.pml on the structure pseudobar.pdb, or do the following:

- Create a PDB file which contains CA positions only, where the numbers correspond to your wanted increments of colors. Be sure that the CAs are separated by a constant value, say 5 Angstroem.
- Load this new pseudobar PDB file into PyMOL, make bonds between increment 1 and increment 2 [increment 2 and increment 3, and so on...], define/assign a smooth color for each increment (copy the color definitions automatically created by the b-factor script) and show the b-factor bar as lines (or sticks).

Also, see the newly created spectrumbar script!

Coloring insides and outsides of helices differently

The inside of helices can be addressed with:

 set cartoon_highlight_color, red

Coloring all objects differently

Is there a simple way to colour each object currently loaded with a different colour? There is a script color_obj.py that does the job.

USAGE

 color_obj(rainbow=0)

This function colours each object currently in the PyMOL hierarchy with a different colour. The colours used are either the 22 named colours used by PyMOL (in which case the 23rd object, if it exists, gets the same colour as the first), or the colours of the rainbow.

List the color of atoms

To retrieve the color for all residues in a selection, you can iterate over it from the PyMOL command line:

 iterate all, print color

In Python, it looks like this:

 import pymol
 pymol.color_list = []
 cmd.iterate('all', 'pymol.color_list.append(color)')
 print pymol.color_list
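The manual color-bar recipe above can also be scripted. Here is a minimal Python sketch (an illustration, not the official pseudobar.pml; the field layout follows the standard PDB ATOM record, and the 5 Angstroem spacing and the 20-50 B-factor range are just the example values from the text):

```python
def make_pseudobar(n_increments, spacing=5.0, b_min=20.0, b_max=50.0):
    """Write a CA-only PDB 'color bar': one CA per increment, evenly spaced
    along y, with the B-factor ramped linearly from b_min to b_max.
    Requires n_increments >= 2."""
    lines = []
    for i in range(n_increments):
        b = b_min + (b_max - b_min) * i / (n_increments - 1)
        # Standard PDB ATOM record: serial, name, resName, chain, resSeq,
        # x/y/z coordinates, occupancy, B-factor
        lines.append(
            "ATOM  %5d  CA  ALA A%4d    %8.3f%8.3f%8.3f%6.2f%6.2f"
            % (i + 1, i + 1, 0.0, i * spacing, 0.0, 1.0, b)
        )
    return "\n".join(lines) + "\nEND\n"

# Seven increments, 5 Angstroem apart, B-factors ramping 20..50
pdb_text = make_pseudobar(7)
records = [l for l in pdb_text.splitlines() if l.startswith("ATOM")]
assert len(records) == 7
assert float(records[1][38:46]) - float(records[0][38:46]) == 5.0
```

Loading the resulting file into PyMOL and running `spectrum b` on it then gives the bar its gradient.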
http://www.pymolwiki.org/index.php?title=Advanced_Coloring&oldid=8253
*** Changes in GCC 3.3:

* The "new X = 3" extension has been removed; you must now use "new X(3)".

* G++ no longer allows in-class initializations of static data members
  that do not have arithmetic or enumeration type. For example:

    struct S { static const char* const p = "abc"; };

  is no longer accepted. Use the standards-conformant form:

    struct S { static const char* const p; };
    const char* const S::p = "abc";

  instead. (ISO C++ is even stricter; it does not allow in-class
  initializations of floating-point types.)

*** Changes in GCC 3.1:

* -fhonor-std and -fno-honor-std have been removed. -fno-honor-std was a
  workaround to allow std compliant code to work with the non-std compliant
  libstdc++-v2. libstdc++-v3 is std compliant.

* The C++ ABI has been fixed so that `void (A::*)() const' is mangled as
  "M1AKFvvE", rather than "MK1AFvvE" as before. This change only affects
  pointer to cv-qualified member function types.

* The C++ ABI has been changed to correctly handle this code:

    struct A {
      void operator delete[] (void *, size_t);
    };

    struct B : public A {
    };

    new B[10];

  The amount of storage allocated for the array will be greater than it was
  in 3.0, in order to store the number of elements in the array, so that the
  correct size can be passed to `operator delete[]' when the array is
  deleted. Previously, the value passed to `operator delete[]' was
  unpredictable. This change will only affect code that declares a
  two-argument `operator delete[]' with a second parameter of type `size_t'
  in a base class, and does not override that definition in a derived class.

* The C++ ABI has been changed so that:

    struct A {
      void operator delete[] (void *, size_t);
      void operator delete[] (void *);
    };

  does not cause unnecessary storage to be allocated when an array of `A'
  objects is allocated. This change will only affect code that declares
  both of these forms of `operator delete[]', and declared the two-argument
  form before the one-argument form.
* The C++ ABI has been changed so that when a parameter is passed by value,
  any cleanup for that parameter is performed in the caller, as specified by
  the ia64 C++ ABI, rather than in the called function as before. As a
  result, classes with a non-trivial destructor but a trivial copy
  constructor will be passed and returned by invisible reference, rather
  than by bitwise copy as before.

* G++ now supports the "named return value optimization": for code like

    A f () {
      A a;
      ...
      return a;
    }

  G++ will allocate 'a' in the return value slot, so that the return becomes
  a no-op. For this to work, all return statements in the function must
  return the same variable.

*** Changes in GCC 3.0:

* Support for guiding declarations has been removed.

* G++ now supports importing member functions from base classes with a
  using-declaration.

* G++ now enforces access control for nested types.

* In some obscure cases, functions with the same type could have the same
  mangled name. This bug caused compiler crashes, link-time clashes, and
  debugger crashes. Fixing this bug required breaking ABI compatibility for
  the functions involved. The functions in question are those whose types
  involve non-type template arguments whose mangled representations require
  more than one digit.

* Support for assignment to `this' has been removed. This idiom was used in
  the very early days of C++, before users were allowed to overload
  `operator new'; it is no longer allowed by the C++ standard.

* Support for signatures, a G++ extension, has been removed.

* Certain invalid conversions that were previously accepted will now be
  rejected. For example, assigning function pointers of one type to function
  pointers of another type now requires a cast, whereas previously g++ would
  sometimes accept the code even without the cast.

* G++ previously.

* G++ no longer allows you to overload the conditional operator (i.e., the
  `?:' operator.)
* The "named return value" extension:

    int f () return r { r = 3; }

  has been deprecated, and will be removed in a future version of G++.

*** Changes9x.

*** Changes in EGCS 1.0:

* A public review copy of the December 1996 Draft of the ISO/ANSI C++
  standard is now available. See for more information.

* g++ now uses a new implementation of templates. The basic idea is that now
  templates are minimally parsed when seen and then expanded later. This
  allows conformant early name binding and instantiation controls, since
  instantiations no longer have to go through the parser.

  What you get:

  + Inlining of template functions works without any extra effort or
    modifications.
  + Instantiations of class templates and methods defined in the class body
    are deferred until they are actually needed (unless -fexternal-templates
    is specified).
  + Nested types in class templates work.
  + Static data member templates work.
  + Member function templates are now supported.
  + Partial specialization of class templates is now supported.
  + Explicit specification of template parameters to function templates is
    now supported.

  Things you may need to fix in your code:

  + Syntax errors in templates that are never instantiated will now be
    diagnosed.
  + Types and class templates used in templates must be declared first, or
    the compiler will assume they are not types, and fail.
  + Similarly, nested types of template type parameters must be tagged with
    the 'typename' keyword, except in base lists. In many cases, but not
    all, the compiler will tell you where you need to add 'typename'. For
    more information, see
  + Guiding declarations are no longer supported. Function declarations,
    including friend declarations, do not refer to template instantiations.
    You can restore the old behavior with -fguiding-decls until you fix
    your code.

  Other features:

  + Default function arguments in templates will not be evaluated (or
    checked for semantic validity) unless they are needed.
    Default arguments in class bodies will not be parsed until the class
    definition is complete.
  + The -ftemplate-depth-NN flag can be used to increase the maximum
    recursive template instantiation depth, which defaults to 17. If you
    need to use this flag, the compiler will tell you.
  + Explicit instantiation of template constructors and destructors is now
    supported. For instance:

      template A<int>::A(const A&);

  Still not supported:

  + Member class templates.
  + Template friends.

* Exception handling support has been significantly improved and is on by
  default. The compiler supports two mechanisms for walking back up the
  call stack; one relies on static information about how registers are
  saved, and causes no runtime overhead for code that does not throw
  exceptions. The other mechanism uses setjmp and longjmp equivalents, and
  can result in quite a bit of runtime overhead. You can determine which
  mechanism is the default for your target by compiling a testcase that
  uses exceptions and doing an 'nm' on the object file; if it uses __throw,
  it's using the first mechanism. If it uses __sjthrow, it's using the
  second. You can turn EH support off with -fno-exceptions.

* RTTI support has been rewritten to work properly and is now on by
  default. This means code that uses virtual functions will have a modest
  space overhead. You can use the -fno-rtti flag to disable RTTI support.

* On ELF systems, duplicate copies of symbols with 'initialized common'
  linkage (such as template instantiations, vtables, and extern inlines)
  will now be discarded by the GNU linker, so you don't need to use -frepo.
  This support requires GNU ld from binutils 2.8 or later.

* The overload resolution code has been rewritten to conform to the latest
  C++ Working Paper. Built-in operators are now considered as candidates in
  operator overload resolution. Function template overloading chooses the
  more specialized template, and handles base classes in type deduction and
  guiding declarations properly.
  In this release the old code can still be selected with
  -fno-ansi-overloading, although this is not supported and will be removed
  in a future release.

* Standard usage syntax for the std namespace is supported; std is treated
  as an alias for global scope. General namespaces are still not supported.

* New flags:

  + New warning -Wno-pmf-conversion (don't warn about converting from a
    bound member function pointer to function pointer).
  + A flag -Weffc++ has been added for violations of some of the style
    guidelines in Scott Meyers' _Effective C++_ books.
  + -Woverloaded-virtual now warns if a virtual function in a base class is
    hidden in a derived class, rather than warning about virtual functions
    being overloaded (even if all of the inherited signatures are
    overridden) as it did before.
  + -Wall no longer implies -W. The new warning flag, -Wsign-compare,
    included in -Wall, warns about dangerous comparisons of signed and
    unsigned values. Only the flag is new; it was previously part of -W.
  + The new flag, -fno-weak, disables the use of weak symbols.

* Synthesized methods are now emitted in any translation units that need an
  out-of-line copy. They are no longer affected by #pragma interface or
  #pragma implementation.

* __FUNCTION__ and __PRETTY_FUNCTION__ are now treated as variables by the
  parser; previously they were treated as string constants. So code like
  `printf (__FUNCTION__ ": foo")' must be rewritten to
  `printf ("%s: foo", __FUNCTION__)'. This is necessary for templates.

* Local static variables in extern inline functions will be shared between
  translation units.

* -fvtable-thunks is supported for all targets, and is the default for
  Linux with glibc 2.x (also called libc 6.x).

* bool is now always the same size as another built-in type. Previously, a
  64-bit RISC target using a 32-bit ABI would have 32-bit pointers and a
  64-bit bool. This should only affect Irix 6, which was not supported in
  2.7.2.

* new (nothrow) is now supported.
* Synthesized destructors are no longer made virtual just because the class
  already has virtual functions, only if they override a virtual destructor
  in a base class. The compiler will warn if this affects your code.

* The g++ driver now only links against libstdc++, not libg++; it is
  functionally identical to the c++ driver.

* (void *)0 is no longer considered a null pointer constant; NULL in
  <stddef.h> is now defined as __null, a magic constant of type (void *)
  normally, or (size_t) with -ansi.

* The name of a class is now implicitly declared in its own scope; A::A
  refers to A.

* Local classes are now supported.

* __attribute__ can now be attached to types as well as declarations.

* The compiler no longer emits a warning if an ellipsis is used as a
  function's argument list.

* Definition of nested types outside of their containing class is now
  supported. For instance:

    struct A { struct B; B* bp; };
    struct A::B { int member; };

* On the HPPA, some classes that do not define a copy constructor will be
  passed and returned in memory again so that functions returning those
  types can be inlined.

*** The g++ team thanks everyone that contributed to this release, but
especially:

* Joe Buck <jbuck@synopsys.com>, the maintainer of the g++ FAQ.
* Brendan Kehoe <brendan@cygnus.com>, who coordinates testing of g++.
* Jason Merrill <jason@cygnus.com>, the g++ maintainer.
* Mark Mitchell <mmitchell@usa.net>, who implemented member function
  templates and explicit qualification of function templates.
* Mike Stump <mrs@wrs.com>, the previous g++ maintainer, who did most of
  the exception handling work.
http://opensource.apple.com//source/gcc/gcc-1640/gcc/cp/NEWS
Creating Your First WebVR App using React and A-Frame

Prayash Thapa, Former Developer

Building VR apps has never been easier. Combine that with the power and accessibility of the web and you get WebVR. In the end, we'll go through the deployment process through surge.sh so that you can share your app with the world and test it out live on your smartphone (or Google Cardboard if you have one available). For reference, the final code is in this repo. Over the course of this tutorial, we will be building a scene like this. Check out the live demo as well. Exciting, right? Without further ado, let's get started!

What is A-Frame?

Setup

The first thing we're going to be doing is setting up A-Frame and React. I've already gone ahead and done that for you, so you can simply clone this repo, cd into it, and run yarn install to get all the required dependencies. For this app, we're actually going to be using Preact, a fast and lightweight alternative to React, in order to reduce our bundle size. Don't worry, it's still the same API, so if you've worked with React before then you shouldn't notice any differences. Go ahead and run yarn start to fire up the development server. Hit up and you should be presented with a basic scene including a spinning cube and some text. I highly suggest that you spend some time going through the README in that repo. It has some essential information about A-Frame and React. It also goes into more detail on what and how to install everything. Now on to the fun stuff.

Building Blocks

Fire up the editor on the root of the project directory and inspect the file app/main.js (or view it on GitHub); that's where we'll be building out our scene. Let's take a second to break this down. The Scene component is the root node of an A-Frame app. It's what creates the stage for you to place 3D objects in, initializes the camera, the WebGL renderer and handles other boilerplate.
It should be the outermost element wrapping everything else inside it. You can think of Entity like an HTML div. Entities are the basic building blocks of an A-Frame Scene. Every object inside the A-Frame scene is an Entity.

A-Frame is built on the Entity-component-system (ECS) architecture, a very common pattern utilized in 3D and game development, most notably popularized by Unity, a powerful game engine. What ECS means in the context of an A-Frame app is that we create a bunch of Entities that quite literally do nothing, and attach components to them to describe their behavior and appearance. Because we're using React, this means that we'll be passing props into our Entity to tell it what to render. For example, passing in a-box as the value of the prop primitive will render a box for us. Same goes for a-sphere, or a-cylinder. Then we can pass in other values for attributes like position, rotation, material, height, etc. Basically, anything listed in the A-Frame documentation is fair game.

I hope you see how powerful this really is. You're grabbing just the bits of functionality you need and attaching them to Entities. It gives us maximum flexibility and reusability of code, and is very easy to reason about. This is called composition over inheritance.

But, Why React?

Sooooo, all we need is markup and a few scripts. What's the point of using React, anyway? Well, if you wanted to attach state to these objects, then manually doing it would be a lot of hard work. A-Frame handles almost all of its rendering through the use of HTML attributes (or components as mentioned above), and updating different attributes of many objects in your scene manually can be a massive headache. Since React is excellent at binding state to markup, diffing it for you, and re-rendering, we'll be taking advantage of that. Keep in mind that we won't be handling any WebGL render calls or manipulating the animation loop with React.
A-Frame has a built-in animation engine that handles that for us. We just need to pass in the appropriate props and let it do the hard work for us. See how this is pretty much like creating your ordinary React app, except the result is WebGL instead of raw markup? Well, technically, it is still markup. But A-Frame converts that to WebGL for us. Enough with the talking, let's write some code.

Setting Up the Scene

The first thing we should do is to establish an environment. Let's start with a blank slate. Delete everything inside the Scene element. For the sake of making things look interesting right away, we'll utilize a 3rd party component called aframe-environment to generate a nice environment for us. Third party components pack a lot of WebGL code inside them, but expose a very simple interface in the markup. It's already been imported in the app/initialize.js file, so all we need to do is attach it to the Scene element. I've already configured some nice defaults for us to get started, but feel free to modify to your taste. As an aside, you can press CTRL + ALT + I to load up the A-Frame Scene Inspector and change parameters in real-time. I find this super handy in the initial stage when designing the app. Our file should now look something like:

```jsx
import { h, Component } from 'preact'
import { Entity, Scene } from 'aframe-react'

// Color palette to use for later
const COLORS = ['#D92B6A', '#9564F2', '#FFCF59']

class App extends Component {
  constructor() {
    super()

    // We'll use this state later on in the tutorial
    this.state = {
      colorIndex: 0,
      spherePosition: { x: 0.0, y: 4, z: -10.0 }
    }
  }

  render() {
    return (
      <Scene
        environment={{
          preset: 'starry',
          seed: 2,
          lightPosition: { x: 0.0, y: 0.03, z: -0.5 },
          fog: 0.8,
          ground: 'canyon',
          groundYScale: 6.31,
          groundTexture: 'walkernoise',
          groundColor: '#8a7f8a',
          grid: 'none'
        }}
      >
      </Scene>
    )
  }
}
```

Was that too easy? That's the power of A-Frame components. Don't worry.
We'll dive into writing some of our own stuff from scratch later on. We might as well take care of the camera and the cursor here. Let's define another Entity inside the Scene tags. This time, we'll pass in different primitives (a-camera and a-cursor).

```jsx
<Entity primitive="a-camera" look-controls>
  <Entity
    primitive="a-cursor"
    cursor={{ fuse: false }}
    material={{ color: 'white', shader: 'flat', opacity: 0.75 }}
    geometry={{ radiusInner: 0.005, radiusOuter: 0.007 }}
  />
</Entity>
```

See how readable and user-friendly this is? It's practically English. You can look up every single prop here in the A-Frame docs. Instead of string attributes, I'm passing in objects.

Populating the Environment

Now that we've got this sweet scene set up, we can populate it with objects. They can be basic 3D geometry objects like cubes, spheres, cylinders, octahedrons, or even custom 3D models. For the sake of simplicity, we'll use the defaults provided by A-Frame, and then write our own component and attach it to the default object to customize it. Let's build a low poly count sphere because they look cool. We'll define another entity and pass in our attributes to make it look the way we want. We'll be using the a-octahedron primitive for this. This snippet of code will live in-between the Scene tags as well.

```jsx
<Entity
  primitive="a-octahedron"
  detail={2}
  radius={2}
  position={this.state.spherePosition}
/>
```

You may just be seeing a dark sphere now. We need some lighting. Let there be light:

```jsx
<Entity
  primitive="a-light"
  type="directional"
  color="#FFF"
  intensity={1}
  position={{ x: 2.5, y: 0.0, z: 0.0 }}
/>
```

This adds a directional light, which is a type of light emitted from a certain point in space. You can also try using ambient or point lights, but in this situation, I prefer directional to emulate it coming from the sun's direction.

Building Your First A-Frame Component

Baby steps. We now have a 3D object and an environment that we can walk/look around in.
Now let's take it up a notch and build our own custom A-Frame component from scratch. This component will alter the appearance of our object, and also attach interactive behavior to it. Our component will take the provided shape, and create a slightly bigger wireframe of the same shape on top of it. That'll give it a really neat geometric, meshy (is that even a word?) look. To do that, we'll define our component in the existing js file app/components/aframe-custom.js.

First, we'll register the component using the global AFRAME reference, define our schema for the component, and add our three.js code inside the init function. You can think of the schema as arguments, or properties, that can be passed to the component. We'll be passing in a few options like color, opacity, and other visual properties. The init function will run as soon as the component gets attached to the Entity. The template for our A-Frame component looks like:

```js
AFRAME.registerComponent('lowpoly', {
  schema: {
    // Here we define our properties, their types and default values
    color: { type: 'string', default: '#FFF' },
    nodes: { type: 'boolean', default: false },
    opacity: { type: 'number', default: 1.0 },
    wireframe: { type: 'boolean', default: false }
  },

  init: function() {
    // This block gets executed when the component gets initialized.
    // Then we can use our properties like so:
    console.log('The color of our component is ', this.data.color)
  }
})
```

Let's fill the init function in. First things first, we change the color of the object right away. Then we attach a new shape which becomes the wireframe. In order to create any 3D object programmatically in WebGL, we first need to define a geometry, a mathematical formula that defines the vertices and the faces of our object. Then, we need to define a material, a pixel-by-pixel map which defines the appearance of the object (color, light reflection, texture). We can then compose a mesh by combining the two.
We then need to position it correctly, and attach it to the scene. Don't worry if this code looks a little verbose; I've added some comments to guide you through it.

```js
init: function() {
  // Get the ref of the object to which the component is attached
  const obj = this.el.getObject3D('mesh')

  // Grab the reference to the main WebGL scene
  const scene = document.querySelector('a-scene').object3D

  // Modify the color of the material
  obj.material = new THREE.MeshPhongMaterial({
    color: this.data.color,
    shading: THREE.FlatShading
  })

  // Define the geometry for the outer wireframe
  const frameGeom = new THREE.OctahedronGeometry(2.5, 2)

  // Define the material for it
  const frameMat = new THREE.MeshPhongMaterial({
    color: '#FFFFFF',
    opacity: this.data.opacity,
    transparent: true,
    wireframe: true
  })

  // The final mesh is a composition of the geometry and the material
  const icosFrame = new THREE.Mesh(frameGeom, frameMat)

  // Set the position of the mesh to the position of the sphere
  const { x, y, z } = obj.parent.position
  icosFrame.position.set(x, y, z)

  // If the wireframe prop is set to true, then we attach the new object
  if (this.data.wireframe) {
    scene.add(icosFrame)
  }

  // If the nodes attribute is set to true
  if (this.data.nodes) {
    let spheres = new THREE.Group()
    let vertices = icosFrame.geometry.vertices

    // Traverse the vertices of the wireframe and attach small spheres
    for (var i in vertices) {
      // Create a basic sphere
      let geometry = new THREE.SphereGeometry(0.045, 16, 16)
      let material = new THREE.MeshBasicMaterial({
        color: '#FFFFFF',
        opacity: this.data.opacity,
        shading: THREE.FlatShading,
        transparent: true
      })

      let sphere = new THREE.Mesh(geometry, material)

      // Reposition them correctly
      sphere.position.set(
        vertices[i].x,
        vertices[i].y + 4,
        vertices[i].z + -10.0
      )

      spheres.add(sphere)
    }
    scene.add(spheres)
  }
}
```

Let's go back to the markup to reflect the changes we've made to the component.
We'll add a lowpoly prop to our Entity and give it an object of the parameters we defined in our schema. It should now look like:

```jsx
<Entity
  lowpoly={{
    color: '#D92B6A',
    nodes: true,
    opacity: 0.15,
    wireframe: true
  }}
/>
```

Adding Interactivity

We have our scene, and we've placed our objects. They look the way we want. Now what? This is still very static. Let's add some user input by changing the color of the sphere every time it gets clicked on.

A-Frame comes with a fully functional raycaster out of the box. Raycasting gives us the ability to detect when an object is 'gazed at' or 'clicked on' with our cursor, and execute code based on those events. Although the math behind it is fascinating, we don't have to worry about how it's implemented. Just know what it is and how to use it. To add a raycaster, we provide the raycaster prop to the camera with the class of objects which we want to be clickable. Our camera node should now look like:

```jsx
<Entity primitive="a-camera" look-controls>
  <Entity
    primitive="a-cursor"
    cursor={{ fuse: false }}
    material={{ color: 'white', shader: 'flat', opacity: 0.75 }}
    geometry={{ radiusInner: 0.005, radiusOuter: 0.007 }}
    event-set__1={{ _event: 'mouseenter', scale: { x: 1.4, y: 1.4, z: 1.4 } }}
    event-set__2={{ _event: 'mouseleave', scale: { x: 1, y: 1, z: 1 } }}
  />
</Entity>
```

We've also added some feedback by scaling the cursor when it enters and leaves an object targeted by the raycaster. We're using the aframe-event-set-component to make this happen. It lets us define events and their effects accordingly. Now go back and add a class="clickable" prop to the 3D sphere Entity we created a bit ago. While you're at it, attach an event handler so we can respond to clicks accordingly.

```jsx
<Entity
  class="clickable"
  // ... all the other props we've already added before
  events={{ click: this._handleClick.bind(this) }}
/>
```

Now let's define this _handleClick function. Outside of the render call, define it and use setState to change the color index.
We're just cycling through the numbers 0 to 2 on every click.

```js
_handleClick() {
  this.setState({
    colorIndex: (this.state.colorIndex + 1) % COLORS.length
  })
}
```

Great, now we're changing the state of the app. Let's hook that up to the A-Frame object. Use the colorIndex variable to cycle through a globally defined array of colors. I've already added that for you, so you just need to change the color prop of the sphere Entity we created. Like so:

```js
<Entity
  class="clickable"
  lowpoly={{
    color: COLORS[this.state.colorIndex],
    // The rest stays the same
  }}
/>
```

One last thing: we need to modify the component to swap the color property of the material, since we pass it a new one when clicked. Underneath the init function, define an update function, which gets invoked whenever a prop of the component gets modified. Inside the update function, we simply swap out the color of the material like so:

```js
AFRAME.registerComponent('lowpoly', {
  schema: {
    // We've already filled this out
  },

  init: function() {
    // We've already filled this out
  },

  update: function() {
    // Get the ref of the object to which the component is attached
    const obj = this.el.getObject3D('mesh')

    // Modify the color of the material during runtime
    obj.material.color = new THREE.Color(this.data.color)
  }
})
```

You should now be able to click on the sphere and cycle through colors.

Animating Objects

Let's add a little bit of movement to the scene. We can use the aframe-animation-component to make that happen. It's already been imported, so let's add that functionality to our low poly sphere. To the same Entity, add another prop named animation__rotate. That's just a name we give it; you can call it whatever you want. The inner properties we pass are what's important. In this case, it rotates the sphere by 360 degrees on the Y axis. Feel free to play with the duration and property parameters.

```js
<Entity
  class="clickable"
  lowpoly
  // A whole buncha props that we wrote already...
```
```js
  animation__rotate={{
    property: 'rotation',
    dur: 60000,
    easing: 'linear',
    loop: true,
    to: { x: 0, y: 360, z: 0 }
  }}
/>
```

To make this a little more interesting, let's add another animation prop to oscillate the sphere up and down ever so slightly.

```js
  animation__oscillate={{
    property: 'position',
    dur: 2000,
    dir: 'alternate',
    easing: 'linear',
    loop: true,
    from: this.state.spherePosition,
    to: {
      x: this.state.spherePosition.x,
      y: this.state.spherePosition.y + 0.25,
      z: this.state.spherePosition.z
    }
  }}
```

Polishing Up

We're almost there! Post-processing effects in WebGL are extremely fast and can add a lot of character to your scene. There are many shaders available for use depending on the aesthetic you're going for. If you want to add post-processing effects to your scene, you can utilize the additional shaders provided by three.js to do so. Some of my favorites are bloom, blur, and noise shaders. Let's run through that very briefly here.

Post-processing effects operate on your scene as a whole. Think of it as a bitmap that's rendered every frame. This is called the framebuffer. The effects take this image, process it, and output it back to the renderer. The aframe-effects-component has already been imported for your convenience, so let's throw the props at our Scene tag. We'll be using a mix of bloom, film, and FXAA to give our final scene a touch of personality:

```js
<Scene
  effects="bloom, film, fxaa"
  bloom="radius: 0.99"
  film="sIntensity: 0.15; nIntensity: 0.15"
  fxaa
  // Everything else that was already there
/>
```

Boom, we're done. There's an obscene amount of shader math going on behind the scene (pun intended), but you don't need to know any of it. That's the beauty of abstraction. If you're curious you can always dig into the source files and look at the shader wizardry that's happening back there. It's a world of its own. We're pretty much done here. Onto the final step...

Deployment

It's time to deploy.
The final step is letting it live on someone else's server and not your dev server. We'll use the super awesome tool called surge to make this painfully easy.

First, we need a production build of our app. Run yarn build. It will output the final build to the public/ directory. Install surge by running npm install -g surge. Now run surge public/ to push the contents of that directory live. It should prompt you to log in or create an account, and you'll have the choice to change your domain name. The rest should be very straightforward, and you will get a URL of your deployed site at the end. That's it. I've hosted mine

Fin

I hope you enjoyed this tutorial and you see the power of A-Frame and its capabilities. By combining third-party components and cooking up our own, we can create some neat 3D scenes with relative ease. Extending all this with React, we're able to manage state efficiently and go crazy with dynamic props. We've only scratched the surface, and now it's up to you to explore the rest. As 2D content fails to meet the rising demand for immersive content on the web, tools like A-Frame and three.js have come into the limelight. The future of WebVR is looking bright. Go forth and unleash your creativity, for the browser is an empty 3D canvas and code is your brush. If you end up making something cool, feel free to tweet at @_prayash and A-Frame @aframevr so everyone else can see it too.

Additional Resources

Check out these additional resources to advance your knowledge of A-Frame and WebVR.

- Crash Course: VR Design for N00bs for tips on designing for VR.
- A-Frame School for more A-Frame knowledge.
- A Week of A-Frame for inspiration.
- A-Frame Slack for the community.
- A-Frame Stack Overflow for common problems that you will run into.
- Awesome A-Frame for a general hub for anything A-Frame.
- Three.js 101 for an awesome intro to Three.js.
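As a small aside on the click handler from earlier: the modulo trick that cycles the color index can be sketched in isolation. This is an illustrative Python sketch, not the app's JavaScript, and the COLORS palette below is a hypothetical stand-in:

```python
# Illustrative sketch of the _handleClick modulo trick.
# COLORS is a hypothetical palette; any list works.
COLORS = ['#D92B6A', '#9564F2', '#FFCF59']

def next_color_index(current, n_colors):
    # Same arithmetic as the click handler: advance by one, wrap with modulo.
    return (current + 1) % n_colors

idx = 0
seen = []
for _ in range(4):
    idx = next_color_index(idx, len(COLORS))
    seen.append(idx)
print(seen)  # [1, 2, 0, 1]
```

Because of the modulo, the index can never run past the end of the list, no matter how many clicks happen.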
- Author: fnl
- Posted: December 12, 2008
- Language: Python
- Version: 1.0
- Tags: "primary key", bigserial, bigint, auto_increment
- Score: 1 (after 1 ratings)

Allows you to create bigint (mysql), bigserial (psql), or NUMBER(19) (oracle) fields which have auto-increment set by using the AutoField of django, therefore ensuring that the ID gets updated in the instance when calling its 'save()' method. If you were to only subclass IntegerField to BigIntegerField and use that as your primary key, the model instance you create would not get the id attribute set when calling 'save()', but instead you would have to query and load the instance from the DB again to get the ID.

Hi, I am using your bigint_patch, it works very well. I tried to expand it to accommodate OneToOneFields and ManyToManyFields as well, but I am lacking some expertise for setting up the ManyToManyField. Have you tried this? I would appreciate any help or suggestions. Thanks, Seth

# For sqlite3 use (for both BigIntegerField, BigAutoField):

# Does anybody have a new version of this for 1.2 or do I need to write one myself?

# I would also be interested in support for 1.2.x. I have very limited knowledge of python and django. If someone has a working version for this patch, I would be very grateful.

# I have made a hack to get this to work with django 1.3. What you do is replace:

    if settings.DATABASE_ENGINE == 'mysql':

with:

    if settings.DATABASES['default']['ENGINE'] == 'django.db.backends.mysql':

as in line 22, but with the relevant string for the other engines. And it works. HOWEVER... It only finds what the default database engine is set to, not the current model's engine. So that is a bit of a flaw. If anyone knows how to access the model instance from this file, we could try something like:

    from django.db import router
    ...
    if router.db_for_read(model_instance.__class__, instance=model_instance) == '?'

# Thanks for the script. It was a very helpful starting point.
I ended up needing a bit more to get it working end-to-end. Here is my version:

Logs:
- v1.0: Created by Florian
- v1.1: Updated by Thomas
  - Fixed missing param connection
  - Used endswith for the engine type check (for better compatibility with dj_database_url and heroku)
  - Added support for sqlite3 (which uses BIGINT by default)
  - Returned super.db_type() for other databases
  - Added south's add_introspection_rules if south is defined
  - Added BigOneToOneField and a short description

Assumed file location: common/fields.py

# Hey tomleaf, can you share the updated version of the patch which you have created for ManyToMany fields and OneToOne fields? Thanks.
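Stepping back from the thread, the engine-dispatch the commenters describe can be sketched as a standalone function. This is an illustrative sketch only, not Django API: db_type_for_engine is a hypothetical helper name, and the endswith() checks mirror the dj_database_url-friendly fix suggested above.

```python
# Illustrative sketch of the engine-to-column-type dispatch discussed above.
# db_type_for_engine is a hypothetical helper, not part of Django; the
# endswith() checks follow the fix proposed for settings.DATABASES engines.

def db_type_for_engine(engine):
    """Return the SQL column type for a big auto-increment primary key."""
    if engine.endswith('mysql'):
        return 'bigint AUTO_INCREMENT'
    if engine.endswith('postgresql_psycopg2') or engine.endswith('postgresql'):
        return 'bigserial'
    if engine.endswith('oracle'):
        return 'NUMBER(19)'
    if engine.endswith('sqlite3'):
        # sqlite's INTEGER PRIMARY KEY is already a 64-bit rowid
        return 'integer'
    return None  # fall back to the parent field's db_type()

print(db_type_for_engine('django.db.backends.mysql'))  # bigint AUTO_INCREMENT
```

A real field subclass would call something like this from its db_type() method and fall back to the superclass result when None is returned.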
Write a C program to read an English sentence and replace lowercase characters with uppercase and vice versa. Output the given sentence as well as the case-converted sentence on two different lines.

```c
#include <stdio.h>
#include <ctype.h>
#include <conio.h>

void main()
{
    char sentence[100];
    int count, ch, i;

    clrscr();
    printf("Enter a sentence\n");
    for (i = 0; (sentence[i] = getchar()) != '\n'; i++)
    {
        ;
    }
    sentence[i] = '\0';
    count = i;   /* shows the number of chars accepted in a sentence */

    printf("The given sentence is : %s", sentence);
    printf("\nCase changed sentence is: ");
    for (i = 0; i < count; i++)
    {
        ch = islower(sentence[i]) ? toupper(sentence[i]) : tolower(sentence[i]);
        putchar(ch);
    }
}   /* End of main() */
```
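As a quick cross-check of the expected behavior, the same case swap can be expressed in Python. This is purely illustrative (the exercise itself is the C program above); it mirrors the islower/toupper logic character by character:

```python
# Illustrative Python cross-check of the C program's case swap.
def swap_case(sentence):
    # Mirrors the C loop: lowercase -> uppercase, uppercase -> lowercase,
    # anything else (digits, punctuation, spaces) passes through unchanged.
    return ''.join(
        c.upper() if c.islower() else c.lower() if c.isupper() else c
        for c in sentence
    )

print(swap_case("Hello, World"))  # hELLO, wORLD
```

Python even ships this as a built-in string method, str.swapcase(), which produces the same result for plain ASCII input.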
Provides the run control state machine singleton. Constructing a RunstateMachineSingleton encapsulates a single instance of a RunstateMachine so that all clients are assured of interacting with the same state machine object. The state machine object has a well defined set of states and allowed transitions between those states. See STATES below for the set of allowed states and transitions. By itself the state machine is not worth very much. The value comes from the ability to register callout bundles with the state machine. See CALLOUT BUNDLES below for more about what a callout bundle is and how to create one.

The RunstateMachineSingleton object wraps a single application-wide RunstateMachine and exposes all of its methods to its client. The methods described here are therefore actually RunstateMachine methods. The methods below designated as TYPEMETHODS do not require an object to invoke.

::RunstateMachine::listStates
This typemethod returns a Tcl list whose elements are the legal states the machine can be in. These are described in the STATES section below.

::RunstateMachine::listTransitions state
This typemethod accepts the name of a state returned from ::RunstateMachine::listStates and returns a list consisting of the names of the valid states that can be reached from that state.

getState
Return the current state of the state machine. This will be a state in the list returned by ::RunstateMachine::listStates.

listCalloutBundles
Returns a list consisting of the names of the currently registered callout bundles. See CALLOUT BUNDLES below for more information about callout bundles. The list is provided in registration order, which also corresponds to the callout order.

addCalloutBundle bundle-name ?before-bundle?
Registers a new callout bundle with the state machine. See CALLOUT BUNDLES below for information about what a callout bundle is.
The bundle-name is considered to be the name of a namespace relative to the global namespace (e.g. MyBundle is considered to be the namespace ::MyBundle). The namespace is checked for the existence and proper parameterization of the required callout bundle procs as described in CALLOUT BUNDLES. An error is thrown if the namespace is determined to not be a valid callout bundle. If before-bundle is provided, it must be the name of an existing bundle (one that would be returned from listCalloutBundles). The bundle will be registered just before before-bundle in the ordered list of bundles. This allows action dependencies between bundles to be properly scheduled.

removeCalloutBundle bundle-name
Removes the named callout bundle from the list of registered bundles. It is an error if bundle-name does not correspond to a registered callout bundle.

transition new-state
Attempts a transition from the current state to new-state. The leave and enter procs for the registered callout bundles are invoked. leave is invoked prior to making the transition while enter is invoked after the transition has occurred. If new-state is not an allowed state transition, an error is thrown.

STATES

The RunstateMachine has a well defined set of states and allowed transitions between those states. The finite state automaton that is defined by these states and their allowed transitions is shown in simplified form in ReadoutShell's state diagram. These states and their allowed transitions are described textually below.

NotReady
The system is not ready for use. In this state, data sources have not yet been started. You must also be in this state to modify the set of data sources known to the application. Allowed target states for the transition method are: NotReady and Starting.

Starting
This state is entered to start the data sources that have been defined for use with the application. NotReady is an allowed target state and is normally entered if one or more data sources failed to start up.
Halted is the other valid target state and is entered if all data sources started correctly.

Halted
This state indicates the system is ready for use but there is no current data-taking run. Valid transition targets are: NotReady, if a data source fails, or Active, if a run is successfully started.

Active
This state indicates data taking is ongoing. Valid transitions are: Paused if the run is paused, Halted if the run is ended, and NotReady if a data source fails. Note that while not all data source providers support a Paused state, this is not known or supported directly by the run state machine. Instead, the ReadoutGUI interrogates the data sources defined and removes the GUI elements that can trigger a transition to the Paused state if not all data sources support paused runs.

Paused
Indicates a data-taking run is temporarily paused. This state can transition to: Halted if the run is stopped, Active if the run is resumed, or NotReady if a data source fails.

CALLOUT BUNDLES

The true value of the run state machine is the ability of components of the ReadoutGUI (including your extensions) to register callout bundles. You can think of a callout bundle as a generalization of the ReadoutShell's ReadoutCallouts.tcl mechanism. A callout bundle is a Tcl namespace. The namespace must contain three exported proc definitions:

attach current-state
This proc is called when the bundle is registered with the state machine via addCalloutBundle.

leave from-state to-state
This proc is called just before the state machine makes a transition from from-state to to-state.

enter from-state to-state
This proc is called just after the state machine has made a transition from from-state to to-state.

The enter and leave procs for the callout bundles are invoked in the order in which the bundles were registered. The sample code below shows the creation and registration of a callout bundle that does nothing. You can rename the namespace and fill in the procs shown below to build and register your own callout bundles.

Example 1.
A do nothing RunstateMachine callout bundle

    package require RunstateMachine

    namespace eval ::MyBundle {
        variable sm
        namespace export attach enter leave
    }
    proc ::MyBundle::attach currentState {
    }
    proc ::MyBundle::leave {from to} {
    }
    proc ::MyBundle::enter {from to} {
    }
    set ::MyBundle::sm [RunstateMachineSingleton %AUTO]
    $::MyBundle::sm addCalloutBundle MyBundle

Notes on the example:

- The namespace eval declares the variable sm to live in that namespace (to hold the state machine object command) and exports the required proc names from the namespace.
- In attach, currentState will be the state at the time the attach was done.
- In leave, from will be the state at the time the transition is being started and to will be the target state.
- In enter, from is the old state and to is the new state.
- The singleton command is saved in ::MyBundle::sm. This allows it to be used from within the bundle procs as well as for bundle registration purposes.
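For readers more comfortable outside Tcl, the state/transition table documented under STATES can be modeled in a few lines. This Python sketch is purely illustrative (it is not part of NSCLDAQ); it just encodes the allowed edges exactly as described above:

```python
# Illustrative model of the documented state/transition table.
# Not part of NSCLDAQ; the real RunstateMachine is implemented in Tcl.
TRANSITIONS = {
    'NotReady': {'NotReady', 'Starting'},
    'Starting': {'NotReady', 'Halted'},
    'Halted':   {'NotReady', 'Active'},
    'Active':   {'Paused', 'Halted', 'NotReady'},
    'Paused':   {'Halted', 'Active', 'NotReady'},
}

class RunStateMachine:
    def __init__(self):
        self.state = 'NotReady'  # the machine always starts here

    def transition(self, new_state):
        # Reject any edge that is not in the documented table.
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f'illegal transition {self.state} -> {new_state}')
        self.state = new_state

sm = RunStateMachine()
for s in ('Starting', 'Halted', 'Active', 'Paused', 'Halted'):
    sm.transition(s)
print(sm.state)  # Halted
```

A callout-bundle hook would slot in naturally as leave/enter callbacks around the assignment in transition(), which is exactly where the Tcl machine invokes them.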
Making a Static Site Generator with Python - part 2

TheNiqabiCoderMum

Originally published at my blog: blog.naveeraashraf.com

In the first part of this tutorial we explored the two main components behind making a static site generator or SSG - the Markdown parser, Markdown2, and the templating engine, Jinja2. We saw how to use these components together to create HTML files from Markdown files and pre-created templates. In this part we will create our own static site generator.

This is the final product

So let's get started.

Structure

In this tutorial I will create a recipe blog. Since this tutorial is for educational purposes, I will only create two main views: the home page of the recipe blog with all the posts listed on it, and the individual post pages.

You can find the complete code in the GitHub repository

Scaffolding

Create a new folder where you want your code to live:

```shell
mkdir recipe-ssg
```

In this folder, create the content folder where we will write our blog posts in markdown files:

```shell
cd recipe-ssg && mkdir content
```

We will also need posts written in markdown. If you want to follow along, you can copy the recipes written in markdown from here. By the way, these are not just placeholder recipes. These are my tried and tested recipes. If you are a baker, do give these recipes a try!

We will also need some images. Feel free to use my images from here.

Make a folder in your root project called output. Inside the output folder make another folder called img and add all the image files to this folder.

You will also need to install Markdown2 and Jinja2, using pip or pipenv, ideally inside a virtual environment.

```shell
pipenv install markdown2
pipenv install jinja2
```

Writing the Python Script

Create a new file in your project root and call it main.py. First we will import all the packages we will need:

```python
import os
from datetime import datetime
from jinja2 import Environment, PackageLoader
from markdown2 import markdown
```

Next we will parse our markdown files.
This is how we did it in the first part:

```python
from markdown2 import markdown

with open('content/turkish-pide.md', 'r') as file:
    parsed_md = markdown(file.read(), extras=['metadata'])
```

But since we have more than one file now, we will change the code to loop over all the files in the content folder like so:

```python
POSTS = {}

for markdown_post in os.listdir('content'):
    file_path = os.path.join('content', markdown_post)

    with open(file_path, 'r') as file:
        POSTS[markdown_post] = markdown(file.read(), extras=['metadata'])
```

Next we will sort these posts in reverse order so the newest posts show first, but the dates are in a string format, so we will first need to convert them to datetime:

```python
POSTS = {
    post: POSTS[post] for post in sorted(POSTS, key=lambda post: datetime.strptime(POSTS[post].metadata['date'], '%Y-%m-%d'), reverse=True)
}
```

So far so good. You can check if your script is working so far by printing some metadata to the console.

Creating Templates

Next we will write our templates. Create a templates directory in the project root. We will write two templates, one for the main home page and the other for individual posts. Let's write the individual posts one first:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-
    <title>{{ post.title }}</title>
</head>
<body>
    <h1>{{ post.title }}</h1>
    <small>{{ post.date }}</small>
    <p>
        {{ post.content }}
    </p>
</body>
</html>
```

Save the above code into post.html inside the templates directory.
Create another file home.html inside the templates directory and paste the following code:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-
    <title>My Recipes</title>
</head>
<body>
    <h1>My Recipes</h1>
    {% for post in posts %}
    <p>
        <h2>{{loop.index}}: <a href="posts/{{ post.slug }}/">{{post.title}}</a> <small>{{post.date}}</small></h2>
        {{post.summary}}
    </p>
    {% endfor %}
</body>
</html>
```

This template is a little less straightforward than the individual post one. But we are simply using Jinja2's for loop to loop through all the posts and filling in the placeholders with data from those posts. The data will be handed over to this template in a list so we can easily loop over it. When we go back to our main.py and write the script to pass data along to these templates, this will make much more sense.

Using Template Inheritance

But before we do that, you may have noticed that there is quite a bit of repetition. We had to write the same scaffolding HTML code for both templates. While this is not a big deal in this case because we have just two templates, imagine having to rewrite the same code for larger projects. For sites with navigation menus and footers a lot of code will need to be written over and over again. And if you had to make a change, let's say in your navigation menu, you would need to make the change on each and every template. This can get incredibly monotonous, not to mention increasing the chance of errors and bugs. Fortunately all templating languages offer the solution to this problem through template inheritance. You can read more about Jinja2's template inheritance from here.

Templates usually take advantage of inheritance, which includes a single base template that defines the basic structure of all subsequent child templates. You use the tags {% extends %} and {% block %} to implement inheritance.
~ Real Python

Create a new file in the templates directory and call it layout.html. In this file we will put all the code that needs to be repeated on every template:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-
    <title>My Recipes</title>
</head>
<body class="container">
    {% block content %}
    {% endblock %}
    <br>
</body>
</html>
```

Now let's make suitable changes to the other two templates as well. In the home.html template paste the following code:

```html
{% extends "layout.html" %}
{% block content %}
<h1>My Recipes</h1>
<div>
    {% for post in posts %}
    <p>
        <h2>{{loop.index}}: <a href="posts/{{ post.slug }}.html">{{post.title}}</a> <small>{{post.date}}</small></h2>
        {{post.summary}}
    </p>
    {% endfor %}
</div>
{% endblock %}
```

And in the post.html change the code to:

```html
{% extends "layout.html" %}
{% block content %}
<h1>{{post.title}}</h1>
<p>
    <small>{{post.date}}</small>
    {{post.content}}
</p>
{% endblock %}
```

Now we have much cleaner code. Now let's work on rendering these templates.

Rendering Home Page

In your main.py file, get the templates with Jinja2 just like we did in the first part. But this time we will get two templates:

```python
env = Environment(loader=PackageLoader('main', 'templates'))
home_template = env.get_template('home.html')
post_template = env.get_template('post.html')
```

Now let's pass the data to our home page template from our POSTS list. Since the home page only needs the metadata, we will pass it metadata only:

```python
posts_metadata = [POSTS[post].metadata for post in POSTS]
home_html = home_template.render(posts=posts_metadata)
```

This will pass a list of metadata through the variable posts to our home page template. This is the same posts variable over which we looped in the template. One more thing that I want to do is to have the tags of each post in some kind of list so I can loop through them, as I may want to make each tag into a clickable link.
Currently all the tags are being passed as a single string. To change that, I will create a new list from the post_metadata['tags'] variable and pass it along with other data to my home template.

```python
posts_metadata = [POSTS[post].metadata for post in POSTS]
tags = [post['tags'] for post in posts_metadata]
home_html = home_template.render(posts=posts_metadata, tags=tags)
```

Now let's write this HTML to a file:

```python
with open('output/home.html', 'w') as file:
    file.write(home_html)
```

Run your main.py and you will get a home.html in your output directory. Open the file in a browser and it will look like this:

Rendering Individual Posts

If you click on the title of any of the posts in your browser, there will be a not-found error, because the individual post pages haven't been rendered yet. To do so, add the following code to your main.py:

```python
for post in POSTS:
    post_metadata = POSTS[post].metadata

    post_data = {
        'content': POSTS[post],
        'title': post_metadata['title'],
        'date': post_metadata['date'],
    }

    post_html = post_template.render(post=post_data)

    post_file_path = 'output/posts/{slug}.html'.format(slug=post_metadata['slug'])

    os.makedirs(os.path.dirname(post_file_path), exist_ok=True)
    with open(post_file_path, 'w') as file:
        file.write(post_html)
```

Now run the main.py again, and when you click on a link it will take you to the corresponding page.

Extra Bits

Our static site generator is done at this point. But I want to show you a few extra things. Let's say we want to add some CSS to make our site look nicer. This is where we will be thankful for template inheritance. Just add the following styling to your layout.html in the templates directory:

```html
<style>
    .container {
        width: 80%;
        margin: auto;
        margin-top: 3em;
    }
</style>
```

Run your main.py again and you will see the styles applied to all pages. Great! Let's add a lightweight CSS framework to our site. I am using Picnic. Add this link to the head of your layout.html:

```html
<link rel="stylesheet" href="">
```

Re-run main.py and voila!
Remember those tags we passed as an individual list? Using Jinja2's built-in filters, we can now iterate over that list from within our template and put each tag inside its own span or button element.

```html
{% set list_of_tags = post.tags.split(",") %}
{% for tag in list_of_tags %}
    <button class="shyButton mybutton">{{ tag }}</button>
{% endfor %}
```

And after adding some CSS classes to my code, here is the final product. You can find the complete code in the GitHub repository.

That's it! I hope you enjoyed this tutorial. If you did, don't forget to share it.

This is #nice. You have a star :)

A few days ago I built a simple website with panini, quite similar to your tool. I like that you only used jinja and markdown as dependencies.

I am glad you liked this :)
- Products - Solutions - API & Docs The <Dial> verb's <Conference> noun allows you to connect to a conference room. Much like how the <Number> noun allows you to connect to another phone number, the <Conference> noun allows you to connect to a named conference room and talk with the other callers who have also connected to that room. The name of the room is up to you and is namespaced to your account. This means that any caller who joins 'room1234' via your account will end up in the same conference room, but callers connecting through different accounts would not. By default, Twilio conference rooms enable a number of useful features used by business conference bridges: You can enable and disable each of these features based on your needs. The <Conference> noun supports the following attributes that modify its behavior: The muted attribute lets you specify whether a participant can speak on the conference. If this attribute is set to true, the participant will only be able to listen to people on the conference. This defaults to false. The beep attribute lets you specify whether a notification beep is played to the conference when a participant joins or leaves the conference. This defaults to true. This attribute tells a conference to start when this participant joins the conference, if it is not already started. This is true by default. If this is false and the participant joins a conference that has not started, they are muted and hear background music until a participant joins where startConferenceOnEnter is true. This is useful for implementing moderated conferences. If a participant has this attribute set to true, then when that participant leaves the conference ends and all other participants drop out. This defaults to false. This is useful for implementing moderated conferences that bridge two calls and allow either call leg to continue executing TwiML if the other hangs up. 
waitUrl: The waitUrl attribute lets you specify a URL for music that plays before the conference has started. The URL may be an MP3, a WAV, or a TwiML document that uses <Play> or <Say> for content. This defaults to a selection of Creative Commons licensed background music, but you can replace it with your own music and messages. If the waitUrl responds with TwiML, note that Twilio can only process <Play>, <Say>, and <Redirect> verbs; <Record>, <Dial>, and <Gather> verbs are not allowed. If you do not wish anything to play while waiting for the conference to start, specify the empty string (waitUrl=""). If no waitUrl is specified, Twilio will use its own HoldMusic Twimlet that reads a public AWS S3 bucket for audio files. The default waitUrl is:

This URL points at the S3 bucket com.twilio.music.classical, containing a selection of nice Creative Commons classical music. Here's a list of S3 buckets we've assembled with other genres of music for you to choose from:

waitMethod: This attribute indicates which HTTP method to use when requesting waitUrl. It defaults to POST. Be sure to use GET if you are directly requesting static audio files such as WAV or MP3 files so that Twilio properly caches the files.

```xml
<Response>
    <Dial>
        <Conference>1234</Conference>
    </Dial>
</Response>
```

By default, the first caller to execute this TwiML would join conference room 1234 and listen to the default waiting music. When the next caller executed this TwiML, they would join the same conference room and the conference would start. The default background music ends, the notification beep is played, and all parties can communicate.

First, you can drop a number of people into the conference, specifying that the conference shouldn't yet start:

```xml
<Response>
    <Dial>
        <Conference startConferenceOnEnter="false">1234</Conference>
    </Dial>
</Response>
```

Each person will hear hold music while they wait.
Then, when the "moderator" or conference organizer calls in, you can specify that the conference should begin:

```xml
<Response>
    <Dial>
        <Conference startConferenceOnEnter="true" endConferenceOnExit="true">1234</Conference>
    </Dial>
</Response>
```

Also note that since the moderator has endConferenceOnExit="true" set, when the moderator hangs up the conference will be ended, and each participant's <Dial> will complete.

```xml
<Response>
    <Dial>
        <Conference muted="true">SimpleRoom</Conference>
    </Dial>
</Response>
```

This code forces participants to join the conference room muted. They can hear what unmuted participants are saying, but no one can hear them. The muted attribute can be enabled or disabled in realtime via the REST API.

```xml
<Response>
    <Dial>
        <Conference beep="false" waitUrl="" startConferenceOnEnter="true" endConferenceOnExit="true">
            NoMusicNoBeepRoom
        </Conference>
    </Dial>
</Response>
```

Sometimes you just want to bridge two calls together without any of the bells and whistles. With this minimal conferencing attribute setup, no background music or beeps are played, participants can speak right away as they join, and the conference ends right away if either participant hangs up. This is useful for cases like bridging two existing calls, much like you would with a Dial.

```xml
<Response>
    <Dial>
        <Conference beep="false">
            Customer Waiting Room
        </Conference>
    </Dial>
</Response>
```

This code puts the first caller into a waiting room, where they'll hear music. It's as if they're on hold, waiting for an agent or operator to help them. Then, when the operator or agent is ready to talk to them... their call would execute:
Because beep="false" is set, the caller won't hear a ding when the agent answers, which is probably appropriate for this use case. When the operator hangs up, endConferenceOnExit will cause the conference to end.

<Response>
  <Dial action="handleLeaveConference.php" method="POST" hangupOnStar="true" timeLimit="30">
    <Conference>LoveTwilio</Conference>
  </Dial>
</Response>

Because <Conference> is an element of <Dial>, you can still use all the <Dial> attributes in combination with <Conference> (with the exception of callerId and timeout, which have no effect). You can set a timeLimit, after which you'll be removed from the conference. You can turn on hangupOnStar, which lets you leave a conference by pressing the * key. You can specify an action, so that after you leave the conference room, Twilio will submit a request to the action URL and your web server can respond with new TwiML to continue your call.
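Not part of the Twilio docs, but if you generate TwiML from code rather than writing it by hand, a small helper like the following keeps the attribute spelling in one place. This is a sketch using only the Python standard library; the function name `conference_twiml` and its keyword handling are my own invention, and Twilio's official helper libraries do this more robustly.

```python
# Hypothetical sketch: build <Response><Dial><Conference>…</Conference></Dial></Response>
# TwiML with nothing but the standard library.
import xml.etree.ElementTree as ET

def conference_twiml(room, **attrs):
    """Return a TwiML string that drops the caller into `room`.

    Keyword arguments become <Conference> attributes; Python booleans
    are rendered as the lowercase "true"/"false" strings TwiML expects.
    """
    response = ET.Element("Response")
    dial = ET.SubElement(response, "Dial")
    conf = ET.SubElement(dial, "Conference")
    conf.text = room
    for name, value in attrs.items():
        if isinstance(value, bool):
            value = "true" if value else "false"
        conf.set(name, str(value))
    return ET.tostring(response, encoding="unicode")

# Example: a moderator joining room 1234, ending the conference on exit
print(conference_twiml("1234",
                       startConferenceOnEnter=True,
                       endConferenceOnExit=True))
```

Using an XML builder instead of string concatenation also guarantees the room name is properly escaped if it ever contains characters like `&`.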
http://www.twilio.com/docs/api/2008-08-01/twiml/conference
Parts list:

- A GPS module
- An antenna
- A Raspberry Pi (preferably the newest Pi 3, since I know it works) ... oCQNrw_wcB
- Breadboard cables ... UTF8&psc=1
- Optionally, a 3.3V USB to UART (LVTTL) adapter ... UTF8&psc=1 (or similar); this makes configuring the GPS easier
- A 12V to USB cigarette adapter and a USB cable of your choosing
- A USB mouse, keyboard, and HDMI monitor for working with the Pi

So first off, the "why?" If you've ever tried to look at your line through a track, you've probably noticed that it's not very useful. Here's a corner from one track in Texas using my Galaxy S5's onboard GPS (1Hz): pretty useless, with some accelerometer data and speed, but wildly spread out and pretty inaccurate. Here's what a different track looks like (zoomed out a bit) with (true) 20Hz GPS and a fast enough interface (Wi-Fi). Obviously, the first thing you see is that it's immediately obvious what line was taken. Did I apex early for my best lap? But another thing that's better is that there's WAY more detail in acceleration, etc.; you can even see how long I was decelerating, for instance. Pretty f'n awesome for something that costs about what a GPS of half the speed will run you (a little less if you already have a few of these lying around). So let's get onto the "how".

The basic idea is:

[GPS]--UART-->[Pi]--Wi-Fi (via Pi)-->HLT

The GPS itself is configured from, and outputs over, 3.3V serial, AKA UART. You have to make sure that any voltage you give it is 3.3V, otherwise you'll fry it.

1) Get your GPS configured to output at 20Hz. You can totally do this with serial commands, which I encourage you to do! However, the fastest way to get the GPS outputting at max speed is using the included software (found on the SFE site) and a USB to serial converter.
Just make sure you save the settings to FLASH so it runs at 20Hz every time it boots up (getting a fix can take a while if you're indoors):

- Set baud to the max of 115200
- Set the update rate to 20Hz

Using the provided software on your PC, it'll be seriously easy to configure and to see that it's working.

2) Configure your Raspberry Pi's serial interface. For the most part, I used Python to do most of this since it is by *far* the simplest way to interface with low-level peripherals, but of course the sky is the limit with that little thing.

- Set up serial. PySerial is your friend here, and Lady Ada is, too: ... ead-of-usb
- The UART interface on the Pi 3 is /dev/ttyS0, NOT AMA0 as in older write-ups!
- Keep in mind that the Pi 3 uses UART to provide a console if you don't have a monitor, etc.; you need to disable it so console messages don't screw with your GPS.
- The Pi 3 also changes its clock speeds and doesn't account for how they affect the UART clock. You can account for this, and read about it here: ... pberry-pi3

The actual connection here is super simple; just make sure to use 3.3V. Using Python, you can really easily check that your serial interface is working with something like this:

Code: Select all

# Import the serial library so we can talk to the GPS
import serial

# Define the interface - you should have already configured the GPS using
# the included software if you took the easy road
ser = serial.Serial(port='/dev/ttyS0', baudrate=115200, timeout=1)

# While the port is open, display what's being sent - we should see NMEA
# sentences scream through the terminal
while ser.isOpen():
    line = ser.readline()
    print(line)

# If you see gibberish, chances are the baud rate is mis-matched. If the
# messages come once every second, the update rate wasn't set correctly.
# Press Ctrl+C to stop the script.

3) Turn your Raspberry Pi into a Wi-Fi access point.

- Most of this is done with hostapd, which you can read about here: ...
h-hostapd/

4) Once you have the network up, use sockets to stream the data to the app.

- The port definition used here is used on HLT!
- There's lots of stuff out there on how to do this, but you can test it by using .listen() and connecting to the port with PuTTY from another computer. Once the socket is connected, start sending it data! HLT is cool in that you can pretty much just use readline() from serial and blast each line over the socket.
- Be careful and make backups of the files you change! You're editing the Pi's network configuration, and while it'll work great hosting an isolated Wi-Fi connection, keep in mind that (if you don't give it a main IP to route everything to) it has no real internet connection via Wi-Fi once these settings take effect. Without an Ethernet cable, it's like plugging in a router by itself.

5) Configure the Pi to run the script on boot.

- Be sure to try this out a few times while you have a monitor! Keep in mind that if you go to your home screen (or something), HLT will close the pipe to the Pi and cause an error. You need to wait until a device connects again, otherwise it'll crash.
- Make sure that you configure HLT, set the track up, etc. before you connect to the Pi's network. Otherwise, the app will try to download POIs when there's no IP connection! If you configure the track, etc. and THEN open up Wi-Fi, the app should find the device and you should be good to go, but this takes some practice.

That's it for now! Those are pretty much the building blocks that I dug up, and I'm hoping that it stirs some interest in those of you looking to do something similar. I'll post more information and elaborate soon, but I hope that you try it out if you're interested and post problems/suggestions!
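The serial-to-socket step could be sketched in Python along these lines. This is a hypothetical helper, not HLT's actual protocol: the port number and function names are my own, and the NMEA forwarding logic is split into its own function so it can be exercised without a serial port or a network.

```python
# Sketch: bridge NMEA lines from the GPS serial port to one TCP client
# (e.g. the app on the phone).  PORT is an assumption; use whatever port
# the app expects to connect to.
import socket

PORT = 8000

def forward_lines(lines, sendall):
    """Push each NMEA sentence to the connected client.

    `lines` is any iterable of bytes (e.g. ser.readline() in a loop);
    `sendall` is the connected socket's sendall method.  Returns the
    number of sentences forwarded, stopping cleanly when the client
    disconnects (sendall raises OSError).
    """
    count = 0
    for line in lines:
        if not line:            # serial timeout produced an empty read
            continue
        try:
            sendall(line)
        except OSError:         # app closed the pipe: wait for a reconnect
            break
        count += 1
    return count

def serve(ser):
    """Accept one client at a time and stream the GPS to it forever."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    while True:
        conn, _addr = srv.accept()
        with conn:
            # iter(ser.readline, None) yields lines until we break out
            forward_lines(iter(ser.readline, None), conn.sendall)
```

Catching OSError around sendall is what keeps the script alive when the app backgrounds and drops the connection; the outer loop just goes back to accept() and waits for the next client.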
http://forum.gps-laptimer.de/viewtopic.php?f=19&t=4060
Hello. I am new to programming and I have a problem with a template class:

#include <iostream>
using namespace std;

template <class T>
class Complex {
    T *rp, *ip; // pointers to the real and imaginary parts
public:
    T real() const { return *rp; }
    T imag() const { return *ip; }
    Complex(T r = 0, T i = 0) : rp(new T(r)), ip(new T(i)) {}
};

I want to write the destructor of the class, the copy constructor and the assignment operator. Please help!!!

> I want to write the destructor of the class, the copy constructor and the assignment operator.

What's the problem? It's the same as in a class without a template. Why are you using pointers for the real and imaginary parts of the complex number? I don't see the point of that.

Marius Bancila
My CodeGuru articles
I do not offer technical support via PM or e-mail. Please use vBulletin codes.

Hello.. it's only for educational purposes, it's not a real program. So.. the destructor of that class should be like the below?

~Complex() { cout << "Destructor\n"; delete rp; delete ip; }
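For completeness, here is one way the three requested members might look. This is a sketch based on the class as posted; because the constructor allocates with new, the Rule of Three applies and all three members are needed together.

```cpp
template <class T>
class Complex {
    T *rp, *ip;  // pointers to the real and imaginary parts
public:
    T real() const { return *rp; }
    T imag() const { return *ip; }

    Complex(T r = 0, T i = 0) : rp(new T(r)), ip(new T(i)) {}

    // Destructor: release both allocations made in the constructor
    ~Complex() { delete rp; delete ip; }

    // Copy constructor: deep-copy, so each object owns separate memory
    Complex(const Complex &other)
        : rp(new T(*other.rp)), ip(new T(*other.ip)) {}

    // Copy assignment: reuse our existing allocations, just copy values
    Complex &operator=(const Complex &other) {
        if (this != &other) {  // guard against self-assignment
            *rp = *other.rp;
            *ip = *other.ip;
        }
        return *this;
    }
};
```

Without the user-written copy constructor, the compiler-generated one would copy the raw pointers, and the first object destroyed would delete memory the other object still points at, leading to a double delete later.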
http://forums.codeguru.com/showthread.php?470908-New-with-templates!!-pls-help&p=1812841
My refactoring menus are grayed out, only showing "Rename". All other features in ReSharper appear to work. Interestingly, some refactorings show up on right-click in Class View (but not Inline Method). And more oddly, if I open it in Object Browser, I can get "Inline Method" to show. Sometimes "Extract Method" and "Introduce Variable" will show on right-click in source code. In 4.5/VS2008, I routinely could right-click on a method name and say "Inline".

- Visual Studio 2010 RTM
- ReSharper 5.0 personal full edition (C# + VB)
- Windows Server 2000

I tried this: but I couldn't install that version of MSXML, so I ran the "Update for Microsoft XML Core Services 4.0 Service Pack 3 (KB973685)". No help there. I also tried clearing the ReSharper caches and resetting the shortcuts. No help there. So I uninstalled 5.0 and 4.5 and re-installed Version: 5.0 Build: 1659. Still no improvement. I'm out of ideas.

> My refactoring menus are grayed out, only showing "Rename". All other features in ReSharper appear to work.

Hello,

Does the "Refactor This" (Ctrl+Shift+R) menu work?

— Serge Baltic
JetBrains, Inc — "Develop with pleasure!"

It only works in the sense that "Rename" shows up, whereas if I navigate to the Object Browser on the same method name, I get about 14 refactoring options instead of just Rename.

I have the same problem. Have you found out the fix for this?

I have the same problem, and Ctrl+Shift+R does bring up the ReSharper dialog, but only Rename is available (context is a public class).

Hello,

No, I do not have any positive idea yet. R# refactorings should be available in three locations: the main menu (ReSharper -> Refactor), the context menu (Refactor submenu), and "Refactor This" (Ctrl+Shift+R). As for the main menu, all of the refactoring items should always be present. They could be greyed out if a particular refactoring is not available, but they shouldn't be hidden. Are they there? Does "Safe Delete" (Ctrl+R, D in VS layout) work on a public class?
— Serge Baltic
JetBrains, Inc — "Develop with pleasure!"

Serge,

Thanks for the help and explanation. Bottom line: R# works fine; I was using it incorrectly. I did not realize that I had to have the cursor exactly on the thing that I wanted refactored, without highlighting it. This is slightly different from the way that Visual Studio works, but entirely logical. For example, in Visual Studio you can be anywhere in the class whitespace to get the "Extract Interface" refactoring. However, in R# you just click on the class name, not highlighting it, and you get the refactorings. Thanks again, Serge, for the help.

Buddy

Hello,

Yes, I admit the behavior could be somewhat confusing, especially with selection versus just placing the caret. The reasoning for exposing class-related refactorings on the class name only (as opposed to anywhere in the class) is that we'd like to keep the "Refactor This" menu as short as possible; otherwise it would have too many items to navigate through quickly. The same applies to selection versus caret: there are refactorings specifically applied to the selected range, and in that case the regular ones are pushed aside to keep the list short.

The possible objections are:

OK, the short "Refactor This" menu does not have class-related refactorings when I'm inside the class; why can't I still invoke them explicitly from the main menu (disabled instead of hidden, so the list is always large) or with a keyboard shortcut? — I think that for some of the refactorings this is done for clarity (for example, Safe Delete could apply both to methods and classes; if you're outside both the method name and the class name, it would get confusing whether it's trying to delete the nearest method or the containing class), while for others, like Extract Interface, it just has not been implemented (separate behavior of the short and full menus).
Space for improvement :)

If the class name is selected, then there are no special selection-range-based refactorings (like there are on expressions inside method bodies), so the no-selection refactorings should still be available. — Well, they should. Some of the refactorings actually are available, and others are not. Good point; here's a bug on our part.

— Serge Baltic
JetBrains, Inc — "Develop with pleasure!"

I found a solution to this same problem: if the class declaration is within another class, it's greyed out... Temporarily comment out the parent class(es) and then "Extract Interface..."

Matt
https://resharper-support.jetbrains.com/hc/en-us/community/posts/206675005-Refactor-menus-greyed-out?page=1
I'm new to these forums (and pretty new to C), so first I would like to say hi to everybody. I have come here seeking help with a linked-list stack implementation.

Basically, I am trying to implement a program that generates a stack of regular playing cards, i.e. 52 cards: hearts, spades, etc. It then shuffles these cards and deals them out into two separate piles of equal size. This is for a school assignment with a focus on ADTs (something I am still trying to get my head around :S).

The code compiles without any errors; however, I get a segmentation fault whenever I try to run the executable. I have narrowed down where the fault occurs, and it seems to be caused by the line:

strcpy(newCard->suit, suit);

which is in the init_card function of deck.c. I understand that segmentation faults occur when the program attempts to write to memory it does not have permission to write to. My understanding of malloc() is that memory allocated by it remains until freed, regardless of which function the program is in.

I apologise for the amount of code included. If anybody can offer any help at all, it would be much appreciated. Thanks.
Code:

[main.c]
#include <stdlib.h>
#include <stdio.h>
#include "main.h"
#include "deck.h"
#include "stack.h"

int main (int argc, char **argv)
{
    int i;

    // Generate a deck of cards and shuffle
    char *suit[] = {"Diamonds", "Hearts", "Clubs", "Spades"};
    card deck[52];
    fill_deck (deck, suit);
    shuffle (deck);

    // Initialise stack pointers
    card *p1top, *p2top, *p1bottom, *p2bottom = NULL;
    stack_init (&p1top, &p1bottom);   // Initialise player 1 stack
    stack_init (&p2top, &p2bottom);   // Initialise player 2 stack
    deal (deck, &p1top, &p2top);      // Deal 26 cards to each player

    return (0);
}
[/main.c]

[main.h]
struct card {
    int value;
    char *suit;
    struct card *nextCard;
    struct card *prevCard;
};
typedef struct card card;
[/main.h]

[stack.c]
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include "main.h"
#include "deck.h"
#include "stack.h"

void stack_init (card **top, card **bottom)
{
    *top = NULL;
    *bottom = NULL;
}

void push (card *new, card **top)
{
    card *prev = *top;
    new->nextCard = NULL;
    if (*top == NULL) {
        new->prevCard = NULL;
    } else {
        prev->nextCard = new;
        new->prevCard = prev;
    }
    *top = new;
}

void queue (card *hold, card **bottom)
{
    card *prev = *bottom;
    card *new;
    new->prevCard = NULL;
    prev->prevCard = new;
    new->nextCard = prev;
    *bottom = new;
}

card *pop (card **top)
{
    return *top;
}
[/stack.c]

[stack.h]
/* Function Prototypes */
void stack_init (card **top, card **bottom);
void push (card *new, card **top);
void queue (card *new, card **bottom);
card *pop (card **top);
[/stack.h]

[deck.c]
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include "main.h"
#include "deck.h"
#include "stack.h"

card *init_card (char *suit, int value)
{
    card *newCard = (card *) malloc (sizeof(card));
    if (newCard == NULL)  // Check for successful malloc()
        return (NULL);
    else {
        strcpy(newCard->suit, suit);
        newCard->value = value;
        return (newCard);
    }
}

void fill_deck (card *deck, char *suit[])
{
    int i;
    for (i = 0; i < 52; i++) {
        deck[i].value = (i % 13) + 2;
        deck[i].suit = suit[i / 13];
    }
}

void shuffle (card *deck)
{
    srand (time(NULL));
    int i, j;
    card temp;
    for (i = 0; i < 52; i++) {
        j = rand() % 52;
        temp = deck[i];
        deck[i] = deck[j];
        deck[j] = temp;
    }
}

void deal (card *deck, card **p1top, card **p2top)
{
    int i;
    card *hold;
    for (i = 0; i < 52; i++) {
        hold = init_card (deck[i].suit, deck[i].value);
        if (i % 2 == 0) {
            push (hold, p1top);
        } else {
            push (hold, p2top);
        }
    }
}
[/deck.c]

[deck.h]
card *init_card (char *suit, int value);
void fill_deck(card *, char *suit[]);
void shuffle(card *);
void deal (card *, card **, card **);
[/deck.h]
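A likely fix, sketched below (the struct definition is repeated from the poster's main.h so the snippet is self-contained). The crash is not malloc()'s fault: malloc() allocates the card itself, but newCard->suit inside it is an uninitialized pointer, so strcpy() writes through a garbage address. Allocating space for the string before copying into it fixes the segfault:

```c
#include <stdlib.h>
#include <string.h>

/* Repeated from main.h so this sketch stands alone. */
struct card {
    int value;
    char *suit;
    struct card *nextCard;
    struct card *prevCard;
};
typedef struct card card;

card *init_card(const char *suit, int value)
{
    card *newCard = malloc(sizeof *newCard);
    if (newCard == NULL)          /* check for successful malloc() */
        return NULL;

    /* Allocate room for the copy (+1 for the terminating '\0')
     * BEFORE strcpy() writes into it. */
    newCard->suit = malloc(strlen(suit) + 1);
    if (newCard->suit == NULL) {
        free(newCard);
        return NULL;
    }
    strcpy(newCard->suit, suit);

    newCard->value = value;
    newCard->nextCard = NULL;     /* also initialise the links, so push() */
    newCard->prevCard = NULL;     /* never sees garbage pointers          */
    return newCard;
}
```

Alternatively, since fill_deck() stores pointers to string literals, init_card() could simply assign the pointer (newCard->suit = suit;) without copying, as long as nothing ever frees the suit strings.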
http://cboard.cprogramming.com/c-programming/94090-segmentation-fault-aaaaaaaah.html
SUMMARY: I spent two years trying to make Rails do something it wasn't meant to do, then realized my old abandoned language (PHP, in my case) would do just fine if approached with my new Rails-gained wisdom.

INTRO / BACKGROUND:

(To be fair to Jeremy's mad skillz: many setbacks were because of tech emergencies that pulled our attention to other internal projects that were not the rewrite itself.)

The entire music distribution world had changed, and we were still working on the same goddamn rewrite. I said fuckit, and we abandoned the Rails rewrite. Jeremy took a job with 37signals, and that was that.

I didn't abandon the rewrite IDEA, though. I just asked myself one important question: "Is there anything Rails can do that PHP CAN'T do?" The answer is no. I threw away two years of Rails code and opened a new empty Subversion repository. Then, in a mere TWO MONTHS, by myself, not even telling anyone I was doing this, using nothing but vi, and no frameworks, I rewrote CD Baby from scratch in PHP. Done! Launched! And it works amazingly well. It's the most beautiful PHP I've ever written, all wonderfully MVC and DRY, and I owe it all to Rails.

Inspired by Rails:

- all HTML coming from a cute and powerful templating system I whipped up in 80 lines, all multi-lingual and caching and everything
- ... and much more

In only 12,000 lines of code, including HTML templates. (Down from 90,000, before.)

Though I'm not saying other people should do what I've done, I thought I should share my reasons and lessons learned here:

SEVEN REASONS I SWITCHED BACK TO PHP AFTER 2 YEARS ON RAILS:

#1 - "IS THERE ANYTHING RAILS/RUBY CAN DO THAT PHP CAN'T DO? ... (thinking) ... NO."

Ruby is prettier. Rails has nice shortcuts. But no big shortcuts I can't code up myself in a day if needed. Looked at from a real practical point of view, I could do anything in PHP, and there were many business reasons to do so.
#2 - OUR ENTIRE COMPANY'S STUFF WAS IN PHP: DON'T UNDERESTIMATE INTEGRATION

By the old plan (ditching all PHP and doing it all in Rails), there was going to be this One Big Day, where our entire intranet, storefront, members' login area, and dozens of cron shell scripts were ALL going to have to change. 85 employees re-trained. All customers and clients calling up furious on that One Big Day, with questions about the new system. Instead, I was able to slowly gut the ugly PHP and replace it with beautiful PHP. Launch in stages. No big re-training.

#3 - DON'T WANT WHAT I DON'T NEED

I admire the hell out of the Rails core gang who actually understand every line inside Rails itself. But I don't. And I'm sure I will never use 90% of it. With my little self-made system, every line is only what's absolutely necessary. That makes me extremely happy and comfortable.

#4 - IT'S SMALL AND FAST

One little 2U LAMP server is serving up a ton of cdbaby.com traffic damn fast with hardly any load.

#5 - IT'S BUILT TO MY TASTES

I don't need to adapt my ways to Rails. I tell PHP exactly what I want to do, the way I want to do it, and it doesn't complain. I was having to hack up Rails with all kinds of plugins and mods to get it to be the multi-lingual integration to our existing 95-table database. My new code was made just for me: the most efficient possible code to work with our exact needs.

#6 - I LOVE SQL

Speaking of tastes, a tiny but important thing: I love SQL. I dream in queries. I think in tables. I was always fighting against Rails and its migrations hiding my beloved SQL from me.

Ok. All that being said, I'm looking forward to using Rails some day when I start a brand new project from scratch, with Rails in mind from the beginning. But I hope that this reaches someone somewhere thinking, "God, our old code is ugly. If we only threw it all away and did it all over in Rails, it'd be so much easier!"

Unintentionally hilarious? So, uhhh, is your rewrite done?
Good for you for sticking with what suits you. We sure have different tastes, but I'm glad you ultimately judged things by real-world metrics rather than religious convictions. In my view, your love of SQL and your focus on your existing tables were the big barriers here. Writing a procedural system in an object-oriented framework is a poor match. Treating the database as the main focus, rather than as an implementation detail, is, from the Rails perspective, the wrong approach. And Rails is very opinionated software; it does its thing well, but if you want it to do something else, it's not a good match. And I'd note that integration is only a problem if you're integrating components via the database. In effect, you've duplicated the schema everywhere, making it impossible to change. Integrate via APIs rather than your database, and integration isn't nearly as big a deal.

Your last comments seem to re-affirm the old cliche that PHP becomes ugly. Doing something from scratch, in a one-man team, after circling around the issues for two years, is a little different from working in a team with a new problem and needing a clean framework with (omg) Opinionated methodologies.

Your website looks like a spam site.

When I read "PHP is better than Ruby", I had to laugh... I wrote a lot of PHP code as well, and after three years, soon four, with Ruby, I am 100% sure that Ruby is a LOT better than PHP in EVERY aspect (I don't consider Rails to be a part OF Ruby), and I think your conclusion (that PHP only seemed to suck because you hadn't yet "become a better programmer") is WRONG. I will tell you the truth: PHP as a language sucks. I agree on one point... the domain language of PHP, the World Wide Web, should be solved BY Ruby. Everything that is possible in PHP should be possible with Ruby as well.
I hope Matz decides to adopt this, because I am 100% sure that people will use Ruby instead of PHP IF they can use Ruby just the way they can use PHP (which means no stupid mod_ruby errors, for example).

I actually agree on one more point: hype is never good; what really matters is IF you work BETTER with these tools or not. People, trust me, PHP is ugly, and will ALWAYS stay ugly. I jumped away from PHP long ago. Just because Rails doesn't fit your needs doesn't mean Ruby as a language does not fit you!

Oh, by the way, my comment was a little too aggressive. I think I can agree that this should make the Rails developers THINK about these issues and attempt to SOLVE (or ease) them, too, because it seems like a rather legit reason, and the arguments within it can be well voiced to improve Rails... It is also true that Ruby *can* grow very complex, but it really depends on the programmer. Whenever possible, one should strive to pick the shortest, cleanest, easiest-on-the-brain solution instead of a "magical solution" (unless it is a very clean and clear human-readable and nice-to-maintain DSL, of course ;> )

It seems like starting from scratch in PHP is kind of similar to starting from scratch with Ruby and ERB, which is what DHH did, right?

Useless post without a concrete illustration. Show us an example of:
1) What you tried to accomplish.
2) How you tried to implement it in Rails.
3) The Rails code that "didn't work."
4) The "beautiful" PHP code you created instead.

I enjoyed the article, but now I'm sad I can no longer say "CD Baby does!" when someone asks me if anyone big is using Rails.

> William Pietri:
> In my view, your love of SQL and your focus on your existing tables were the big barriers here.

Bingo. Objects, their responsibilities and relationships first, database second (in Rails). I loved the part about the database as an "implementation detail." That sings to me.
Taking an extreme stance on breaking down the problem domain yields the best results for me. Who keeps track of my appointments? My first answer was "me." Then I stood up and paced around the house for a bit. Upon sitting back down I wrote...

./script/generate model AppointmentBook

Break it down. Down. Down.

Looks like you, among many out there, had a poor team. A poor development environment where you were working on several projects at once and putting priorities in the wrong order among several projects. I will have to say that being stuck in one mental framework can be hard to overcome, especially when learning to code in a new and cleaner manner as per Ruby on Rails. But you actually could have coded so much faster had you stuck to your guns and not given up, or not, since you prioritized among so many projects. Know this: straight out of the gate, even with little MySQL/SQL knowledge, I know college grads that know and love Ruby on Rails who can code entire websites and admins in a matter of weeks, not to mention localization and advanced features like caching and advanced logging systems. One important thing to learn from these guys and gals out there: they write tests alongside their code and think in that manner. All their code and program design fit together, and they develop better code and projects because of that.

You are stating pure opinion without discussing anything substantial. I agree with the author, but I honestly don't see the point of posting it here. Take it elsewhere if you can.

This is a problem with any web framework. The idea of a framework is to make decisions for you, and if that doesn't work for you, then it's the wrong framework. It's not a hit against Ruby in this case, just a hit against Rails.

Coding entire websites that do nothing is exactly what Rails is great at. Having been involved in framework development that had OOP inheritance layers 25 children deep, and also in heavy utility development with few objects, there is no comparison.
Don't fault the author for liking SQL and its relative, if not portable, efficiency. The fact is it works, and it's way better than some pretend ORM like Rails' (ooh look Johnny, a WHERE clause builder!) which says you can't do any joins that are meaningful to your application unless you pull it all back and iterate the old-fashioned way. SQL allows you to let the database do the work. But as far as a lot of Rails or other ORM users are concerned, pulling back rows from individual tables and "joining" them in memory in a very inefficient way is somehow beneficial. Because they don't know how to leverage existing technology. And that's what I liked about this article. Because it is true. Rails and Ruby may cover for you for a while, but in the end you will have to do heavy lifting. So your little login and your "oh, I coded a website" and "I don't know SQL and how databases work" is a bunch of bullshit. ORM, and especially ActiveRecord, is fool's gold for people who will never learn what it takes to actually get the job done. But it is a good learning tool for people in a hurry. Learning how to fail.

she: "When I read "PHP is better than Ruby", I had to laugh..." If you read that here, I think you need to adjust your browser. And specifically to nick: "I know college grads... bla bla bla." Which is why we are outsourcing software development. Because they don't know SQUAT.

Do you use memcache at all for any of the SQL requests?

Yes, Derek, I had quite the same experience. I tried rewriting a web app (PHP -> Rails) for about 2 weeks. I gave up and started back in PHP, but the new code was a whole lot more methodical.

Well, personally, it just sounds like you picked the wrong project; Rails is opinionated and expects things to work certain ways, as you noted, and if your project is written entirely differently from this, guess what: Rails might not be the best choice. In your case, it sounds like PHP definitely was the better choice.
It would be interesting to see, if you pick Rails for a future new project, whether things would be different, and, two years down the road after many enhancements and changes, which codebase is more manageable and, should you have to recruit others for help, which they find easier to work with.

Just a few things: Rails != Ruby. Ruby was there long before Rails, and it was and still is one of the easier scripting languages. However, PHP currently means having a broader base, and a lot of experience. I can just tell that the PHP pages I had were relatively messed up. If one really likes to know, I can check the old pages (there we used PHP, HTML (of course ;-), Apache and MySQL). I do not like MySQL for whatever reason, so I replaced it with PostgreSQL. We are also using a Ruby-based wiki but are still using phpBB. It works, but has just one really big drawback: it gets spammed with subscriptions. It seems that because it is that popular, the captcha was somehow cracked. This makes me angry. I would like to fix that, but the code from phpBB is not for the faint of heart. However, I'm not keen on testing my braveness on it...

But some other things to consider: web programming does not stop with PHP or Ruby. There are so many alternatives. You have chosen what you knew. But what would happen if you had tried something else? Here are some ideas:
1) OpenACS
2) Seaside
3) Some Lisp-based stuff: AllegroServe, Hunchentoot, etc.
4) Some Scheme-based stuff
5) Erlang stuff
6) Objective-C
7) Java
8) Perl
9) Python
and so on...

I've tried for so long: PHP stuff (serious), Lisp stuff (very serious but with very little success), OpenACS (just had a look), Rails (which saved me from spending another few months on the Lisp stuff). I, for my part, am quite happy with Rails. Maybe it's its model that suits me that well...

Regards
Friedrich

Judging from the article, what you tried to do was rewrite your old PHP code to Ruby on Rails 1-to-1. This is the wrong way to go.
Rails (and any framework in general) has some constraints you have to adapt to, and you break those only if you are *absolutely* sure there is no way to accomplish what you want within those constraints, which more often than not means that you have to change your perception of the problem, not the code.

Now, this is some experience I would really like to buy a book about. The language of my company is PHP, and it's very unlikely that there will be a switch. I don't see a very compelling reason either. But I have been looking for a book by somebody with a 'puristic' mind who has found a way of working with PHP. I found none; perhaps I was not looking well enough, but the books were either 'PHP 101' or 'let's create a big ball of mud'. I hope you will write more about that experience you have. I was thinking... before you have the book written ;) could you post some code examples of all the coding rules you imposed upon yourself?

Gee, I'd like to see those rule examples Cornelius asked about as well....

Haha at the Rail Kiddies...

I'm a little reluctant to add to the wasteland that is this post and these comments, but here goes. I'm familiar with the situation here. The deal was this: Derek was not a programmer; he was a musician. Project fails. All right. As he has learned in #2, legacy compatibility trumps everything. Also: ship early and often. As you can see in Derek's post about MySQL encodings, he's not always the clearest thinker. Even above he says that REST means POST-only destruction, which misses the point entirely. His team was fine (mostly just Jeremy, until another developer was hired in the last months). Rails was fine. But there were a lot of things wrong with the project plan ("rewrite everything, eventually") and with the project leader, who was convinced he had found a silver bullet. No framework saves you from your own inexperience. Out.
To Richard Hertz and Cornelius, and others asking for more concrete examples: maybe some day. I finished the rewrite a month ago, and it's taken me a month to find the two hours it took to write this post, which I did for the benefit of an invisible someone someday. To show in examples what I love about my new PHP code would take hours and hours more, and I might do it some day, but I've got years' worth of things that are more important for me to do, first. Maybe some day I'll just put it all up on a public svn server, but I'm not ready for that yet. Also, when considering it, I thought my specific code that made me happy probably wouldn't make you happy. I just designed a little system for my tastes, and that's the point. I came on toward the end of the project, so I can't too much credit, but yellowpages.com was rewritten in Rails in four months. That's four months from the first subversion commit to deployment in production. That was with a team of five or so, learning as they went along, with the typical business-person meddling of a mega-corporation. That includes writing a service layer that interfaces with a brand new ruby FAST library. That includes load testing and realizing you need to use mongrel handlers in a few key places. Bob, Rails is not just ActiveRecord. Our heavy lifting is done on a service layer. Rest assured, it blasts SQL away. Bob, how many millions of hits a day does your site get? Well in a way you are right. Trying to make Rails work the way they are not supposed to is really a hard task to accomplish. Its crazy to even go that way. Really crazy. Saying that it is better to use bare PHP and custom made templating system (isnt PHP a "clever template" after all?) is better than starting with Ruby and ERB is pure nonsence to me. Ruby is much better language (pure OOP in Ruby vs hacked in OOP in PHP) but has some downsides in deployment. 
Integration is not an issue when using REST and web services, but I guess that would be too hard to implement in PHP ;-)

Reading your reason #6, I see that you totally don't get what Rails is. And that's a pretty sad summary after 2 years. PHP is a better PHP than Ruby (and Rails) ;-) I saw it coming when you wrote "Jeremy ... twisting the deep inner guts of Rails to make it do things it was never intended to do".

Good post Derek. Don't let the Insane Rails Posse get you down. Your approach is refreshing. Actually build exactly what you need instead of buying into that whole Rovian "it's opinionated and only works if you kiss its ass, and if it's not working it's because you're doing it wrong" talking-points brief. Look, if something was really good you all wouldn't have to work so hard to defend it. It just might be possible to develop software using something else.

> yellowpages.com

Wow, you and your posse made a site with links that lets you search. And you did that in 4 months with 5 people? Respect. I just can't imagine doing that so fast any other way.

The problems with PHP run much deeper than just your coding style.
* PHP handles integers wrong on 64-bit platforms.
* Its namespace is polluted to a point that makes working with libraries a major pain.
* Its identity and type coercion system makes it easy to produce security flaws.
* Its OO model doesn't allow inspection, which makes working with an ORM like Propel or eZ Publish's PersistentObject a real problem of XML/YAML configuration or lots of custom coding.
A problem that both have is that their Unicode handling sucks big time (which is why I would have argued in favor of Django/Python; they're at least getting there). Maintenance-wise you'll almost always be much better off with Django, Tapestry or Rails - especially since your maintenance phase will almost always be about 4-10 times as long as your development phase.
And as both Ruby and PHP are Turing-complete, there's nothing that can't be done with each of them (except implementing 64-bit integer handling in less than 80 lines of code in PHP), including implementing each other. cheers

I agree with most of the things you said about PHP not being good. But, as far as your project is concerned, it failed because of *your* lack of planning and vision. Hell, anyone could have re-written that site in JSP/mod_perl/whatever in 15 months. Even trying to relate that to Rails and use it as a point of difference between Rails and PHP is totally lame. Once again, Rails didn't fail you. You failed yourself.

I like your site: minimalist, pure techie. However, you can do better. I agree with your point about PHP and RoR... RoR is too simple; that's exactly what kills it; it will never make it to the top. PHP - your housewife: it WORKS for you, however ugly. RoR - your beautiful one-night stand; however, you can have 20 of them each year. I can say it right here: any language that requires too much brain power absolutely and definitely WILL NOT make it to the top. WHY? No manager likes thinking too much, unless efficiency really is important. My ability to "get on with it" while you are still tweaking your RoR code is PRICELESS - why? Because I grabbed 2/3 of your customers before you launched your site!! To be WINNING in this world you have to take risks; if you haven't been investing 180~190% of all you've got [startup $$], and talked to 1000+ of your potential customers, your project would most likely fail - NOT because your code or service is BAD, but because you spent just a bit too much time TWEAKING on computers when you should have been communicating with someone else... Remember: business is always business; you are selling a service. If you are only an employee who takes pride in clever code and admiring comments from colleagues, then you WILL always be an employee. Take risks, work hard, COMMUNICATE, and HAVE FUN.
You will find your millions or billions [with luck + a lawyer/accountant friend].

@jason yeah, I thought that too! LOL woot a holy crap website! :/? Oh and I just checked out your site. What an ugly P.O.S! 2 years and you can't build craigslist for CDs using Rails. Jesus.

And for those idiotic Rails evangelists who don't know shit about Rails - check, and if you ever saw the changelog of Rails, you wouldn't be being a moron here.

You're not alone. Although my site doesn't have the exposure yours has, I went through the exact same thing, only I had been using ASP.Net. Not to pat my own back, but I'm pretty good with ASP.Net and SQL. Fortunately, it occurred to me about 4 months in.

Wow, if you liked PHP for those reasons, you should REALLY try ASP.NET (don't worry, you can run it on Mono if you blindly hate Microsoft like most other people on O'Reilly).

It seems to me we have two types of commenters: those who understand where you are coming from (even if they disagree) and the pre-pubescent fanboys you'd only expect to see with something like Nintendo, Sony or Halo.

The one thing I've noticed working on several Rails projects is this: if you're starting on a new project, fresh and clean, no existing database, you can't beat Rails. However, if you're trying to migrate an existing project to Rails - more specifically, if you have a database that Rails must be laid on top of - it gets a LOT more difficult. Just my experiences there, G

cdbaby.com is comprised of over 100k lines of source code?... terrible... you certainly are one wasteful, beat-around-the-bush lengthy programmer, as the Americans usually are. If you need 100,000 lines of code to provide what cdbaby.com does and is, I wouldn't want to hire you in a lifetime.

I too love SQL, and dislike abstractions that take the power away from you.
I'm coming from a Java perspective here, rather than a PHP perspective, but I will not use database interface engines like JDO which turn 1 clear, concise line of SQL (stuck into a prepared statement) into 10 messy method calls to do the same thing, with a horde of behaviours that aren't what I want. Spending months fighting and bypassing JDO's issues was a nightmare, and I will never do it again. Writing some DAOs with PreparedStatements took a tenth of the time, far less code, and is far more maintainable by someone who knows Java but doesn't have JDO experience. That said, things like Struts and Tiles - lightweight frameworks that make your work easier without limiting the power available to you - are good. Indeed, Tomcat itself is a framework for writing a webserver that takes all the webserver grunt work away, leaving you to just write the logic. What I don't understand is why you didn't drop the limiting framework - Rails - and just stick with Ruby. I don't know Ruby, but Rails must be built upon libraries for database use, etc., that you can access yourself, write your own SQL statements, do things the correct way, in a language you admit you like? On the other hand, when you need to do something quickly, it is far better to do it using mechanisms you know and understand.

I've had the exact opposite experience, but I won't turn this into a PHP-vs-the-world thing. Our sites handle tens of thousands of users per day and we have some extremely elaborate things going on. There have been extreme challenges, as with any other language, but we have figured out good ways to handle every obstacle with Ruby. We've even built our own distributed Ruby application to handle the heavy-lifting pieces, which has worked great. Anyway, to your... arguments?

1. "But when I took a real emotionless non-prejudiced look at it, I realized the language didn’t matter that much."
So you're saying that in the end you went back to the language you were most comfortable with, not that Rails couldn't handle your app, right?

2. The One Big Day switch doesn't work. We have tried it before; you end up getting a flood of repeated questions and you end up confusing a lot of people. The best way to do this is to prioritize the intranet pieces and website pieces and switch sites and intranet apps one at a time. We have 30 intranet apps being used by a few hundred people... We held big meetings to say, "hey, this is going to change and this is what we imagine it will look like." A few months later, their pieces started changing and they were (mostly) prepared. But doing it in pieces allowed us to re-evaluate how each project was being used and make improvements, where your One Big Day approach limits you to trying to build it to function as it did before as quickly as possible.

3. Yeah... there's a lot of crap in Rails. A lot of plugins and libraries that never get used, and there are several libs that are just too unstable for real use. It would help if they cleaned up the docs and API a bit. This is a problem :(

4. Small & fast... you mean Rails, right? It's easy enough to set up a mongrel cluster with Apache proxy balancing that your 2U LAMP server should've been able to handle most anything.

5. Rails lets you do exactly the same thing.

6. I think those who use the object associations directly in Rails are doomed to failure. You can't just do someobj.otherthings and expect it to be optimized in any way. You can do direct SQL in Rails, but what I like to do is: someobj.otherthings.find(:all, :conditions => "otherkey='asdf'", :select => "the, fields, I, want"). I adjusted all of our queries to be limited to only getting what I want, and it improved performance significantly... especially in cases where I was sorting after-the-fact through hashes of arrays and whatnot.

7.
I kind of see where you're going with this, and agree that any exposure to a new language is a growing experience. What I'd like to see are specific examples of what you couldn't do with Rails or what exactly made it more difficult. Without specific examples, I have to say, your post just sounds like a PHP fanboy who gave Rails a try, did learn some interesting stuff, but went back to what he was already more comfortable with. I suppose there's nothing wrong with that in itself.

The ridiculously misleading and idiotic summary on slashdot of your post: "Two years later, through blood and sweat, the project was then canceled because of limitations of Rails. Rails just wasn't meant to do everything since it is very much "canned" project." Now that I've actually read the article, it is obvious you didn't mean your post as flamebait, but you are lacking specific examples of how Rails didn't meet your needs.

I've suspected this about application stacks for a while. Perhaps there's a theorem somewhere that states "The more 'intuitive' a framework is, the less potentially useful it will be for anyone other than the original authors." I've had projects in C# where I would swear that I could do it better in Python in half the time, and other projects where I felt the converse. Same thing with J2EE vs ASP.NET, etc. etc., ad nauseam. It depends on the problem domain. It seems the Rails posse is so damn blind with their undying love for their framework that they just can't accept that it may not be useful in solving some problem they never thought of. Sort of like those people who use a 1-button mouse and can't accept that some (most, really) just feel more productive with 2 or even 3 buttons on their mice. The original poster IS THE DOMAIN EXPERT here. Founding an independent music company that has been able to survive this long in a sea with only 4 really, really greedy BIG FISH is a testament to that.

Ever heard of Incremental System Replacement? You're on the Web!
You can replace the implementation page by page without anyone knowing what's going on under the hood!

Ever heard of Incremental System Replacement? Agile Software Development? You're on the Web! You can replace the implementation page by page without anyone knowing what's going on under the hood!

I don't care. Rails is all about beautiful code to me. Code PHP all you want, I'll stick to Rails.

Looks like everyone missed the basic take-away in this post: in the end, there's no real benefit to doing a scorched-earth re-write of a working system. All posturing aside, technology A will have advantages over technology B, and technology B will have advantages over technology A. The more important point is that a re-write will take at least as long as the original implementation, and, at the end, if you're very lucky, you'll have a system that can do what you already had at the start of the project. With a two-year re-write, you'll have a two-year-old system. Or, worse yet, and more likely, you've continued to enhance the existing system at the same time, and may or may not be reflecting those enhancements in the new system. You're much better off refactoring and improving an existing system than grabbing the latest silver bullet.

This is an excerpt of an email I sent regarding this very interesting article. I enjoy programming in C#, but I wonder what we might be getting into. I wonder if C# is better, or if it looks better just because it's an attempt to prevent us from making the mistakes we might be tempted to make if we don't think about our code. (To the reader: I'm not dismissing C#, just wondering if we are biting off more than we need with a language switch instead of a self-disciplined clean-up.) So then the question follows... can a programming language prevent sloppy code? The obvious answer is no.

Damn, but Rails sucks. I'd rather program in interpretive BASIC.

Just realized this post was on Slashdot, with a summary that got the point entirely wrong..
Still, have fun flaming. It's funny.

Seriously, why didn't you look at Java? Java is really OO, it is a nice language, and the web platform is superb. J2EE is really powerful, but you may not need everything..

GarryFre: THANK YOU. That is a much more succinct and accurate way of putting it!

TJerk:
> Seriously, why didn't you look at Java?
The whole point was that ALL of our existing code was in PHP, and in order to transition over to new code, sticking with PHP made things 1000 times easier on everyone. If our old code was in Java, I would have looked at staying in Java.

>> cdbaby.com is comprised of over 100k lines of sourcecode?... terrible
No - our entire operation: managing digital distribution to 150 outlets like iTunes, international credit card sales and merchandising, accounting and weekly payments to 200,000 musicians with transparent accounting, a 50-person warehousing and shipping system that sends and receives thousands of boxes a day, a members login area that lets them administer our site, set their digital rights, see all accounting, our entire customer service intranet, and digital rights management in-house... THAT is what's in 12,000 (down from 90,000) lines of PHP. RTFA. Ppbbbbt. Neener neener. :-)

If you were trying to do a complete re-write and did not include the database layer, then you were headed for disaster from the start. Your approach should have been a full re-write (including the database schema). And since you would be migrating an existing system, one of the requirements would be to write the data migration program from the old schema to the new one. It shouldn't have been such a monumental task unless you were trying to keep too much of the old. You would have had more headaches in data migration, but that's all. I think this has less to do with the technologies involved and more to do with the approach you took for the re-write.
But this is how you learn, and now you have a long list of things that you know for a fact will not work. That's called experience, and it's invaluable in this business.

You should have tried the Catalyst framework for Perl, which gives you the shortcuts and structure of Rails combined with much more freedom. Yeah, I'm already hearing "Perl's dead" shouts... but I don't care as long as it gets the job done :) And with Catalyst, web dev has never been more fun and productive. And DBIC is one hell of an ORM - light-years ahead of the ActiveRecord stuff Rails has to offer.

PHP is a super slow language that not even Ruby can beat for performance.

I've got similar reasons for not using Rails. My alternative is something other than PHP, but I know the feeling.

We have many languages and many frameworks for different problems. You need to look for the best one for your problem. I love Ruby, Python, Rails, TurboGears, but I still can't use them in my work, because they aren't the best way to solve the problem.

Coming from the same place as you -- a legacy PHP app that does what I want -- I considered a re-write. To test the supposed power of MVC and automated object-relational mapping, I wrote a small application in Django. In doing so, the most frustrating part of the exercise was the ORM. I spent a lot of time trying to figure out how to make Django's database classes do what's easy (for me) in SQL. #6 is wisdom for anyone who's already comfortable with SQL, which, btw, isn't that hard.

I looked at cdbaby.com just now. It's a shitty site. It's got a little shopping cart and catalog, and from Derek's later comments, a few small back-end pieces. Clearly not a big deal. It's great that Derek learned some things about pretty code and software development. Getting a degree in computer science could probably have helped him realize a lot of the mistakes he made before he made them. Rails is crap, but for crap it's one of the better craps out there.
You can do the same thing in PHP; the language doesn't matter hugely (well, not really true - Ruby is pretty; it has flaws, but it is pretty). Joel has an interesting article on rewrites; you should read it.

So many comments and not a single one to point out the Zend Framework for PHP5? I'm a developer with 12 years' experience, 7 years of PHP, and I can't code without it anymore. It just takes care of so many rudimentary tasks. It provides MVC, DB abstraction (you can still write your own SQL), Zend_View (templates that make sense, unlike Smarty), offers classes for almost anything and everything, and it's extremely well thought out. It's quite beautiful because it's so lightweight. Now, I don't want to come across as some kind of Rails hater, because I certainly am not. I'm having a really hard time believing that RoR is to blame for the project failure. I've seen people drag tush with PHP projects as well. Bottom line for me is: use the best tool for the job that you're familiar with. For me that's PHP+ZFW, because I know it. For a few developers on another team in our company it's RoR.

Derek, I must admit I admire your courage posting this on the Ruby site. :) I think one thing you might consider is that Rails just wasn't right for your organization. Like most frameworks, it just doesn't scale to larger sites with "outside of the box" goals and legacy integration. I've worked on a number of larger sites like yours, and I've seen the mess that occurs when you have to patch a framework to "get things done" - which is exactly what you would have ended up doing in Rails (if you hadn't done this already). Also, too often instead of debugging the baby, you end up debugging the bathwater with frameworks. I've found myself knee-deep in ActiveRecord trying to figure out why my foreign keys are giving the library hives, amongst a million other things. Mason (Perl) is no better... it's just more mature.
I think Rails is sexy, and a great idea; it's just not mature enough for large sites and/or development teams just yet. I'll stick with using Ruby for damn near everything else, and mod_perl 2 for serious web work.

Too high. Too high.

I worked at a place briefly that had a similar storefront/backend written in PHP. They decided to rewrite it in Java and hired Java folks. Three years later, still no store. Sometimes it's best to stick with what works for you. Especially if your business depends totally on the result.

I love and use C. Not C++ or Obj-C or C# or Java... but C. I produce web applications in record time and they are insanely fast. For data, I use the file system with a simple distributed cache. You see, MySQL won't scale, and once you get any real traffic, your site dies. If you need 10,000+ requests/second, use C.

Couldn't have said it better myself. I've tried learning many languages over the years, and really, I always come back to PHP... yes, partly because I know it inside-out, but at the same time it is also because it is easy to work with, and you can still create clean code - and very little of it to do a lot - if you think things through and use OOP properly. This is about to be flamewar central; however, you have my vote on everything you mentioned above!

Aaron, a little research would go a long way in making you *not* look so foolish. In the article he gives you Jeremy Kemper's online alias, bitsweat. Anyone who's looked at more than a couple of changesets in the Rails repository would instantly recognize the name. And, yeah, he is a core team member.

You seem to have missed the point of the entire post. Use the tool that works for you. For some people it's Rails, for some people it's PHP. At the end of the day what matters is that you get the job done.

Haha. The CD Baby website kicks ass. It works; it makes it easy to navigate through the thousands of albums they have. They have reviews. It's nice, simple, and easy to use.
Plus it does not suck up my computer's resources with javascript hell like most 'Web 2.0' websites do (big example: Digg.com is absolutely horrible design). Plus it's very handicap-accessible, which Web 2.0-style sites totally fuck up at. Plus it's not flash-site hell. Most 'Web 2.0' sites that manage to avoid javascript hell then run headlong into flash hell. No flash-based little bars to play music or other nonsense I have to go through. Just hit the 'listen broadband' button and I get an m3u file that opens right up in Totem player. That's nice. The only change I could recommend is to hire an artist - a real artist in the visual arts, not some javascript or website hacker (preferably somebody with very little knowledge of any markup or scripting language, who does art with real materials) - to make a more sophisticated-looking design for the first page (no animations/columns/blogs or any such nonsense).

I don't know if it's true or not, but the general impression of the comments on this page reeks of snobbishness. And I don't remember for certain, but I once heard a very good developer say: "Object-oriented languages can end up a crutch for people who can't think in an object-oriented manner." That is, C or PHP or anything else can take advantage of code reuse and objects as well as C++ or Java or anything else. It just requires more skill and ingenuity than it would otherwise normally take. The only real thing I dislike, as far as PHP goes, is the security record, which is something that Apache suffers from also (compared to competitors like IIS6). For PHP/Perl/Python/Ruby and Apache to recover the ground they've lost to Microsoft, it's going to take a lot more obviously focused effort (as in, obvious to people outside these projects) on ease of use and security than what is going on right now. Oh, well.

I agree with almost all of it... But: #3, I don't want what I don't need?
Hey, but with PHP you get the same million functions in every scope, whether you need them or not. I think the problem with PHP programmers is that they need a strict teacher to show them the right way to do the job. One last thing: you should really try Django; it is a good teacher that lets you write SQL if you want :)

Rewriting any significant system (even in the same language), especially one in production, is difficult. I have seen Mr. Sivers' ads looking for "Ruby Rock Stars" - rock stars are good for boosting attendance at conferences, generating traffic to a blog post, and influencing people. They are not the right resource for a major rewrite. The right resource (say, someone with 10-20 years of experience) would most likely have come in, looked at your operation and code, and broken the news to you gently that trying to rewrite your entire system was a bad idea. Derek's experience is more of a commentary on how hard it is to manage technology than anything inherent to Ruby/Rails or PHP. I also think he does not want to admit that he was lacking in this area and has chosen to blame his tools. "This damn hammer is no good - always crushing my thumb!" The 7 points are silly and call into question his knowledge of programming, and of Rails in particular. Take item #6, "I love SQL". Does Derek really love those "create table ..." statements so much that he can't live without managing them directly? Migrations are intended to wrap DDL, and in any case migrations are optional - there are no config files to edit to turn them off; just stop typing "rake db:migrate". If Derek really wanted to express his love for SQL, I would think ActiveRecord would have been the more appropriate target - but even in this case, nothing prevents you from using raw SQL to get at your database. I think Derek needs to "man up" and just admit he fucked up, not his tools.

Mr.
Sivers - thank you for your informative comments, but could you please give us an example or a sense of (from a previous post by Richard Hertz):
1) What you tried to accomplish.
2) How you tried to implement it in Rails.
3) The Rails code that "didn't work."
4) The "beautiful" PHP code you created instead.

If it took you two years to determine its inadequacy, I would suggest you use more direct and efficient evaluation techniques. I can't imagine taking two years to find out that a particular language was unsuitable for a project. How could you have neglected to do the investigation necessary to make the same determination in a month?

Outstanding article. Not an attack on anything, just a path to better programming. Understand what you need, do not fear SQL, use TDD and DRY to minimize your code, develop your own tiny, specialized frameworks, then finish and get on to the next thing. Brilliant.

Thanks for the update! I wondered what happened to your rewrite project (especially while weighing PHP vs. Ruby). What happened with PostgreSQL? Or did you stick with MySQL, too?

I'm not entirely sure I understand the article's objective, but I'm glad you feel you learned something. At the end of the day you just needed to rethink your app and clean up your code, but you spent 2 years diddling around? A couple of things popped into my head as I read this:
* You didn't invest the appropriate amount of time or effort in learning the relative benefits of one language or framework vs. another prior to embarking "whole hog" on a rewrite. What were the real reasons for attempting to migrate to Ruby? Did you do small test cases based on your real-world problems with the existing application?
* You seemed to succumb to "fad" thinking here, where you (as you noted) really needed to approach the project objectively. I'd be very careful exposing this kind of thinking if you're looking for funding or clients.
* You went with your old stand-by.
I agree completely with the idea that you don't make radical technology changes unless you consider how you're going to support your business applications. Most expense in a project like this will come from support. Retraining is also not just limited to the language - now the entire team needs to pick up your complete rewrite and learn how to maintain your approach.
* Personally, I don't much care for the way a LOT of ASP, PHP, etc. script-type applications are written, because they turn into "spaghetti code, presentation/business logic in the same code block" nightmares. It's cool that you went back and tore things apart to get away from this kind of mess. Playing with another technology or design approach will always make you stronger, and sometimes it will make you appreciate your existing tool set. If you haven't already, definitely play with JSF or some of the more recent .NET stuff. A lot of good ideas in there you might want to work into your PHP apps. I really, really dig the way data binding works in repeatable items, tables, etc. Validator separation from input components is another really fun one.

Very fascinating and informative. Thanks for the write-up.

Although the site design is not the best, cdbaby.com has an enviable position in the Alexa index (8,355). Not far from the rubyonrails.org ranking (7,820). What do you think right now?

Don't quit your day job.

I believe it's chef's choice. Some believe in and use Rails, while others evangelize PHP. To me, Boolean algebra is Boolean algebra no matter how you express it. Of course, I'm a LISP bigot. I enjoy both Rails and PHP, as well as Python and Java. All the best, :D

Mike, you should definitely check out CodeIgniter, which solves a lot of bootstrapping and MVC-related structure.

So basically what you're saying is that you bought into the hype rather than making an educated decision, and picked the wrong tool for the job? Is that your failing or the frameworks'?

> Is that your failing or the frameworks'?
Mine, as I said throughout. This blog post is only here because some people asked why I didn't use Rails after all. My reason is, "Because I realized any language will do just fine, and the one we already have is most convenient for us." That's all. I said clearly in the post that I love Rails and my decision to switch (both directions) was just personal preference, and in the end: compatibility.

If you love Rails, why don't you try using CakePHP? It dominates and it's worthwhile. You can write beautiful PHP code in seconds.

About point 5: Rails is a framework, PHP is a language - don't compare them (oh my God, this is a newbie comparison). About point 7: I could just as well say Ruby sucks because you suck. "Give yourself some credit." Please spend more time thinking about how to explain and compare in the right way.

Next step in your evolution as a developer: the realization that PHP is antiquated and actually a pretty crappy way of developing web applications.

Agree 100%.

This is an interesting perspective but seems to lack substance. Your seven reasons would be much better if you used actual code examples instead of girlfriends. And you know, I think a better title for this blog piece might be "Several Reasons Why My Large Web Project Bogged Down".

This was a well-thought-out decision. Now you can run on IIS6 or IIS7 and reap the full benefits of using Windows Server as your platform. You may want to look into Silverlight to give your PHP sites some life. --- Jake

"cdbaby.com is comprised of over 100k lines of sourcecode?... terrible... you certainly are one wasteful, beat-around-the-bush lengthy programmer, as the americans usually are." U silly americans. haw haw haw (snooty french laugh)

You should have tried X framework for the Y pet language, which gives you the shortcuts and structure of Rails combined with much more freedom. Yeah, I'm already hearing "Y's dead" shouts...
But I don't care as long as it gets the job done :) And with X, web dev has never been more fun and productive.

Derek, have you considered doing the rewrite in Java? The EJB frameworks would have helped take care of the object-relational mappings. I doubt there is a true rival in the LAMP world on that front. As far as MVC goes, you could have used JavaServer Faces (JSF) technology to separate the UI from the business logic. All this technology is standard Java, so you would be able to find maintainers for it for years to come.

waaahh.. So many rail-babies whining. Seems to me Ruby is an elegant little gem of a thing.. Rails felt creepy, hiding too much.. I didn't stay on very long. I don't like shooters-on-rails, either, in the gamer world.. Some of us like some freedom, and this guy just didn't realize what he'd be missing.. so what? You all sound like the metaphorical g/f he might have left after 2y.. stops ye bitchin'

you need a UX guy

Derek's diatribe is not about code but about thought process. He prefers an imperative thought process to a declarative one. His Rails project failed because he could not get past himself or his level of incompetent convention. This is so common.

This was so full of logical fallacies that I thought it was intended as humor. Sadly, I was wrong.

Thank god, at least you could complete your site. It would be nice if you could better explain your use cases - where you got stuck with Rails and Ruby - so we can understand what you really meant.

I need to say this... those rails-hype-emo-kids are pushing people away from (excellent) Rails. The guy who wants to rewrite every PHP site in Rails makes me laugh a lot.

I followed a similar route to yours. Learned a lot from Ruby on Rails (especially the MVC structure) and am now applying it to my PHP projects at Ning.com.

You should check out Phalanger ( ). ;)

Thanks for bringing this up now. You've saved me a lot of future headaches. I wrote an application in PHP that I was going to totally redo in Rails.
I actually bought an excellent book, 'Agile Web Development with Rails', so I could learn this stuff. Even though some things look super easy and are super easy and quick with rails, for serious operability and true agility, I don't see rails fitting this bill. An early warning came when I flipped to the last chapters about deploying your site. (php think: 'just upload, right?') Here is an actual quote from the book: "In contrast with how easy it is to create a Rails application, deploying that application can be tricky, irritating, and sometimes downright infuriating." And then I tried to put a simple little thing online and gave up. PHP, please forgive me. I never realized how kind to me you really are. And when I upload your directory, it doesn't come with...(I just started an ftp app to see how big the do-nothing-so-far rails folder is, and after waiting a full minute for the properties I switched back here to write this)...(sorry, it's still loading.) OK, it's done: 5.31MB, 490 files, 173 folders. This includes a large documents directory that is generated apparently for every single app you make, unless I can figure out how to stop that from happening. I still want to try Ruby, but the sermon about how fast and easy things can become has taken about 2 months of occasional reading to get nowhere. My guess is this: once you learn all about it, it probably is quick and easy to make a new app. But I'm afraid the ease will never have time to come for me. ...and the document directory is 4.9 megs of that total.

So I never should have switched from tango to PHP?

Hi Derek! Great article! Curiously, I just took a look at CDBaby yesterday and was impressed by its sleekness in both design and function. And it's FAST! Which is one of the things I liked most about PHP. I'm not a SQL person, really, and mainly find ActiveRecord a boon.

I have done some Rails and some PHP, now using Monorail, and I don't think I could make a web app without MVC now.
I'd like to see more details about what you did, because MVC in PHP sounds like a good combination of pragmatism and structural programming. Chris

PS. You should probably move the Add Comment box to the end of the comments.

Great insight. I'm still interested in Rails, but for experimenting with, not actually developing with.

Every language has its own pros and cons. For people that are familiar with rails, they might think it's total crap. Sometimes a too-automated process makes people more dumb. Tell you frankly, i dunno why i juz can't understand ASP .NET. But im doing pretty well in PHP. I wrote my system in OOP using PHP4. It works juz fine and its also compatible with PHP5.

Nice entry. As it happens I wanted to start a rails rewrite of our PHP code a few months back, but the PHP code was written for a rails-esque PHP framework called CakePHP. I was just constantly running up against the limitations of CakePHP and got sick of it. We decided in the end to hack around the things missing from Cake (specifically I really missed my precious precious has_many :through) and bear with it for now. A proper rails rewrite has been in the works, but hasn't begun yet. Now I question whether or not it's really feasible. All of our new projects, naturally, are written in rails with a rails mindset and those deploy beautifully. Thanks for a good post!

Wow. I can't believe the flames here. People are flaming this guy for being incompetent. Maybe he is, I don't know him. Why the religious devotion to rails? RoR is wonderful to develop in. Fast and easy. But (as the devs are happy to admit) it is not for everything. Rewriting anything that is in PHP in RoR is kinda pointless, because things like "beauty" aren't that important. You can get the job done just as well in PHP, even if you think the core libraries are fugly. (they are) Some apps don't lend themselves to Rails. Large DB-centric apps where most business logic is in stored procedures are a good example.
Or apps where most backend code is in C/C++. But if you are making a standard CRUD-style web app you can't beat rails. But people, please realize that it's just a language and a framework. You can do everything you do in RoR in perl. Or lisp. Or python. Or PHP. You just won't get the code generators and the structure of the framework for free.

Wow, funny comment thread. While I agree it would have been nice to hear some specifics, I can certainly understand the first blush of love and hype leading to a poor decision. For the guy who said the poster is thinking in a non-declarative way, perhaps you should read up on what SQL actually is? And for the people who want to bash him for "dreaming in tables", get a life. He didn't say "I don't understand ORM modelling" or "Please halp me with teh active records!". He said HE LIKES SQL. Get over it. Lots of us do. Writing reports in Rails is a flat-out nightmare compared to just writing manual SQL. Same is true for CakePHP in fact. But the author sure expected all the Rails kids to come proselytize with their Holier-Than-Thou, we-are-the-only-ones-to-have-it-figured-out, your-methodologies-suck, your-thought-processes-are-wrong pack of sheep. By the way, I've used Rails for years and have successfully developed many big projects in it. And I still prefer PHP. And yes, I understand Ruby quite well. And yes, you'll still flame me for not "seeing the light". Just like you did this author. But that's because you're narrowminded and can't stand to have someone question your position of self-anointed superiority.

I posted a comment, then read the other posts... My site also looks like crap and is only a front page with links to several Google Album pages. I did that because it works for now, was easy and does what I want it to.

Forgive me if I'm wrong, but I believe what you did is post your own personal experiences. You listened to the RoR hype and thought it would be super easy. For you, it was not.
PHP worked better for you with your knowledge and training. I don't think you said Rails should be banned from use, just that it wasn't as quick and easy as you were led to believe. With my experience (VERY limited), I tried PHP, learned enough to make sloppy code, made a workable site with 5+ hits some days, (0) on others..., decided to try rails and found it not as easy. So, I'm reworking some PHP code on the site I made (not the one in my sig!), making some changes, //commenting more, and having all kinds of fun with array($loops) that actually function, even though I wrote them! I really do want to try and learn Rails, but I can see that it will take considerable effort until I can make something functional. It's not just click-> type-> functional_site. PHP is nice. Rails is nice. Java is nice for some people. BASIC is what I learned in 1980. That is just my opinion, of course.

I am tired of having someone express their opinion on a subject and get slapped down by the usual group-think arguments. It is as if you believe that a technology has to be better because it is newer. It is this type of thinking that left me with a bad taste in my mouth. Looking back, however, I think it was a good thing because it forced me to go back to school and get a real degree.

2 year rewrite? Even a 2 month rewrite seems a little long. Your website isn't that advanced really... Do it in Domino and you could have done it in a month.

Your post sounds like a rant about how you tried something new and it didn't work out, so now it sucks. No, it doesn't suck, it just didn't work for you. In the end it sounds like bad judgement to hire one programmer for 2 years to redevelop such a large complex site. Lesson learned, I'm sure.

It took you 90000 lines to write that? Were 89000 of them comments?

I bet just the act of rewriting it was the major reason for your success.
The best way to do a project is to start from scratch, get something that sort of works but has some problems, and then throw it out and start over. This time you will do an incredibly much better job. You will design out all the annoying things that got in the way the first time. It doesn't even matter that much if you do the rewrite in a different language, as long as you actually know both of them to begin with. Prof. Sussman at MIT says that you might as well plan to throw the first implementation away, because you're eventually going to end up throwing it away anyway, whether you're willing to admit to it or not. People ask what languages like Scheme are for, and I say that Scheme is the language that you use to write the implementation that you're going to throw away. Why wrestle with oddball frameworks like MFC or Swing or whatever at the beginning of your project, when you really just want a blank slate to flesh out your ideas? And you don't even have to throw it away, you can save it for prototyping and experiments.

First impression of your site was "wtf? is this the right site? Looks like he let his domain expire and someone snatched it!" Then I clicked around. Love the content and the organization and the free full-length music samples. Dammit, you need to hire a designer to make your site look more appealing.

Glad to hear I was not the only one. I took a few weeks off from my current project to consider a Ruby rewrite. It was a wonderful few weeks that ended with my deciding a rewrite at this time wouldn't be economical. I returned to my old scripts to discover that in the weeks I was studying Ruby (and also rails) I had picked up many new techniques which are by-and-large not exclusive to Ruby. I'm now writing the best code of my life (albeit, not in Ruby). Maybe next project!

Ruby is a full-blown OO language. PHP has objects bolted on. That's enough. Everything in Ruby is an object and that makes it a delight and phenomenally powerful.
I have to program in PHP and other languages for pragmatic commercial reasons, but given freedom of choice there's only one worthwhile choice. Of course I could write everything in assembly; I can just do it better, faster, and more accurately and elegantly in Ruby than in any other language. It's ultimately that elegance that PHP lacks, because when it comes to OO programming PHP is fundamentally flawed.

Couldn't have said it better myself. Came to the exact same conclusions.

Typo: "I loved it's..." --> "I loved its..."

Well, I was engaged in the comments until I read that the author 'couldn't' supply a few concrete examples. Nothing to see here, just digg-bait.

If anything this article illustrates that it is not the platform that matters; it's what's built upon it that counts. Most platforms have the capacity to scale well. Just like clean code and logic, it's your or your team's output that counts...

I like turtles

Thanks for your post. Just sad it took you 2 years to give up and come back to PHP!

Even if it took me quite a while, it is worth reading the comments. I have particularly loved those three classes of people:

- The 'Oh My God, Rails is under attack: let's reply fast' ones: I guess legally people still have the right to post a bad experience with Rails
- The 'I can handle 10000 queries/sec in lisp. What about you?' ones: no comment
- And finally the 'Your web site is ugly so you are a poor developer (100000 lines of code)?' ones: no, web development is not only about producing beautiful HTML/flash pages. Sometimes, you have to run a business underneath.

Finally, my advice:

1. Use the language you feel comfortable with. Of course you can do almost anything with any language, but you will always be more efficient with the one you love. It reminds me of the eternal war between linux and windows: there can be windows administrators that do a good job and run a domain properly.

2. Don't forget that behind a framework there is a language, and sooner or later you'll have to deal with it! Because you always have specific issues, you need to do more than what the framework provides (generic by definition): don't feel bad about it, it is normal! At that point, if you don't feel comfortable with, for instance, Ruby (compared to PHP), you'll face difficulties! (Note to developers: it is also a good time to contribute to the community and enhance the functionality of the framework.)

3. Follow not only the evolution of your programming language but also the evolution of other languages, because you always learn from difference. Obviously, Derek learned a lot from his Rails experience (OO, MVC, SOC with REST...). Most good practices and design patterns aren't bound to a specific language and can be used everywhere.

euphrate_ylb

I took a look at the CDBaby site. I can't imagine how this project could've taken more than say three months to port over from PHP to any given language/framework. I also find it hard to believe that it took 100,000 lines of code to develop that site originally. I've worked on sites orders of magnitude more complex than CDBaby, both in RoR and other technologies. Heck, I've worked on stand-alone MFC applications that were way more complex than this. None of my projects took anywhere near two years to complete. The original writeup of CDBaby must've been an extreme hack job. If I heard that a developer took 100,000 lines of code to develop that site, I'd seriously question their competency. I'd be expecting you to code your own webserver in addition to serving up that site for 100,000 lines of code. Hell, I've written complete device drivers in about 100,000 lines of code.
Language does not really matter unless you're pushing the envelope on what programming - any programming - can do. There are certain major differences in orientation for languages that do make one better than another for any given task. But generally, most languages today provide the core functionality to do most any common programming task. The problem comes when there is poor system design because of poor system development methodology. Derek sounds like a guy who has never studied system design. Most managers who dabble in programming are like that. They can generate something that works and that they understand - but it won't be maintainable, it won't scale, it won't be secure - and it won't work at some point. And most of the time they can't understand why it takes five times as long to do a proper system design than a "code it now" project - or why it will cost them five times LESS to do it that way over the long run. Bottom line of this piece: management failure. And that's the bottom line of most project failures. It's why in IT, as Woody Allen summed it up, "Nothing works and nobody cares." "Your website looks like a spam site." Warning: OT. Not the most constructive comment in the world, but he has a point. This may just be me, but your site would literally get a half-second glance before I hit the back button. Simplicity is beauty, but beware similarities with overused styles as is commonplace with camped domains. I like Ruby a lot! Rails, on the other hand, is totally overrated. Thanks for the advice. I was really thinking of scrapping my projects written in PHP and trying to rewrite in in Ruby, but you have changed my view. I know my code is horribly unorganized and inefficient, so changing the language isn't going to fix my problem. I should probably just learn OOP in and out and rewrite using a language I'm familiar with. Good article, and thanks again. 
I knew of one company that got bit by the "this is better than PHP, we don't know why, let's switch" bug. They had a product that was written in php, and a "genius" developer from someplace in the north-west, obviously bored with his job, decided to get involved and convinced them to rewrite it all in another language. While the flying circus rewrite (a clue about what they switched to) happened, they forgot all about "marketing" and didn't understand open source, so that is why you have never heard about them today... As to the php-sucks crowd: when ruby can be as easily integrated into the web server as php is, then maybe it will be useful. Having to fight off the Mongrels to get it to work sucks (as does the WTF ruby syntax).

Derek, I reached a similar conclusion. We started a new project with Rails but were completely disappointed. First, Rails is very inefficient and slow compared to languages like Java that utilize JIT and other advanced techniques, while Ruby processes bytecodes using 60s technology. Second, we do optimizations with the database and our code focused on performance: fast queries and calculations. Rails was not helpful there. And finally, for me, a Java professional, Ruby was often counter-intuitive. Same as you, I used what I learned in Rails and took that knowledge to Java. We have a beautiful system that combines Struts and Velocity and mimics Rails' MVC structure. It is elegant, VERY FAST and easily scalable.

Shame on you for serving XHTML 1.1 as text/html. You should serve CD Baby's valid XHTML 1.1 as application/xhtml+xml.

I never really understood why sending a hash full of SELECT options in Rails was superior to writing the SQL you're thinking of to begin with. Then again, i never really picked up the "write some other language in Ruby!" fetish. In the case of schemas, etc. i can see where a Domain Specific Language leads to database independence, but writing Rails queries always seemed needlessly elliptical.
I always thought the best way to maintain db independence was to stick to standard SQL and write wrapper functions for queries that require engine-specific optimization. I also have aching problems with annoyances like the limitations of "static" in php5 and the organization of the PEAR libraries (PEAR::isError() anyone?). "Functional" programming (not quite, read: function pointers) is also possible in PHP, but hardly graceful. I tend to avoid it. The whole thing's a pity. Also, most of the el-cheapo web hosts still run php4. I'm getting excited about PHP6 (ETA: when?), but i'm sure by the time that comes out about half the hosts i run into will actually default to php5 ;-S I can solidly attest to having improved my PHP via a Rails tryst. Unfortunately, given the limitations of PHP, the result often comes off something like a Java app minus the purity, static typing, etc. It works.

I really think that #7 is not very fortunate in the comparison with girlfriends. It might be a funny joke for guys, but I really think the text would be much better without it.

GO BACK TO C!

PHP for life!

It seems to me that a lot of PHP people don't get MVC and OOP, and I've strived to get those I work with to improve their skills. It's nice that Ruby does this, but it inhibits you (like with SQL, as you pointed out) and it still has yet to scale. There really is nothing in Ruby that other languages with a framework can't do better, and they do do better as they scale, whereas Ruby still doesn't.

@Miriam I thought the 'girlfriends' comment was quite enlightened. People go chasing off after new partners only to discover the new one has the same 'flaws' as the old one. Some people do this again and again, without realising these supposed flaws are in fact entirely their own. Many of the polarized comments here, including some unwarrantedly harsh criticism of Derek's site (Rails didn't work for you, therefore your site must suck, etc), prove the comparison is valid.
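[Editor's sketch] The "wrapper functions for queries that require engine-specific optimization" idea in the comment above can be illustrated in a few lines. This is a hypothetical sketch in Ruby (the thread covers both languages); all class, table, and method names are invented for illustration, not taken from any real codebase:

```ruby
# Sketch: keep portable queries as plain standard SQL, and route only the
# queries that need engine-specific tuning through a per-engine branch.
class QueryLibrary
  def initialize(engine)
    @engine = engine  # :mysql or :postgres (hypothetical labels)
  end

  # Portable query: identical on every engine, so it's just standard SQL.
  def albums_by_artist_sql
    "SELECT id, title FROM albums WHERE artist_id = ? ORDER BY title"
  end

  # Engine-specific query: the row-limiting syntax differs, so the wrapper
  # hides the difference from the rest of the application.
  def recent_sales_sql(limit)
    n = limit.to_i
    case @engine
    when :mysql
      "SELECT * FROM sales ORDER BY sold_at DESC LIMIT #{n}"
    when :postgres
      "SELECT * FROM sales ORDER BY sold_at DESC FETCH FIRST #{n} ROWS ONLY"
    end
  end
end
```

Calling code then asks `QueryLibrary.new(:mysql).recent_sales_sql(5)` and never embeds engine-specific syntax itself, which is the whole of the portability argument being made.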
I tend to agree that you need to be close to the SQL, always. Anything that takes me away from my queries, joins, sorts etc is completely missing the point of any sophisticated and efficient application. The database is central, the language is ancillary. PHP just does it. Reliably, easily. Designing your own MVC framework is no big deal.

Don't blame the language for not knowing how to use it. I'm 100% sure you would have spent the same 2 years in vain if you tried switching to asp or jee. And can they do more than PHP? YES. But it all depends on what you are trying to do. And how well you know how to use the tool...

Third time's a charm.

Excellent post Derek. The most important things in running a successful Internet company are focus and taking the right opportunity. PHP is an excellent, mature solution to take that opportunity and solve your problems. While you and many others, including some of the biggest ones, are making money using perfectly fine working solutions, let theorists and fanboys pursue their goals of theoretical correctness. Looking at the life cycles of web sites and products, they are the silly ones as opportunities pass.

Couldn't have said it better myself. It's a shame that people can't simply see programming languages as tools, instead of dogmatic religions where 'my syntax smites your syntax'. I guess people invest time into learning a language and they get attached. I can understand that, but it doesn't mean they have to attack every language they have yet to master.

Amazing how many replies miss the point entirely. A musician bought into a framework due to hype and *unlike many of the folks still mired on, say, the Java trend's rocky coast* realized that the ugly, simplistic, buggy, inelegant language the code was initially written in was just fine. Applied some lessons from the things RoR gets right, re-launches, experiences success. Writes op-ed piece about what worked for him, why.
Fanboys flood the piece, write about how he should have let the patient bleed out, learned how to use their favorite tourniquet, site is ugly, etc. etc. May or may not have read the post. May or may not have ever launched a significant Rails site (at least one has, others simply snipe without substance). Bottom line: the end result counts much, much more than the tool. Nobody gives a shit about what kind of brushes Michelangelo used.

I think this is a shitty article made only to promote your "new" site. Sorry.

I absolutely agree with the writer. RoR is already a dying project, as it's too limited, as it's designed to solve already-trivial problems. Nontrivial ones will cause an enormous burden for the dev, as you have to write all kinds of hacks.

"all logic is coming from the models, one per database table" Nice to see this being acknowledged. Any engineer knows it's true, but it rarely gets said. Get the data model right, and everything else pretty much drops into place. Get it wrong, and the project is hosed from the start.

Now all you need is a designer. Your web site is u.g.l.y.

It's all about design. Rails has Design - a plan and pattern for how things should work - and your original PHP code didn't. No shame there - nobody's code does at first, and code that grows organically as you grow your business is the worst of the lot. Rails failed for you because its Design didn't match your Design (or lack thereof). But you made the conceptual leap that many others don't, and now the importance of Design is yours - again, congratulations. The language really doesn't matter all that much. It's the Design.

I am actually quite pissed at those who evangelize Rails. I am a recent EE grad who started off doing a website in RoR - and then when reading online I came across many people saying that Rails' backend does not scale well.
Then I go and ask questions on Ruby forums about whether this is a problem with Rails. I was already about a couple of months into coding seriously, and I was loving Ruby and I was loving Rails; I didn't want to let go, it was like a nice dream. What ultimately pissed me off is that many Rails folks would say this: "Why bother about scalability right now? Only a very small fraction of websites become so popular that you have to start worrying about scalability. What makes you think your website is going to become so popular? Cross the bridge when you come to it." I was totally pissed off with this kind of widespread mentality. Then, I came across an interview with the person in charge of Twitter, saying what a pain in the butt it was for them to scale Ruby. That was the last straw. I was pained that I had to go back to PHP after experiencing such a beautiful language and framework, but like someone points out here, RoR is like a beautiful girl that is a one-night stand and PHP is like an ugly wife who will stick with you in the worst of times. I prefer the latter.

Well, Derek, maybe this discussion will at least boost traffic to your site. It got me there and I like it; simple and fast, focused on content and what I want to know. And soon there will be an Internet legend that it's written with 100K (well, why not say an even million) lines of code, ... more power to you!

Hi, Derek! As was noticed elsewhere here, it looks like you had not to rewrite, but to recreate the thing. Give away the current database, as well. And everything else, perhaps. You start from scratch with RR, and you are guided further by its limitations. You pick your limitations first; you get nowhere with those of the framework. But I can confirm the experience you are posting here: approaching a loose language like PHP with cleaner constraints/aims in mind, you get a surprisingly different product from the same language.
Actually, I have had quite satisfactory OOP implementations with the same PHP. It is good that you found the perfect recipe for your situation and desires, in the end. Also, there is a point in sharing such a dramatic real-life project story: though not directly appealing to the Ruby lover, it certainly can be of use to some PHP code maintainer. Good luck with cd babies!

Gotta love the "I could have done it faster in any language" comments. I stopped saying that a decade ago, because you never understand the complexity of software until you really delve into it. It seems the point that so many people are missing is that you aren't bashing rails so much as stating why you aren't using it at cdbaby. Rails is still young and it will have its growing pains, but it has done a good job of getting a LOT of web guys thinking about things as developers instead of as templaters. I don't see why people think so much in terms of Perl > PHP > Ruby > insert language here. Get to know the languages and figure out which one is right for any given job in terms of performance, scalability, ease of development, and maintainability with your current work force. Sometimes Ruby is a great answer. Sometimes you are better off using something else. Regardless, the concepts that RoR has brought to light are beneficial to everybody.

There was an element of this article that few seem to have commented on, which is that the author hired a good programmer (good enough to head over to 37 Signals afterwards) to write his code. So all these comments about Derek Sivers' programming competence are irrelevant: he hired someone with the language skills. I'd like to hear Jeremy Kemper's perspective on the project too. I think that, if we were to have comments from both Sivers and Kemper, this whole story could be an excellent case study in how projects can fail to work out even with good intentions. A lot of comments read along the lines of "100,000 lines of code for your crappy site? You must be joking".
I suspect that Sivers would agree; after all, the point was that his rewrite in PHP had nearly an order of magnitude less code than the original. How about ~10,000 lines, is that OK? Finally, in response to the howls of indignation from the RoR purists: I've worked with people who have strong opinions about methodologies and "using the right tools for the job". I have, indirectly, been a customer of these people. In other words, I've had to pay for them to do work for me (due to complications I won't explain here, I didn't have a choice in how my project would be implemented or who would do it). And, despite their obvious intelligence, they can be a disaster on a project. There are some programmers who spend more time preaching about "the right way of doing things" than they do writing functional code. Not only that, they spend time belittling all those around them who lack their brilliant skills at abstraction. Eventually it all gets revealed as Emperor's New Clothes and these people get sacked. As the rands in repose blog points out, the real geniuses tend to be normal and easy to get along with. I now try and avoid working with over-opinionated programming evangelists. I've seen projects and teams derailed. No hire.

Kind of wrong to compare a framework and a language, no? If you were skilled enough in ruby, you could have done everything too. Without rails. Integration is not about the language, it's about the structure.

"Programming languages are like girlfriends" - when you write, your imagined audience is a bunch of straight men, isn't it?

Or you could use .NET

I think you didn't fully consider the implications of the technology (rails) you chose vs. the architecture you had in mind. I wrote about it in my blog.

"Or you could use .NET" The .NET framework suffers from many of the same things that the Ruby framework suffers from. Both suck for data-heavy transactional environments.
The object model of languages like .NET is going to be a pig if you are going to follow the multi-tier model. Although it is possible to write data-centric systems with .NET, the developers have to be willing to throw away their religious use of design patterns and best practice in OOP, which would fit application or game programming but bog down data-centric applications. They should resort to POOP (procedural object oriented programming) instead, where objects are used to represent entities with a state, and structs or temporary tables are used to represent stateless entities or large matrices of data. The objects in the system maintain state and pass messages, while most of the data bypasses the object layer and passes from the database to the interface. I would also argue that most of the business logic should be in the database in the form of stored procedures, triggers and business rules stored as values in tables. Having the business logic in the data layer allows you to leverage the power of triggers, which depend on the state of data in the database, and makes your product virtually language agnostic, which means you can rewrite the interface and application layer (if it's required to exist) without losing any of your business rules. I've noticed that a lot of off-the-shelf corporate financial software is written that way, as it gives not only the advantage of not being tied to one platform/language but can be extensible and customizable for the client without requiring any code changes.

yami: "Programming languages are like girlfriends" - when you write, your imagined audience is a bunch of straight men, isn't it?

uh oh, someone wasn't p.c. enough for yami.

I think the reason most commenters are so upset is because this post is now very high-profile and it's claiming that rails wasn't up to the task of creating an enterprise site like cdbaby.
which seeds doubt into the tech community, and that means their jobs (or in some cases, their passion) could be thwarted because one programmer didn't use rails for what it was meant for. i'm a full-time rails programmer and i know that i'd get a well-placed email from my boss, who had to put up with the rails-from-php transition last year, holding mostly onto his faith in me and my team. but transition is a money question and i still stand by rails. what i think the community needs is a little clarification. it certainly baffles me that 2 top-notch programmers couldn't bang out a cd retailing site in a year's time. it must've been a specific issue. derek, if you'd give us some specifics, perhaps it will help to improve the framework. ps. let's not give php too much credit. no matter how much mvc you give php, it's still a bitch to develop in. given you thought ruby was beautiful and rails a great teacher, did you consider any other ruby-based, rails-inspired micro web frameworks?

"PHP is Ugly." Incorrect. Inexperienced and undisciplined coders make ugly, regardless of language. Solid coders with discipline can even make Ada beautiful.

If you've got to hire someone to do things "the right way" for a language then it's not an effective tool at all. If you must adjust who you are and what you do to fit the tool, the tool is deficient. If you hire an expert and he *still* can't make it "good enough" for the language/framework, then the language/framework is supported simply by believers, not logical operators.

I just have to laugh at all the zealots here. Your love of a particular language or framework blinds you to the reality of Derek's experience. His experience is clearly based on a real-world scenario, and two years is certainly long enough for him to say that he has had a reasonable amount of time with it. To you rails-heads: if it works for you, wonderful!
But you're as bad as Microsoft if you think that your framework/language pair is good enough for everyone and everything. PHP, JS and an AJAX methodology are framework enough for me personally - me and about 200 of my closest tech-friends talk about it at my forum, in depth, all day long. We do not have enough Ruby people there so, interestingly, I took the Ruby and RoR boards down last night. I'd LOVE to have some of you zealots come tell us old guys why it rocks. If you do come, please PM me there and I'll put the boards back up. shame the author didn't read Chad Fowler's post on re-writes. Could have saved him 10 months at least, as it was written in 2006 ;) I looked at your site; seems like a couple months' work in asp.net. Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius, and a lot of courage, to move in the opposite direction. ~ Albert Einstein Why not use something that doesn't require so many lines of code? Thank you for sharing your observations. There is much value to be had from trying new things. Actually... Seems to me that one of the lessons here has less to do with choice of language and more to do with "framework" vs. "framework-less". You cannot really go "framework-less" until you know the ropes: MVC, templating, object-relational mapping, etc. These are exactly the things that frameworks teach -- they guide/force you into best practices. But once you understand how to, for instance, write a bit of code-generation script or implement the "singleton" pattern, you don't necessarily need the framework to do that stuff for you (and often in its own "opinionated" way that may not, in fact, be best suited to the task at hand). There is MUCH to be learned from Ruby on Rails, Django, Catalyst, etc. (not to mention Camping, Web.py, etc. for a lighter-weight approach), but the fact is the ideas implemented there just aren't that hard to recreate in PHP.
And PHP has two huge advantages: large install base and drop-dead easy deployment. PHP is by no means my favorite language, but for simple, straightforward framework code, give me PHP5 + Smarty (or XSLT) any day. Peter Keane I wonder whether ruby + mongrel + some templating system would have done what you needed (sounds like Merb). I couldn't give up the OO freedom that Ruby provides, but I could lose some of the Rails syntactic sugar in exchange for speed and explicit code. Sometimes Rails magic makes it hard to understand why some example works but my attempt in my own application doesn't, and tracking through convoluted dynamic methods in the Rails source doesn't help. And then sometimes I discover Ruby is interpreting some ambiguity in my code differently to what I meant. Ruby/Rails is not the Holy Grail, but it has opened a trapdoor into a whole new way of doing things that wasn't just a logical extension of what we were all using before. I dig what you say about #7 - I would write C# very differently now, and I would recreate some of the Ruby magic in C# (I know it's possible, but ugly, but who cares when it's in a library function). I dream in queries. I think in tables. well said! :) Ironically, I had already heard of cdbaby.com because I met a musician who uses it to sell his CDs. One of the great things about web programming is being able to find just the right tool (or combination of tools) for the job. When all you have is a hammer, everything looks like a nail. Maybe this was just a case of picking the wrong language/framework for the project. It also could have just been the discontinuity of the project, since "many setbacks were because of tech emergencies that pulled our attention to other internal projects." Besides, anytime you start a project over you have a much better idea of how to organize the code. A little extra planning in the beginning might have prevented this rewrite.
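One of the comments above notes that once you know the ropes, patterns like "singleton" are easy enough to recreate without a framework. As a hedged illustration (in Ruby, since both languages are under discussion in this thread), the Ruby standard library already ships this pattern; the `AppConfig` class and its settings below are made up for the example, not taken from any code mentioned in the post:

```ruby
require "singleton"

# A hypothetical app-wide configuration object. Including the stdlib
# Singleton module makes `new` private; the single shared instance is
# reached through AppConfig.instance.
class AppConfig
  include Singleton

  attr_accessor :settings

  def initialize
    @settings = { "site_name" => "example-store" }
  end
end

a = AppConfig.instance
b = AppConfig.instance
puts a.equal?(b)               # both names refer to the one shared object
puts a.settings["site_name"]
```

The point from the comment stands either way: the pattern is a few lines in any language, so a full framework is not a prerequisite for using it.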
this guy is just trying to drive traffic to his site with this story I wish you luck with your PHP... I tried abortively to move from PHP to Rails 2 or 3 times but always ran into the same issue: if you have any legacy anything, don't even think about rails. Unless you are starting from scratch, give up now; you have made assumptions in your DB that rails will not like, and you won't be able to work around. That being said, I've found a much nicer framework for "legacy" systems. The python framework Django (while really designed for "from scratch" systems) is much more flexible when it comes to existing database schemas. I have ported 3 PHP apps to Django with only minor alterations to the DB; Django doesn't insist on doing *everything* for you like Rails does (although it can). I also like python as a language better than Ruby, but that is probably just because I've got more experience with it. I'm sure they are both capable languages; I just appreciate that with Django I'm not boxed into a corner 100% of the time when a design decision *makes more sense* than the way the Rails guys envision a "perfect" app. Are you kidding, Aaron? Jeremy Kemper (bitsweat) is one of the top contributors to Rails and has been a core contributor for a long time. Get your facts straight. Application architecture, design patterns, best practices, etc. are not often discussed in the PHP world, but they probably ought to be. I would HIGHLY recommend the recently published "PHP in Action" (Manning) as an indispensable guide to enterprise-grade PHP web development. Object-oriented design, testing & refactoring, MVC, data class design, etc. are all in there and it is (like most Manning titles) a very well-written book. It simply seems that you've been a professional in PHP for too long a time. There is no way to change your programming language for you. Well, what about Ruby instead of Rails? ;) PHP done in MVC is great for RAD and performance.
I'm not a big fan of programming platforms that sit on top of a bloated virtual machine (Java, Ruby, .NET, etc...). good for you Derek. Glad you found what you needed. Hard to believe how defensive 90% of the comments are. Wow, lots of newbs and lamers on here defending one language or another, or their favorite framework. Let me add to the noise level... In my experience the problems with programming these days are the people who consider themselves "programmers" when they are anything but! Sorry, but those of you who do not understand at least one directly compilable language, and at least a small amount of how assembly works and what you are actually doing with the hardware when you write code, you people are NOT programmers! If all you know is PHP, ASP, or Ruby, you are a scripter. You write macros for a runtime scripting language; you do NOT program! What you are doing is writing a glorified batch file (or shell script for us *nix people). Programming implies that you have at least SOME understanding of what's going on under the hood (or at least it SHOULD imply this). I have taken part in way too many conversations with morons who think they are "programmers" but try to convince me why they do not need to understand things like assembly and hardware processes. I have literally been told by these college grads (that everyone seems to have such a high opinion of) that "I am a software programmer, why do I need to know about hardware? That's someone else's job". Of course these are the same ASP-loving morons who cannot tell you what the memory footprint of their app is going to be, how large of a server it will require, and about how many simultaneous users it can handle before it chokes to death or has massive locking issues. Sorry, but if you do not know what is going on under the hood and what your code is actually making the computer do, then you are NOT a REAL PROGRAMMER!! Get over yourselves! You can make some cute crap in HTML.
That's nice, but you have about as much talent as the average script kiddie at that point and have a looooong way to go before you deserve the title "programmer". I honestly find it personally insulting that some of you call yourselves programmers; it's a disgraceful insult to those of us who DO know what we are doing! I feel bad for those of us who know what we are doing, because we always end up getting lost in the ever expanding sea of morons being pumped out by colleges and programming classes. I know what it is like to fight to win a bid on a project, only to lose to some firm full of microsoftie ASP-writing newbs. And then to check back on that web site once it's up and see the absolutely appalling design they paid for. Or to find out the client ended up wasting thousands of dollars on a project that never got finished! These microsoftie morons are giving us real programmers a bad name. Microsoft is directly responsible for lowering the standards across the entire computing field! They got end users used to having crap software that crashes all the time, so now business managers just expect to get crap code for their dollar. M$ watered down computing as we know it! Oh, and to Scott who said "if you blindly-hate Microsoft like most other people on O'Reilly", ummm no, there is nothing blind about it. M$ is a horrible company that is directly responsible for holding back the progress of computing for their own financial gain! They buy up competitors and close the doors so that better software packages don't come along and destroy their monopoly. They use more vendor lock-in than anybody. They have made gobs of money on the backs of the IT people, forcing poorly designed OSes onto us and expecting us to maintain them even though we are not given the proper tools to do so. Smart IT people don't hate M$ for the hell of it, and no, it's not jealousy either. I have no problems with big business or capitalism.
What I have a problem with is when a company has lied, cheated, and stolen their way to the top, making billions they do not deserve and making my life worse in the process! You microsofties only like M$ for the following reasons: 1) their bloated ways of doing things mean more jobs are required to handle the bloated mess that IT has become, opening up more entry level jobs for you M$-loving morons, 2) they lowered the standard that people expected of computers and software, so those of you who cannot make a program work with a damn feel right at home with M$, 3) they try to put training wheels on everything so you can point-and-click your way through things that really ought to take some critical thinking and manual processes. Automating stupidity... that's really all M$ is good at... So, to recap, most of you are morons, and that's why you are such big fan boys of one language or another: because you haven't bothered to go out and learn lots of languages like us real programmers. And if you have never actually compiled something into an executable then you are not even programming, you are scripting. I would say 10% of the people posting here are REAL programmers; the rest of you are script kiddies who think you know more than you do. If you don't know what an accumulator or a register is, you are NOT a programmer. If you use division and check for a remainder as a method to check the even/odd status of an int, instead of just quickly checking the least significant bit, you are a scripter and NOT a programmer. If the last two sentences I typed make NO sense to you, and you are scratching your head wondering what the hell I just said, you are NOT a programmer! Talk about Nerd Rage. Down, boys! with all your babbling about how Derek sucks and his PHP sucks...whatever. CD Baby makes more money in one day than your basement dwelling butts do in six months. I don't use either ruby or php, although i have tried them both out.
But your list has to be the DUMBEST list ever!!! 1. Can Rails do anything that PHP can't? No? Can PHP do anything rails can't? No? Well, what a great reason. 2. We are already using PHP. Integrity. The first statement is true, but to say that ruby doesn't have integrity? You are really coming off as an asshole. Why doesn't Ruby on Rails have integrity? If so, when is it unreliable? 3. You don't want what you don't need. Well, does it hurt you/your company to have the useless capabilities? No? Well, your argument is as useless as those extra capabilities are to you. 4. It's small and fast. Your retort is similar to point number 1. With the exception being: how simple is it to make either fast? Is ruby easier to make faster, or is it PHP? 5. Built to your tastes. Valid, but only valid because you have very little in the gumption department. Gumption being, if you don't know, the desire to learn, and learning as a desire to make things better. Such a trait would lead you to try new things even if they don't make things better than before. 6. SQL. Valid. Possibly your only valid point. If i wanted to be a dick i could say "Since Ruby is an OOP language you could just make a class or a few methods to run all your queries." But i will let you have this one. 7. Stupid Analogy. That is the end of my retort. Thank you, i feel dumber for reading this stupid ass list. You could have gone into actual detail about the two languages and said something like "Ruby is a bastard child of C/perl/python/whatever." and i would have been more inclined to believe you than this asinine attempt to impinge on people. I switched back to PCP after 2 years on Crack. I don't know the author nor his work, but purely based on the article the only possible conclusion is that he is a very very bad manager. No need to get picky about one language or the other. At the end of the day it doesn't matter if it is ASP.NET or Perl. Really.
Anyone that has done Web development (AND systems development) for the last 15 years the way I did, we all know that a website like cdbaby.com is a 2-month job (at most) in ANY of the current web languages+frameworks. If you sit on the problem for almost 2 YEARS... sorry, you're a lousy manager. At the very very least, you would have given up after 4 months. That's a real example of a sunk cost gone wrong. Face it: there ARE real production delivered websites in RoR that are as good as anything else. Take your pick, be it Perl, be it PHP, whatever. No one should sit over the problem for 2 years. That's just stupidity. It doesn't prove that the framework is bad; it just proves that the people involved are utterly incompetent and clueless about technology. Good article and lots of interesting comments, if you can stand slogging through all the dreck. Hey people, he never said RoR sucks, and cdbaby.com is almost certainly a lot more complex than it looks at first blush. There are many lessons represented here, in this article and the comments that follow. Here are, IMHO, some of the more salient ones: 1) Experience is more important than the tools you use. (Or "Yes, you can write good code in PHP too.") 2) No framework is perfect for every application. 3) The potential for incremental improvement is a core strength of web-based systems. 4) Some people still think Java is OO. (Or "Never underestimate the power of marketing.") 5) Beauty is in the eye of the programmer. But that doesn't matter much to experienced programmers, because they know that working code is going to have some warts (no matter what language it is written in). BTW, on this beauty thing, Ruby really is beautiful... in a Miss America sort of way. But the Helen of Troy of programming languages is LISP. Don't bother arguing this. It is a fact. ;-) Be kind. I only shop at sites coded in assembly. Someone prolly already said this: you could have used Ruby for the rewrite and done your own custom thing.
I've done the PHP -> Rails thing and what I love about Rails is it gives me a good excuse to write a lot of Ruby. And Ruby, despite being a slow language (for the moment...) is really, really, really fun. Why did you choose PHP instead of Ruby for the rewrite? I feel the same way about asp.net and asp. There is nothing in asp.net that I can't do in asp. I very much identified with this story. From my vantage point, this story is not about the particular language, development tools, or technology used. It's about a small business owner taking control of his business, using technology he knows and is comfortable with. A few years ago I helped start a small web-based company. The owner of the company, a smart businesswoman but computer illiterate, handled the marketing, sales, etc., and I programmed the web site. (It was in PHP, but that's irrelevant to my point.) Then, due to an unfortunate disagreement with me on terms of my compensation, said owner of the company made the (of course IMHO) short-sighted decision that she would rather go it alone, to save money. She ended up spending in excess of $100,000 (far more than she ever paid me) to hire contractors to rewrite what I did in ASP.NET. She even had to fire one contractor for incompetence and replace them with yet another. Now the future viability of the company (which I am no longer connected with) is very much in doubt. The moral of both stories? A small business owner needs to either personally have a complete handle on the technology running the company, or have a trusted partner or vested employee who does. An owner or management team that is not comfortable or versed in the vital infrastructure of their own business is bound to fail. That type of essential knowledge cannot simply be "farmed out." 
The mistake that Derek made, which is very analogous to what my former associate did, was to abandon a workable and familiar technology in favor of a perceived superior -- but unfamiliar -- one, and at the same time bring in for implementation a new team that had no history with the company. Fortunately for Derek and his company, when faced with the failure of the new effort, he had the fortitude to change course, and still had his personal skills and experience to draw upon to turn things around. Tragically, my old friend had neither, and has likely spent her way towards bankruptcy as a result. Wow! What an article - it makes absolutely no sense. I thought it was actually a parody article. A couple of things that came across: 1/ Carpenter blaming his tools? 2/ Having had some time to look at CDBaby - I'm struggling to see what you were coding for 2 years. Either you're worthless as a coder, you haven't given us enough background information, or you're exaggerating. 3/ When doing a re-code - you might wanna think about re-coding the use cases. But a line-for-line re-code will get you nowhere. What's the point of recoding something line for line in a new language? Smells like the blind leading the blind on that project. I had this "php" developer working for me. Just couldn't get his head around RoR. Blamed everything under the sun. Why are certain parts of the PHP community so blind to external technology? So, basically, you are happy with your preferred language, which is both normal and better for your company. OK. Then why the heck did you try to do EVERYTHING in a language you do not MASTER? That is the question you should ask yourself. It looks like you never coded in OO style before? Basically all your arguments are the same: any OO code is nicer than crap PHP. I was expecting to read about the technical difficulties with Ruby. Everyone knows crap-PHP is crap to use/read/rewrite, and any framework will always be larger than a self-built one.
I think one of the things PHP can't do and Ruby CAN is: reading the request stream. Tapping into the traffic before it's entirely processed. I love this feature; it gives me total control of everything that gets sent/uploaded to my server. So yes, there are things Ruby can do and PHP can't. If I knew my little personal blog post was going to be read by any more than 2 people on earth, I could have said: "I thought my old language was ugly, and since I needed to do a rewrite anyway, decided to do it in a new fancy framework. But it's surprisingly hard fitting an existing app into a framework. So hard that after 2 years of trying, I looked at the old language I thought was ugly and realized the problem wasn't the language, but my previously poor skills, now improved and refined by 2 years of working with this framework. I ditched the framework and rewrote in my old language, which was much smarter for our company's needs because it so easily integrated with all of our existing code." My post was not meant to be about strengths or weaknesses in Rails or PHP in particular. It's unfortunate that is the aspect that drew all the unintentional traffic and comments. Quote from a Dave: The fact is that the list of things to do (nginx frontend dispatchers, etc.) to get the Ruby site to work aren't necessary for the PHP version of the site, most likely. Which makes it less complicated. Which in this case is a good thing. Other than that, the Rails zealots in here should collectively go home and be ashamed; you're presenting a seriously screwed up image of Rails here. The salient point is that you need a very good reason to rewrite a working codebase from scratch, regardless of the technologies. Read this piece from Joel Spolsky: Fire and Motion Sorry, Derek Sivers, but your article gives no facts about what was wrong with Rails. A one-man project and a team project are two different things.
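On the "reading the request stream" point above: whatever the PHP side of that claim, in Ruby web apps this is conventionally exposed through the Rack interface, where the raw request body arrives as an IO-like object in the environment hash. A minimal sketch, with a fake environment standing in for a real server (the app and the request body here are illustrative, not taken from any code discussed in the post):

```ruby
require "stringio"

# A bare-bones Rack-style application: it reads the raw, unparsed request
# body straight from env["rack.input"] before any framework touches it.
raw_echo_app = lambda do |env|
  body = env["rack.input"].read  # the untouched byte stream
  [200, { "Content-Type" => "text/plain" }, ["got #{body.bytesize} bytes: #{body}"]]
end

# Simulate a request without a real server by handing the app a fake env
# whose rack.input is just an in-memory IO.
fake_env = { "rack.input" => StringIO.new("artist=misc&cd=demo") }
status, _headers, chunks = raw_echo_app.call(fake_env)
puts status
puts chunks.join
```

The same hook is what Rails middleware ultimately sits on, which is why "tapping into the traffic before it's processed" is possible there.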
So far from these comments I've learned that: - Anybody who lets a website accumulate 100,000 lines of code over the course of several years is an incompetent programmer, even when they take what they have learned and rewrite it to a tenth of that size; - Anybody who lets a seemingly-straightforward project drag on two years before pulling the plug is an incompetent planner, even when they take what they have learned and accomplish everything they set out to do in two months; - Anybody who sets out to recreate a PHP web application in Rails and fails is an incompetent designer, even when they take what they have learned and write an MVC solution in PHP that's familiar, fits their problem domain and integrates with their existing infrastructure; - Anybody who finds out for themselves that porting an existing application to a new framework is a waste of their time is a moron, even when they take what they have learned and warn others that the grass is always greener on the other side. Who are you going to call incompetent when a project of your own gets mired in redesign, reveals itself to be idealistic and insufficiently planned, or turns out to be a square peg in a round hole? Who are you going to rail against when you post on your blog about learning from your mistakes and are piled on by 100 tiresome, pompous twits claiming you should have known it all along? A top notch Rails programmer (Jeremy Kemper) couldn't do what you did in two months. If that's true, the question is why did the project fail? Could you give some specifics? Based on your post here, it's obvious that you need to learn some basic stuff in software development, and need it badly. And your site is not Amazon or eBay. It's a little online store. I am curious what the REAL reasons for the failure of the re-write project are. Congrats, Derek. I know how good it feels to immensely simplify one's codebase, and how strong one's desire can be to share the achievement with others.
It's unfortunate that nearly everyone who commented here seems to have missed the whole point of the article, which was not to bash rails, or even to praise php. It seems to me that the point was that the grass isn't always greener on the other side of the hill, but that you might learn something valuable by walking over, checking it out, and coming back home. Thanks csuter! Exactly. It's a shame that people thought this was a "Rails versus PHP" article. It wasn't about the specifics. I should have left out the framework and language and just called them X and Y, and let people fill in the blanks from their own experience. I would want to hear more about the _technical_ details of the Ruby failures. I have to agree. Before I started learning Rails, my OOP skills sucked big time; now, after 2 years, my head just fixes completely on what is required: objects, methods, polys, the whole thing. Such an amazing teacher! I call super "fud" on this one. am glad you did tell the story with PHP and RoR explicitly named. I think the point you wanted to make was clear. Otherwise the article would have gone unnoticed. Great article! Hi Derek Could you write another post with some examples about how your PHP code improved after your "Ruby taint"? Cheers JD What Rails limitations in particular did you experience? All 7 reasons have mostly to do with yours, it seems. I don't believe this has anything to do with the language, or the framework, but the work ethic. Your first site needed a re-write because it was clearly massively overcomplicated. I mean, 90,000 lines for an e-commerce site. The rails build failed, most likely, because you spent too much money and time "in the rails internals". That's not what rails is built for. If you don't understand how to utilise the rails API to turn a rails app into a reasonable e-commerce site, then I think something is going quite seriously wrong. If load or SQL speed are your issues, then maybe rails really isn't for you.
This doesn't discount the language, or really prove much about the framework though. There are things which PHP simply cannot do but Ruby can. That's just the way the languages are. Clearly, these aren't the things you need, and in that regard, there is no argument over your decision. To claim that the capabilities are the same, however, is somewhat of a drastically inaccurate statement, if nothing else for the syntactical capabilities of the languages. If you seriously want to push that you had a seriously skilled software engineer on the team next to you, and you re-wrote the program in two months after only half completing it with more staff in two years - then there's a VERY serious problem, likely one of: false evangelism, misguided design, enforced poor technological choices, over-idealism, not enough _actual_ project work being done. All in all, there is a _clear_ lack of pragmatism evident. I really need some help to understand what the real problems were. Maybe it was rails; that would make me laugh. however, if rails is really that much of a problem - didn't you feel tempted to make a strong management decision after 1 year? first thing: rails is a framework; php a language... how can you compare two incomparable things? second thing: "what can rails do that php can't - nothing" -> Most modern languages can ALL do the same things!! Every computing engineer knows that; the only difference is the time you spend to code, the speed of execution, the portability/interoperability of the language and so on. In conclusion, since you don't explain to us what the main fact was that made you cancel your RoR project and "return" to php, this article means nothing! PS: I only code a little in ruby/RoR, but I could always do my own SQL requests... was your ruby developer really a pro? ActiveRecord doesn't force you to forget SQL! Hugo, French IT Engineer So what were the limitations you ran into with Rails?
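As an aside on the long rant earlier in this thread about "real programmers" checking the least significant bit instead of using division with a remainder for even/odd: for non-negative integers the two checks are exactly equivalent, so the difference is style, not competence. A quick sketch in Ruby:

```ruby
# Two equivalent parity checks for a non-negative integer.
def even_by_modulo?(n)
  n % 2 == 0        # "division and check for a remainder"
end

def even_by_bit?(n)
  (n & 1) == 0      # "checking the least significant bit"
end

(0..10).each do |n|
  # The two methods always agree; readability, not cleverness, decides
  # which one to use in practice.
  raise "mismatch at #{n}" unless even_by_modulo?(n) == even_by_bit?(n)
end
puts "parity checks agree for 0..10"
```

Any reasonable compiler or interpreter can treat `% 2` on an unsigned value as the same cheap operation, which is why the distinction drawn in that rant carries little practical weight.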
Brooks said "build one to throw away" and it's still true. al. Hi, I looked at cdbaby and I honestly can't believe that you worked for two years with rails to try and produce this website. Then switched back to php, and it took 2 months for the rewrite. You should have hired a ColdFusion programmer who uses an MVC framework such as Mach-II or Model-Glue and the whole thing would have been done in about one or two weeks... seriously. And if a ruby on rails programmer couldn't have produced this website in about two weeks, I don't think you had a top guru rails programmer. Nice article. I agree with the #1 reason. Until i discover another server-side scripting language that can do what PHP can't do, I'll stick with PHP. I believe none of them (server-side scripting languages) is better than any other. It is just a matter of how deep your practice is, and that is what makes the distinction. "I don’t need to adapt my ways to Rails. I tell PHP exactly what I want to do, the way I want to do it, and it doesn’t complain." Thank you! so people like you get to run society, while the poor slobs who would be fired/homeless if they wasted 2 years of time on some hare-brained BS scheme that was caused by a simple lack of education have to suffer the consequences. This is the exact conclusion being reached by most professionals out there who are working on worthwhile applications. Very well written. Thanks for posting. All the Ruby lovers that have difficulties thinking up advanced applications all by themselves are so pissed off by this. I think you summed it up wonderfully: If you really know how to program, PHP allows you so many advantages. Rails is much like the wheels on a training bike; the go-getters don't want to keep the training wheels forever. If you love programming and you are one who generates good code, then you can take full advantage of your freedom and write something terrific with PHP.
If you are just trying to get by and need a template then you might be better suited to just ride the "rails". Man, is there a lot of pent-up rage in the RoR community! Frankly, as a PHP and Ruby developer who has done a bit of Rails work as well, I don't feel the least bit "offended" or any less "enlightened" by any of Derek's observations. Certainly Rails developers must not be shocked to learn that there is more than one "opinion" when it comes to software development, or it would turn out to be more of a science than an art. Rails development moves quickly for things like mikey's weblog, sarah's guestbook, john's photo album, etc. The turnaround on these stellar applications is incredible. But if you've ever worked on a top 100 site, then you know about the "rails wall". This is the point at which you have to start modifying the entire project to make rails work. In the hands of a real programmer with a plan, PHP development time matches and surpasses rails on every level with mikey's weblog, sarah's guestbook, john's photo album. But the best thing is, when you are working on the top 100 site, there isn't any "PHP wall"; the project unfolds exactly as designed, you don't need 10x your current number of servers to run it, and it doesn't take 2 weeks to roll out. Plus you have more free time to have a social life and get laid... as a result of your bonus and lack of insanity. Hmmm, I seriously doubt the suggestion made that there are certain things that Rails can't achieve but Ruby can. The author provides no evidence to support this statement. I am no Ruby developer myself but know that languages are literally ways of expressing the same task to the underlying platform. I suggest the author provides no evidence because the reality was that he lacked the skills to adjust to developing under a different paradigm in Ruby.
I think this is highlighted by passages like: 'It’s the most beautiful PHP I’ve ever written, all wonderfully MVC and DRY, and I owe it all to Rails.' 'I love Ruby for making me really understand OOP.' 'Ok. All that being said, I’m looking forward to using Rails some day when I start a brand new project from scratch, with Rails in mind from the beginning.' Dear Derek, many many people have made the request that you provide more details about what went wrong with your re-write project. People want to learn from your experience. So far you have not answered. You don't have to publish your code, but I am sure you can talk about some of the problems in more detail. You cannot just make this kind of huge report without giving concrete stuff to back it up. Hand-waving is not good for discussion. Some people in the community are interested in what you have said (some are even puzzled), and want you to explain it. I wish you would do it to back yourself up. > ...provide more details about what went wrong with your re-write project.... you have not answered... talk about some of the problems in more details... You cannot just make this kind of huge report without giving concrete stuff to back it up. Nope. I'm done. Browser-search "Derek Sivers" in the comments, above, and I have explained many times why this post was not about Rails or PHP. There is no interesting lesson in my particular specifics, and the details would distract from my real point, which was about customization vs frameworks, integration vs overthrow, learning vs prejudice, and appreciating how you've grown. I have many things on my TO-DO list that I'm excited about and working on today. Taking hours to explain myself to a blindly angry mob is nowhere on that list. This was just a stupid blog post never intended to be read by more than a few people who had asked me why I went back to PHP for CD Baby. I've caught enough shit for it and spent enough of my time on it, answering the questions here.
- Derek

P.S. Jamie Flournoy's blog post has more wisdom than mine.

I would like to further suggest that the author's reasoning is misguided (or lacking). Although we have to assume parts of his old PHP system could not have been refactored rather than re-written (the article provides no reasoning), his choice in changing development language for a new platform seems poor. His existing codebase and development experience is in PHP, which provides advantages both in his knowledge base and with integration (the author highlights this advantage himself in conclusion #2). These are strong points of reasoning which shouldn't be traded for programming syntax.

What a bunch of whiny assholes. Thanks for your story, and don't listen to these wankers.

I wonder if it would have taken so much time had you rewritten your website using just plain Ruby... Great write-up.

Foul language is unprofessional.

Good for you, bro. Stick to your guns. Don't let the 'my language is better than yours' pissing-match nerds get in your head. Use what you want.

My lord, 2 months after 2 years of anguish. Remarkable. Thank you Derek.

Rails or PHP, they are just tools. Thank you for sharing your experience.

Great post, thanks for sharing your experience. Congrats on rolling out a successful, efficient application. Sorry that disrespect and immaturity are so prevalent in these comments.

Dear Derek, It really is unfortunate that your post became a front in the PHP/Rails religious war. Clearly 90% of those who have replied here could gain a little wisdom from your experience if only they'd open their eyes and sharpen their reading comprehension skills. I'd wager that most of us who understand what you are saying do only because we've had similar experiences. Peace, -j

Dear Angry Nerds, You seem to come in three categories. All of you I-could-write-your-site-in-two-weeks-during-bathroom-breaks wannabes are just revealing your own inexperience.
All of you mighty defenders of RoR's honor should look up the word "pragmatic". What are you taking so personally anyway? All of you "yeah, rails does suck" jokers have missed the point just as completely. Try reading the article again without the notion that it was written by someone standing up for your personal language prejudice and you may learn something. Peace, -j

Ok, I have never used Ruby and never want to. It may be more efficient when it comes to "quick buildouts" but, being a programmer for 15 years, I can't let a framework "do everything" for me. I need to know the guts in and out and know what to expect and how to debug when something horrible happens. Yes, I know that "if" I learned Ruby, I would have a better understanding of how to troubleshoot it, but why? Yes, it may be 100,000 lines of source code in PHP and only 10,000 in Rails, but remember that "less code means more overhead". ASP.NET was the first to adopt this bloated behavior and Rails does similar. So when you think you are doing "so great" because you have less code than a PHP page, you forget that the Rails engine carries that overhead, which I have noted several times in this thread "is slow but they are working on it". So, pick your poison and do it well. Learn the ins and outs of your preferred language and share your positive experiences with others, as they all have their own negatives.

Funny, just tried to post and got this: fetch(/title/oreillynet/htdocs/blogs/ruby/templates_c/%%00^00C^00C7805A%%mt%3A52.php) [function.fetch]: failed to open stream: No such file or directory

I have had a remarkably similar experience. I was hired for a job as a PHP programmer, and we decided to go the "rails" route because we thought it would be a speed gain. After about nine months I mentioned that we could rewrite the entire thing in PHP and have it 'done' in a short amount of time, but that is a hard sell after that much development.
Needless to say, I left the company, who are continuing on with the Rails development. In the end, no framework is really flexible enough to build a complex website, not a 'really' complex site that needs to do a lot of stuff, and Rails just has too much overhead in comparison to PHP.

I run about 25 PHP sites (some friends, some mine, some paid), some of which are under constant load, and I have a pretty basic webserver that handles it with 95% process free all the time. Running 25 Rails sites on the same server would bring it down. (I have banned Rails from my dedicated server because it causes too much processor chew in comparison to the other sites running PHP or Perl, and I welcome any mod_python sites, even Django-built ones, as they have the same footprint as PHP.)

Like you, after a year of Rails and doing things the 'railsy' way, my PHP code is beautiful now. Though not fully MVC, I am using a hybrid with PHP-PDO which is still one controller per table but allows for some amazing flexibility. I have to force myself not to use logic in templates, so I am using my own template engine that I wrote (find it on my blog at blog.peoplesdns.com), and here at work they are using Smarty, which I have come to admire somewhat, as it still really keeps code out of HTML.

All in all, I think Rails is a great introductory framework for the web, and is kind of like Visual Basic for the web: it really gives you a foundation knowledge of a well-thought-out idea. PHP is more like coding in C (and I have ported a lot of C to PHP) and gives you the raw power. With a good foundation, the power of doing whatever you fancy is amplified. I, like you, am glad to wash my hands of coding in Rails, but I am thankful to it for some of the re-training.

Couldn't agree more. Every flavor of programming trend that has come (and gone) in the last 10 years always fails to do what the fundamental language itself has done. For the life of me, I still do not understand why ANYONE uses ASP for their websites.
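The "keep logic out of templates" approach a commenter described above can be sketched in a few lines of PHP. This is a minimal illustration, not that commenter's actual engine; the render() helper and the placeholder syntax are hypothetical. The controller computes every value, and the template only substitutes (escaped) placeholders, so it can never branch or loop.

```php
<?php
// Hypothetical helper (illustrative, not from any framework mentioned here):
// substitute {placeholder} tokens in a template string with escaped values.
// Keeping substitution this dumb is what keeps logic out of the HTML.
function render(string $template, array $vars): string {
    $out = $template;
    foreach ($vars as $key => $value) {
        $out = str_replace('{' . $key . '}', htmlspecialchars((string) $value), $out);
    }
    return $out;
}

// "Controller" side: all decisions happen here, in plain PHP...
$albums = 3;
$html = render(
    '<p>{artist} has {count} album(s) for sale.</p>',
    ['artist' => 'Some Artist', 'count' => $albums]
);

// ...and the template itself never branches or loops.
echo $html, "\n";
```

Because values are escaped at substitution time, a template author cannot accidentally emit raw user input; anything fancier (loops, conditionals) stays in the controller.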
Rock on, and thank you for everything!

Tim stated:
>I think one of the things PHP can't do and Ruby CAN is:
>reading the requeststream. Tapping into the traffic before it's entirely
>processed.
>I love this feature, it gives me total control of everything that gets
>send/uploaded to my server.

This is seriously false and misguided, and is actually something stolen from PHP and then gimped up to be made 'simpler'. It is one of the big flaws in Rails, IMO. Look up output buffering in PHP (here is a link). Now, in Rails, try to find out which output handlers your server supports, then select an appropriate one, have nested callback functions, and return the value from one function in the middle. In Rails you would have to hack around in the core and jump through hoops all day... in PHP, you can do that in a few lines (not counting the callback functions, which are arbitrary classes or functions). Most programmers really like to know what is going on under the hood and to have the ability to tweak the valves. A lot of you Rails guys are driving your Volkswagen, have only read the owner's manual halfway, and tell everyone you are driving a Porsche.

@joeldq uhm... how do you think the output stream has anything to do with the raw request? Next time you try to call someone out on an error, you ought to take the time to understand what they wrote first.

Really, why don't ya all just pipe down on the rails vs. php BS?

dude - in 2 years u coulda picked up java and implemented u'r app in a truely rich language. i feel 4 u though.....

Maybe joeldq was confused, but the person he was replying to -was- in error. See the documentation on php://input, which does give access to raw POST data.

It is all a matter of preferences here, but you cannot compare PHP (a language) to an MVC framework. Compare Zend Framework or CakePHP to Rails, but clearly you are comparing apples and oranges. Here are some comments on your points:

1) You talk about the DRY principle. But then why are you writing your own framework when there are at least 3 frameworks in PHP, to my mind, that you could have reused (Zend Framework being a big one, CakePHP or PHPTrack)? So you are already violating DRY by writing your own. Reuse what has already been done... Don't reinvent the wheel.

2) #2 - OUR ENTIRE COMPANY'S STUFF WAS IN PHP: DON'T UNDERESTIMATE INTEGRATION. You talk about retraining and infrastructure? Well, this is valid for any other language and definitely does not apply to Rails only... Completely inaccurate argument. As soon as you change any piece of technology you will have to retrain people, so you are again describing something that applies not just to Rails but to the industry in general.

3) #3 - DON'T WANT WHAT I DON'T NEED. You don't use 90% of Rails? If you use models, helpers, controllers, routing, validation, plugins, migrations, unit testing, functional testing, etc., then you may already be using 30 to 50% of what Rails provides. So does your framework provide something clean for all of these? Do you also use an ORM you wrote from scratch? You may not see the point of them, but they help you refactor, and that applies to Rails or any other PHP framework out there, so it does sound like you still have quite a bit of refactoring to be done. Finally, you are never forced to use everything in a framework. I am quite sure you are not using everything PHP provides either, and no more than 30% as well... And again, that would be comparing a language to an MVC framework. Do the same with another PHP framework.

#5 - IT'S BUILT TO MY TASTES. Right, PHP does not complain, and neither does Rails if you turn off all the messages you don't want to see. Also, "built to my own taste" really sounds to me like many more possible bugs and more work to maintain, as it is barely used by anyone outside... So when you migrate to PHP 6 and figure out how much it will cost you to maintain it on your own, then maybe you will see why frameworks are such a big deal for others.

Then, if you had 90,000 lines of code before and went down to 12,000, it must have been really poorly designed before, and that makes me somewhat wonder about your past experience and design skills at this point, as well as explaining the lack of appropriate judgement in comparing PHP to Rails.

@Bzzzzzzt well.. ob functions are used to process the 'request' before sending to the client.. (good for second-level templating as well). I think you are talking about the simple 'raw' request, which would be $_GET and $_POST in PHP or the request params in Ruby, which would not be a contention issue, as the webserver itself is responsible for passing those to whatever language you choose.

As for the post, I have a whole lot of experience in both languages (admittedly more in PHP, but PHP has been around a lot longer), and honestly, I think Rails is great for a lot of people... but for any hardcore project you simply cannot use it, because of the framework itself. And people are throwing a lot of hardware at it trying to rescue their development, but I think in the end (given my experiences and those of 'most' I know) it is not going to be anywhere near PHP, let alone Perl or Python.

Never underestimate the value of throwing out your old code and starting over (and two years' additional thinking about your problem/program needs ;^) Ken

I had a similar experience. My original implementation for my site was using IOWA. Then I ran into some limitations of IOWA and decided to switch to Rails. After working on it for six months(!) I realized I was spending more time fighting Rails than I should have been. SQL is a great language for manipulating data. Yes, keep all your SQL in "model" files, but use the language that is natural for the problem, not yet another layer. Anyhow, I started a rewrite in Scheme (Chicken Scheme) and I'm 80% done and it is going fast.
I think Rails just didn't jibe with the way my brain worked, but I loved the principles that I learned: DRY, MVC, etc. I have a few excellent Ruby on Rails books for sale... Just my $0.02.

"PROGRAMMING LANGUAGES ARE LIKE GIRLFRIENDS: THE NEW ONE IS BETTER BECAUSE *YOU* ARE BETTER" This is an awesome piece of wisdom. I have been using the Symfony framework to develop a project, and I think it's amazing what I can get with it. Symfony is nothing more than Ruby on Rails for PHP. It has all of the advantages you quoted from what you have learned with Rails, plus all the benefits of PHP.

Excellent post. Though I'd say the bigger point is that developers should code in whatever tickles their fancy. There are benefits to every language and every development model. Work with what you know.

@joeldq
> well.. ob functions are used to process the 'request' before
> sending to the client..

And that's where you are very confused. The request isn't sent to the client. The request is sent *from* the client. The "response" is sent to the client. You are talking about the whole other side of the transaction. The content in $_POST ($_REQUEST, etc.) is available *after* PHP processes the request. The person to whom you were responding was saying that having access to the stream *before* it is processed is what PHP won't let you do. As was pointed out, php://input actually does give you access to the unprocessed POST data (but, to be fair, there have been lots of bugs reported with that mechanism over time).

There are a whole lot of "web developers" out there who may have years of experience with language X, Y, or Z, but still haven't familiarized themselves with the HTTP-related RFCs and don't know how to use the terminology correctly. I'm sorry to point out that you are, apparently, one of them.

I'm not a Rails programmer. I know some Ruby. I generally use PHP for web work, though I still sometimes write CGIs in Perl (which I have used since '95 or so). I'm not a fan of Cake, Symfony, or any other framework, but I've had plenty of exposure to several of them. I'm language-agnostic, so I don't have an agenda when I point out that you are terribly wrong and that Rails is, in fact, being used very successfully for many large sites with lots of traffic. Please get your facts straight and stop spreading misinformation. You are doing a disservice to the community and to yourself.

I think you need to provide more concrete examples as to why Rails did not allow you to make the switch completely. "after various setbacks, we were less than halfway done.*" Various setbacks?? What setbacks?? Please be more precise. It doesn't give credit to PHP or Rails when you don't even explain what went wrong, what was impossible to do.

> I think you need to provide more concrete examples as to why Rails did not allowed you to make the switch completely

I think you need to come paint my house. Both would take about the same amount of time and effort.

@Bzzzzzzt thanks for your reply.. perhaps you are still using an old version of PHP? This is probably not the place for this discussion, but for uploads, try using the file hooks with apc_fetch, a la , or if you don't have APC compiled in or dl'd, perhaps stream functions if you want to convert something as it is uploaded with fgets — input/output. The $_POST value is available 'as' the request happens, not after. I could provide many more example URLs; virtually every "flash file uploader" uses a similar mechanism, because you don't want someone uploading a zip file to a movie site. Anyway.. PHP 4 is discontinued; PHP 5 has the hooks. ob functions are used in ajax uploaders (ob_flush to push a percentage), etc. I can go on.. output/input are virtually identical and can be used almost interchangeably. Upgrade?

@Bzzzzzzt
>I don't have an agenda when I point out that you are terribly wrong and
>Rails is, in fact, being used very successfully for many large sites with
>lots of traffic.

google?, yahoo?
name a big one? Rails is still a baby in terms of languages, and a lot of investors are wary of investing in a company that uses unproven technology. This may not be important to you, but to many this is a deciding factor.

>Please get your facts straight and stop spreading misinformation. You are
>doing a disservice to the community and to yourself.

Thanks! However, I feel the real disservice is Rails guys spreading around BS that people can rewrite every site in 10 minutes and suckering people into it (like the author of this article, who got suckered into the mass dementia that is Rails). My original point above is that a 'request' is a request, regardless of whether it is a POST/GET or whatever, and saying that PHP does not have stream-level access to it is misinformation. If Apache knows about it, PHP can know about it at the same time. Do you disagree with that?

I think Sivers is a bit odd. When he starts the big rewrite, he does it in a very public way. As late as May '07 he gives away 20 passes to RailsConf + hotel rooms. This is definitely not stealth-mode operation; for whatever reason he wants publicity, and he gets it. Frankly, I couldn't care less whether his site is powered with RoR or assembly language. So now RoR doesn't work out for him, and he again makes a very public statement that he has again found religion, and it is PHP. He states that he never intended more than a few people to read this post (let's see: posted on a blog. Yeah, that won't be read by very many people!). Now that he is challenged to give some specific examples, he says his to-do list is already too big. Odd, very odd.

Sounds like a poor design, and trying to force a framework to deal with this poor architecture rather than taking the opportunity to clean that up as well. You claim to be knowledgeable in SQL, but I look at cdbaby.com and see your comment about 95 tables, and my first thought is that you really need to study or take a class or something on proper database design.
There is no reason a site like that should require 95 tables. There are good discussions to be had about frameworks/PHP/MVC/Rails/etc. with regard to their advantages and disadvantages. But this isn't one of them. This has nothing to do with comparing languages/tools. This is just an example of someone blaming a tool when the real problem is a lack of skill and knowledge. Two years? A competent Rails or PHP dev could knock this out in around a month in either PHP or Rails. This is assuming they were constrained by a very poorly designed database.

#6 makes no sense. You can use raw SQL whenever you want, including in migrations. It's right there in the Rails docs:

class MakeJoinUnique < ActiveRecord::Migration
  def self.up
    execute "ALTER TABLE `pages_linked_pages` ADD UNIQUE `page_id_linked_page_id` (`page_id`,`linked_page_id`)"
  end

  def self.down
    execute "ALTER TABLE `pages_linked_pages` DROP INDEX `page_id_linked_page_id`"
  end
end

I was one of the folks that replied to your early job postings for Rockstars. We would have had this project done in 6 months, less the nightmares you have apparently encountered. You were nothing but rude and condescending in your replies to us. Your website is ugly and I doubt the code is any prettier, especially if you're back to PHP! One lesson you might learn is: get a real programming team next time you want to do a project. You are a moron and the jackass you hired is one also. Don't get on a blog and rant about Rails when you obviously have no idea what you're talking about. You deserve everything you got, pal. Quit acting like you and your "RockStar" are so brilliant. You are an idiot.

i've been developing rails apps for almost 2 years and am also very familiar with ASP, PHP and JSP.. RoR is way better IMO. With all the gems and plugins you can do pretty much anything, and you don't have to do any 'twisting'.. and if you don't like a gem, but there's a Java library you prefer, you can always use RJB and use that library instead. Ruby is alive under the framework, and Ruby is very powerful: truly object-oriented (primitive types, pfft), dynamic (build code on the fly and run it with eval); with yield you can pass a function parameters and code; and for a language that runs through an interpreter it is pretty damn fast (try some OpenGL code and you'll be surprised.. I know I was!).. not to mention that the interpreter still has much room for improvement. If you don't know the power of Ruby, it's time to step out of the dark ages!

Have seen the site just last time, and now with the rewrite it has more or less adopted the simplicity of Rails design. Whatever suits the project and the programmer should be the deciding factor in the language choice. You don't want to create/produce a site where you are always hacking a language just to make it work. Instead, use whatever it is you know (best) and start from there. That way you have spare time for reviewing the code rather than spending time finding solutions to problems you already know the answer to, having been on the other side. I rest my case.

I enjoyed the obscenities. they make you sound knowledgeable and most of all, COOL. ANTI. rock me, bro.

"I enjoyed the obscenities. they make you sound knowledgeable and most of all, COOL. ANTI. rock me, bro."

hehe, nice one Derek! guess the truth is a bit hard to swallow? keep your stupid comments to yourself and stop wasting space on the O'Reilly blogs! lots of PHP postings on craigslist-- here's an obscenity-- You are a DUMB ASS! only about 1 out of 500 programmers is worth a crap. 499 are full of crap. consider that next time you shop for RockStars!

The only thing this article actually proves is that you, my friend, are entirely incompetent.

@joeldg I've been using PHP 5 almost to the total exclusion of PHP 4 for about a year and a half. Look... the APC stuff is new. It isn't thread-safe. And it doesn't do what the original dude talking about the request stream was asking for.
Basically, by bringing it up, you've shown that you *still* don't know what you are talking about. Of course, coming from someone who says things like, "output/input are virtually identical and can be used almost interchangeably," this isn't really surprising.

> I can go on..

Please don't. You've already made your knowledge and skill level very apparent. You just don't really have the experience to know what you are talking about. That's fine. You seem like a motivated kid. You'll get it over time... but not by posting here.

>I don't have an agenda when I point out that you are terribly wrong and
>Rails is, in fact, being used very successfully for many large sites with
>lots of traffic.
> google?, yahoo? name a big one?

Odeo and Penny Arcade come to mind. A List Apart, I think. Someone here mentioned yellowpages.com. There are hundreds, really. Mostly they are fairly new, just like the technology. That's undoubtedly in part because the people running most large, well-established sites already understand the wisdom Derek was trying to impart here: don't rewrite in another language just because you think it's cool.

> however, I feel the real disservice is rails guys spreading
> around BS that people can rewrite every site in 10 minutes

While there are some foolish young geeks-in-training trying to spread that message, they aren't the majority. And certainly they aren't fooling anyone but some of their geek-in-training peers.

> My original point above is that a 'request' is a request,
> regardless of if it is a post/get whatever and saying that PHP
> does not have stream-level access to it is misinformation ..

More... uh... huh? Maybe you could explain how PHP provides a stream from a GET request? (No, no. Please don't actually try!) Sorry Joel, but you just don't understand the details of your own arguments. Which is why this whole little debate started between the two of us.
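For readers following the php://input sub-thread, here is a minimal, hedged sketch of the one point both sides actually agree on: PHP exposes the raw, unparsed POST body through the php://input stream. The helper names below are illustrative, not standard PHP functions, and the temp file exists only so the code can run outside a web request; in a real POST handler you would call read_raw_body() with no argument. Note the documented caveat that php://input is not available with multipart/form-data request bodies.

```php
<?php
// Illustrative helper: return the raw, unparsed request body.
// $source is overridable only so this can be demonstrated off-line;
// in a real handler the default 'php://input' is what you want.
function read_raw_body(string $source = 'php://input'): string {
    $body = file_get_contents($source);
    return ($body === false) ? '' : $body;
}

// Recover form fields from an x-www-form-urlencoded body without
// relying on PHP's automatic $_POST population.
function parse_raw_body(string $raw): array {
    parse_str($raw, $params);
    return $params;
}

// Off-line demonstration: a temp file stands in for the request stream.
$tmp = tempnam(sys_get_temp_dir(), 'body');
file_put_contents($tmp, 'artist=Some+Artist&albums=3');

$raw    = read_raw_body($tmp);   // the untouched bytes
$fields = parse_raw_body($raw);  // the same data, parsed

unlink($tmp);
```

Having the raw bytes matters when the body isn't form-encoded at all (XML, JSON, binary uploads), which is exactly the case where $_POST gives you nothing.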
I'm glad to see this post, if for nothing else, to promote that some of the great design features hailed by Ruby on Rails are possible (and have been for a long time) in other languages like PHP and MySQL. I just wrote a post about this on the SilverStripe PHP5 framework blog.

It surprises me (well, not really) that this turns into a big flame fest on both sides when the actual point of the post was something any competent programmer should know: in a case where more than one language will do what you need on a given project, the one you can be most productive and effective with is the best choice. Of course biases come into play... but ultimately it is the end result that matters, and if Derek ended up with something he can maintain and that does the job, then why fault him? This is his *business*, after all, not some school tech project meant to make him "look cool" :-/ Personally I love Rails (actually, I love *Ruby*, which makes working with Rails so nice...) but to each his own. Congrats, Derek, on the new site!

@Bzzzzzzt at this point, I am just enjoying watching what you fail to address in my posts and how you have managed to avoid every point I have made to continue on with the "you don't know... you just don't know...". The stream functions on all levels of access work together (perhaps not quite interchangeably.. more like working rather closely and interchanging data seamlessly.. think fgets/fputs). Anyway, I am glad you have your opinion, and clearly it is a strong one; stick by your guns. The APC functions were just the first link off a Google search, and thread issues aside, even using any other method you should have the hooks in PHP 5, and if you feel like it you can write your own pretty easily in PHP. Again, my point getting back to the original post with the guy spewing his nonsense about how PHP cannot do that.. and here you are trying to be alpha-geek and agreeing with him.. congrats, you have your own lonely corner back, just like in high school... Please, feel free to believe that PHP cannot do that.

As to your other points, Rails sites being "fairly new": the other question is, do you know how many Rails sites you can host on a machine with 2GB RAM vs. PHP sites on the same box? Look that gem up. Your last point is the best.. if you have ever watched a streaming Flash movie.. think about how your browser just might be contacting the other server without using a GET request?.. you are seriously funny ...

Hi! I just sent a post talking about the restrictions of Rails-like frameworks... RubyOnRails is actually Ruby made bondage-and-discipline; the same goes for CakePHP and PHP made bondage-and-discipline.

@joeldg
> if you have ever watched a streaming flash movie.. think about
> how your browser just might be contacting the other server
> without using a get request?..

The only explanation for this muddled rambling is that you are *still* confusing the request and the response. Although now we probably have to add that you don't have a clue about Flash and how progressive transfer works. (Hint: it isn't the browser which is the client.) And I'm sure we can't go anywhere near a discussion of RTMP, right? Sorry, but I'm not here to be your professor. Your coworkers can deal with your confusion. I'm done with it. Have a nice day.

It's funny when people say that PHP is ugly and Ruby is beautiful. Ruby is the partially aborted 2nd cousin of Perl and Smalltalk. Tell me, how could that be beautiful?

thanks for the info! great to know such is the case! but i must admit you seem wasteful in terms of LOC. i think your site (CDBaby.com) can be implemented in PHP in 80% less LOC than you claimed.

poor rails fans, hearing something bad about rails makes them become soo childish :)

I like Ruby on Rails, but certainly there are more PHP developers out there and PHP deployment is really simple. Finding a really good Ruby developer is very complicated.
Show your existing developers a PHP framework like Akelos (a direct port of Rails), and they'll also write opinionated "beautiful PHP code". With a big advantage: if you need to code custom functionality in a non-opinionated way, just drop in a custom.php with your hand-coded SQL and you're done! Not all developers like to write beautiful Ruby or Python; some feel more comfortable with C-like languages. BTW Derek, didn't you know about Akelos? You could have saved a lot of time reusing your views and models and then coded those bits you missed in your custom dispatchers! And it is one of the only PHP frameworks out there that is pretty well unit-tested.

hello stumptown. bravo cdbaby. i have a parallel situation. I built a few simplistic MVC apps with Ruby (Rails and not) and returned to PHP. The stuff I learned using various other schemes and practices is valuable, but in the end, the hype outweighed the practicality and usefulness for me. I applaud your strong stand amongst people who are sometimes sharp-tongued about the next big thing. If your code library is mostly in PHP, it makes sense, given the maturity of the language and rabid core developers, to stick with it. I burnt several years as well, but not just with RoR.

Your article is great and I think you are right. Rails is just a good framework, while PHP is a *big* technology. Maybe Ruby is a beautiful language, but when you want to build a website with Ruby, Rails is the only choice. In the PHP world you have a lot of choices: libraries, frameworks... Many of them are so much better than Rails. The PHP community is also so much larger than the Rails community. PHP is the best technology for the web.

I disagree with him too, but every Slashdotter flaming him is nevertheless a retard.

Both are my girlfriends. I love PHP since it is my money maker, and Rails.. since she is beautiful ... (has not generated money for me yet) ... I love both ... I love to see (CakePHP, Joomla) grow together with Rails ... without envy ;)

I read this when I was looking at whether I wanted to learn Ruby on Rails or PHP. I will certainly go with PHP. Just based this on the number of Pricks in the Rails community.

I used PHP for years and understood that it could have been better laid out *after* switching to Rails at the end of 2005. But I'll never use PHP again; it's just too long-winded. RoR - you can get it doing anything; it's just how you approach it. This story reminds me of the likes of people who look at Linux for a bit but give up because Notepad is called GEdit. is a RoR site.

Thanks Derek, for calling out that the emperor has no clothes. Rails does not work for projects which already have a legacy database schema. (Neither would Django or any kind of ORM.) That said, it's always useful to have a stab at working in a different framework, because it simply makes you aware of the shortcuts you can take.

Hi, it's a really good article, because it comes to the point that what matters is what you make, and you can do that in any appropriate language. I think I also had a CDBaby account sometime long ago.

Thanks for having the guts to post this, Derek. I think as all of us gain more experience in the constantly evolving web-development world, it becomes clearer that the 'best' language for any project is whatever you're most comfortable with, give or take a little bit of consideration for ease of maintenance, continuity with developers, etc. We're a 'technology agnostic' company, developing in PHP, Ruby, .NET and Java, and I can't see that changing any time soon. To each their own!

"I love SQL. I dream in queries. I think in tables."

>I read this when I was looking at whether I wanted to learn Ruby on Rails or PHP.
>I will certainly go with PHP.
>Just based this on the number of Pricks in the Rails community.
I have noticed this correlation as well, it also seems that the Mac crowd are all huddled around rails more so than any other and the Mac Zealots are worse than any other Zealots.. (not saying macs are bad, just that when some of these guys get preachy they REALLY get preachy) Say one bad thing about rails or bring up the price of a mac and hope to be wearing a flame-retardant suit because it will all be your fault and you are stupid becuase 'rails so can do that..' or 'macs are better, look at our commercials' glad someone else noticed at least I wish this was a much more REAL look at the technical differences. This could have been a Struts vs Spring, J2EE vs Struts, Symfony vs Django, Rails vs PHP etc etc. Just find/replace the tech name. I don't doubt, for you, this didn't work but, you even mention "many setbacks were because of tech emergencies that pulled our attention to other internal projects that were not the rewrite itself" Losing train of thought is a bigger setback to a project then learning curve at times. I would love to hear Jeremy's rebuttal to this article before I make a decision. In my almost 2yrs exposure and experience coding in Rails I am VERY VERY impressed with it. Impressed to the point that I firmly believe it's the technology you should START looking at as your solution and then evaluating if it should be REPLACED (in lieu of something better for your organization). I hardly ever look at all technologies FIRST and then whittle them away to my final pick. It's just not needed. ActiveRecord, Migrations, Fixtures, Testing built in the Ajax integration... says a LOT. Just my $0.02. reading through the comments here has been interesting my favorite is the rails guy who thinks you cannot stream a get request from php. saying that is what i do for a living i think i might have myself a new .sig file he should look up piping requests through php via htaccess, can rails can do that? 
you can even control the bandwidth usage per user or connection from php. the article really touched off some rails feelings. choose the tool that best suits you, and try not to become too attached to it, because something new is always around the corner.

Damn right. I'm going back to FORTRAN

Hey All, I always felt that RoR was great for CRUD sorts of websites (forums, forms and adding content), not for applications. Most larger websites are 'application-alike'; RoR is not meant to do that work. Ries van Twisk

@Bzzzzzzt
>Although, now we probably have to add that you don't have a clue about
>Flash and how progressive transfer works. (Hint: it isn't the browser which
>is the client.) And I'm sure we can't go anywhere near a discussion of
>RTMP, right?
RTMP is for Adobe Flash Media Server (a paid product), which has nothing to do with this discussion about PHP. Slinging around acronyms won't make you appear smarter, especially when you are using them out of context. try:
>Sorry, but I'm not here to be your professor. Your coworkers can deal with
>your confusion. I'm done with it.
Oh thank god, you are not to ever lecture anyone.. especially about this topic. And as well.. I 'am' done with it, as I don't have time to deal with what is clearly a kid who just wants to be a naysayer and trumpeter of his own ego..

I think this whole Ruby on Rails thing is over-hyped. Personally I use Python a lot. I looked at a number of different web frameworks and only liked Spyce; when I tried to use it, problems arose that could not be solved unless you understood the entire source behind it. In the end it took me a week to implement my own framework. A simple example: the final product can handle a database with SQL, and POST to handle forms, sessions, login forms, etc.

Lets see here....
I could: a) continue to bog my company down trying to shoehorn my existing data and functionality into a cool new framework, or b) go back to what I know and what made me a multi-millionaire entrepreneur running one of the most successful online businesses around. Be careful how you answer, because the decision you make will affect your future and that of your 100+ employees. Kudos Mr. Sivers!

A lot of media agencies in the UK understand exactly what you've been through, believe me. That's why no one in the industry is using Rails or Perl to develop websites.

[Huge Rails Fan] Okay. What you have said is true. Php can do anything RoR can. RoR does it more elegantly, but there you have it. My question (you can reply to keynan@bykeynan.com) is: what were you trying to get rails to do that gave you such trouble?

great article! Always choose the right language/tool or personal preference for your projects. I have never used ruby (on rails), nor will I try it. I'd rather use PHP, J2EE or ASP.NET. Many people claim that you can write better code with ruby (on rails), hence you should use it. well, you can write good code in php too; it's not the language (or framework) but the programmers themselves that decide the quality of the code.

You're stupid! But, good luck with your PHP. If you don't know Rails, you have nothing to say about this...

I like cdbaby.com. It's FAST, and I was there to search & buy music, not to learn the latest Web 2.0 features. I agree with #6: let the SQL database do what it does best, with clear SQL queries.

despite the flame wars, there have been some interesting things said here; over-all I enjoyed the discussion. I have learned quite a few languages over the years... including 6 different versions of Assembly; RISC, CISC microcode, etc. actually my first programming experience was binary punched in through the front panel. For awhile there, it seemed that the lifespan of a computer language was about 3 years. exhausting to say the least; I even took a whack at writing my own.
What I have found is that it takes about a year to become proficient at a computer language, and much longer than that to become really good at it. There is a huge gulf between theory and practice. What looks good in a CompSci classroom often does not work in the real world. People who haven't gone through the process of implementing a really large *real-world* project, and dealing with all the problems that come up, lack the experience needed to offer more than the most speculative of opinions about your project. And even with that background, most of the discussion about your project is limited to conjecture based on the small amount of information that we have been given. Isn't it great that there are so many experts here who have offered you their sage advice. ^-)

----------

As you discovered, it is good to learn different languages; you get different perspectives on how to approach a problem. And the knowledge that you take away helps you to be a better programmer in any language. The same is true culturally as well. You will probably enjoy reading this article: "Teach Yourself Programming in Ten Years"

Tackling a huge project for your first experience with a new computer language is not necessarily the best learning strategy, but that never stopped me from trying it. ^-)

I don't use C (or assembly) anymore because I got tired of reinventing the wheel every time I wanted to do anything at all. On the other hand, I avoid do-it-all frameworks because you get tons of bloat that you didn't want, and you are at the mercy of someone else's bugs. And every time that you add a layer it costs you performance. Sometimes the performance trade-off is worth it (computer cycles are cheaper than dev cycles), but often it is not; the program still has to deliver real-world response times, and computers still aren't free.
Then there is the question of flexibility, or the lack of it. A framework is optimized for doing the things that the architect conceptualized; if you want to do something different from that, the inherent assumptions of the framework fight against you. (consider: Vista, 9 gigs for an OS!!, slow as a dog, and buggy as heck, and late as h*ll, created by the biggest framework of them all. if frameworks are so great.... WTF?)

But for me the bigger issue is the agony of dealing with other people's bugs, and the more complex it is, the more bugs it is likely to have. I find that the less I rely on other people's code, the more likely my program is to work. If you think this is an arrogant attitude, then consider that a major product team at Microsoft started out using MFC and then abandoned it and rewrote their program without it, because they found that MFC was too buggy and too slow. (Meanwhile M$ continues to heavily promote MFC for use in their competitors' products. makes yah think, huh??) The thing about bugs is that if they are your bugs, then you have the opportunity to fix them, but if they are someone else's bugs you are stuck with trying to find a way around them. PHP has proven to be remarkably bug-free (oh sure, the change log is full of bug *fixes*, but as a practical matter, in the real world I have only ever hit one of them; the fundamentals are solid).

It's important that a computer language be translucent; the goal is to provide the means for translating your conceptualizations into an implementation that the computer can understand. Some tools can help with organizing your thoughts. But at the end of the day, it's all binary anyway; and most of this talk about "my language is better than your language" is just so much rubbish. There is also the always-overlooked consideration that people's brains are wired differently; left/right brain people approach things differently.
Some people are very numerically inclined; they love math equations and the terseness of C. At the other end are the more poetic/artistic/spatially inclined. The terseness of C drives them nuts, and they are much happier with the fluid prose style of something like VB6. Yes, I know, ~real programmers~ smirk/sneer at VB6; but it was a good enough language for gui apps, much more productive than using c++ for gui apps, and drove corporate America successfully for many years until M$ decided that they weren't making enough money from selling it and decided to abandon it. Visual Fred is a sop, to disguise what they really did. VB is dead, long live VB! and the emperor is wearing no clothes.

I like PHP, and these days I use it for all my web-based apps; I even use it for writing command line programs and special purpose servers. For a C-flavored language, PHP does a fairly good job of straddling the middle ground of the left/right brain approaches. I find it to be a very translucent language that does a good job of getting out of the way and enabling you to get the job done. But this is just my personal preference; other people make other choices. isn't it great that we have so many choices available. And I especially appreciate that open source programs such as PHP and Ruby are in it for the long haul, not subject to the corporate whims of some marketing department or the monopolistic requirements that prevented VB6 from being cross-platform.

I suspect that a lot of the criticisms being leveled at PHP are from people who are unfamiliar with it, especially with the redesigned object handling in version 5. It may not have all the highfalutin gizmos of CompSci dreamland, but its object model is plenty good enough to do the heavy lifting that is needed in the real world. I've seen plenty of ugly code; it is written in ~every~ computer language. We all start out writing ugly code, and then hopefully we get better at it and learn to write less ugly code.
But meanwhile, we hopefully managed to get something useful done. It sounds like you have gotten a lot of useful stuff done, far more than most people. Did you make a mistake? No! You simply enrolled for a couple of years at an expensive University, and came away with a lot more knowledge about how to accomplish your goals. I'd call that a huge success! Keep Learning, Keep Growing. I've been at this for 20+ years and I still spend about 30% of my time on study. I'm just a slow learner I guess... :-) -- Erik

Hahaha... I love point #7. I also tend to look back at my old code and keep thinking how much better I could redo my programs. I will ask whether I can write them better, or write them in a better language, though I may not be able to do it. Sometimes it is better to rely on a familiar language/tool that fits the problems, rather than fit the problems to another language/tool.

I couldn't agree with you more, but I think ruby & rails are cool. Maybe there is no good reason to rewrite a system with ruby & rails, but they probably are good and efficient tools to develop a prototype system rapidly.

trying to read through the comments, but the guy who is arguing about the fact that php cannot stream must be the rails guy who wrote that first post that was quoted. no php coder responds that way on any php site, so I think he is a ruby guy playing "php guy". otherwise, good constructive criticism on the article, and I hope to see more in-depth analysis later.

many people are considering rails, but few want to take the time. it is a shame on both sides, one for the community and one for ruby, to let mod_ruby go to pot and leave all the developers scrambling for something/anything that might work and let them compete with php.. first apache, then lighty (sp), then mongrel, then nginx (the russian one)..
at this point rails guys are looking like fools, switching web servers every other day trying to make it work. dedicate resources to mod_ruby and they will come; leave it as it is and no hosting provider with half a brain will touch it.

First of all I'm a database-centric person -- one reason being that you can use any number of reporting packages to produce the reports management wants. For those who plug Java as the way to go, it can fall down in a multi-platform environment when the hot-off-the-press packages required by your fancy Java app can require serious system upgrades on older OSes. PHP has the coverage that the Java approach does not, and I know of one ISV with a system firmly stuck in half-implementation.

Thanks for the positive remarks about Rails and how it made you a better PHP Developer! Pick up the best hammer you know how to use and build the house! For me, I've decided to learn Rails.

I think most of the flamers in this comments list obviously don't get it. Derek is running a successful company. He makes his decisions based on business needs, not some religious following of framework X or language Y. How many of you flamers actually run a business, a successful one at that? Maybe you don't like the site. Others seem to disagree, given its success. It also seems like you can't read. Derek clearly said he likes Rails, but it simply didn't work for him. Given that the developer he hired now works for 37signals, he surely must know his Rails.

ok, so you've got the code sorted, now how about spending some time on design? cdbaby.com is one of the blandest sites I've ever set eyes on...

While 'geek' has become a badge of pride recently, part of its definition is 'socially inept, underdeveloped', and this is what I see in more than half the comments here: immature rantings of a quasi-religious nature.
Robin gets it: here is this techno-leader who bought the latest hype but is driven by realities to stop, look, start over, and deliver something that works instead of a mess. In case no one else remembers model 1 JSP, EJB 1 and 2: there were a lot of rewrites of perfectly functional applications and sites that both took forever and only produced steaming piles of non-performing crap. Another way to look at what Derek is saying is that, at long last, we are approaching a convergence on software engineering principles and understanding that transcends language and implementation detail. That's the most important point, and most of the jejune, socially unskilled, narrowly educated technocrat responders to this post seem to have missed it entirely.

Thanks for sharing this experience. By the following quote: "twisting the deep inner guts of Rails to make it do things it was never intended to do" it's pretty clear that you chose the wrong tool for the job. Our shop is all PHP, so I'm no rails fanboy (tho I do enjoy it), but if you start with the wrong framework for the job, you're doomed. To me the question here is not really about PHP vs. Rails/Ruby, but about frameworks vs. not using a framework. A framework forces you to use its own conventions, which can be a good or a bad thing, depending on what problems you're trying to solve.

Your reasons (except 4 and 6) show that your planning was awful. There is no php or ruby. Just your awful planning. Reason four shows a lack of experience with ruby. All dynamic languages on the web, like perl, ruby, python or php, are in one class of performance. It means that if you get results that are too slow (two, three times or more, compared to the others), it's just your fault or lack of experience in that language. Sorry.

No silver bullet... Write one to throw away... This article is just confirming what we have known after 30 years of software development. Those who haven't learned it the hard way will do so in the future.
And one class per table is not good OO programming, good domain modelling, good system design or good database design. Frameworks (and anyone's initial architectures) are ultimately always doomed to run up against limits; the only question is how soon, and whether that is within the expected lifetime of the project.

I think you have to do what best fits your style and organization. I don't think any language is the end-all, be-all for a true "enterprise" solution. I think Ruby is a great language, but it is not a solution to every problem. In my opinion, many developers have found comfort in Ruby that they have not found in Java, PHP, Coldfusion, C, or Perl. I have favored Java over C just because I didn't really mature as a developer until I really learned Java. I may not really know C enough to appreciate it, or maybe I wasn't doing the right type of projects when I worked with C. Maybe if you were re-writing everything from scratch, including your data model and supporting applications, you would have found Ruby to better suit your needs; maybe not. People get way too wrapped up in the language wars and really do not understand that it is not about the "how" but the "is it done yet?".

Seems like most people miss the true point: Derek isn't a trained, experienced programmer. He's making it up as he goes along, and in the process, he's going to stumble around quite a bit. Having started with PHP, he will always prefer it until he grows enough to grow beyond it. In the meantime, he's learned some new patterns and some new disciplines, and has learned to use them in the language he's comfortable with. This is good. The truth of the matter is that Ruby on Rails isn't the advantage; Ruby on Rails as a *PATTERN* is the true advantage. As he pointed out, he learned new things from RoR (new patterns, new ways of doing things) and applied them to his existing girlfriend.
When Derek gets far enough along the growth curve, he'll realize that he's at a point where he is no longer working and thinking in "The PHP Way". At that point, *THEN* he will be in a position to switch to the language or framework that most naturally fits his new frame of mind. But not until then. Or to put it another way: My new girlfriend isn't better than my old girlfriend because I'm different. My new girlfriend is better because now I know enough about relationships, and have acquired enough wisdom, to find the girl that best fits my needs.

Maybe you should have used something like the Catalyst framework; it allows you to choose the method of DB access.

First to Derek: Good post, good points. The comments here pretty much show why I don't really like the Ruby community. And to the commenters who think they are able to estimate how long they would need to rewrite Derek's site in $their_favorite_language just by looking at the frontend: Please don't forget to leave your full name and location. Your future employers will surely be as impressed by your professional experience and attitude as I am. PS: Personally, I hate PHP. But I can see a valid argument when I see one. And since his points weren't about PHP vs. Rails/Ruby anyway...

You should ask yourself why you couldn't use Ruby, and if you are better off with "settling for" keeping PHP. Your reasons seem to center around PHP as an existing environment, and that's like saying "I'm going to use X because I'm already using X". PHP is great and Ruby is beautiful; either is suitable. If you have between 2 and 5 coders, a long release cycle, and all of your developers know every line of your code, PHP is great. But you have to ask yourself: By coding everything in PHP and not abstracting things like SQL, have you personally tied yourself to the code base when you should be working on more important things, like figuring out how you are going to meet the demands of your competitors?
If you decided to use PHP, did not abstract SQL, and have between 2 and 5 developers, you will surely struggle with security and internationalization, and your competitors' future offerings will easily pounce on your products. You may not realize this until you are already struggling.

"twisting the deep inner guts of Rails to make it do things it was never intended to do" One of which things would, I suppose, be "fitting an existing database schema". The sites I currently work on are, like yours seems to be, built around the database, which itself was built around the requirements for the project. Most of the heavy lifting is done in SQL; the scripting language really just iterates through returned queries and formats the results. It may not be buzzword-compliant, but it works well and leverages the particular strengths of the database and the chosen language to good effect. If I wanted to use a framework for these, I would have to throw away everything and redesign it all from the ground up, to fit the framework. Kind of like trying to change the oil in my car when the only tool I can find is a butter knife. No amount of "mad skillz" is going to make that tool work for the job I need to do. Ever. On the other hand I *could* put butter on toast with a 5/8" wrench, but it would be a lot easier to use the butter knife for that job, if only I hadn't busted it trying to change the oil in my Pinto. :)

I understand the SQL-oriented approach, and I have to say that it has worked very well for me on many high-throughput web sites. Too much of this OO style tries to hide the SQL with things like Hibernate and other SQL Dodo tools. To write a database-centric app and not understand the DB is simply silly, and will end up with inefficient code. Who cares if the front end is cool and OO if the DB runs like sludge because it was coded through some sort of persistence layer?
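The "SQL does the heavy lifting, the script just iterates and formats" style described above can be sketched like this, using PDO with an in-memory SQLite database as a stand-in (the `albums` table and its columns are made up for illustration):

```php
<?php
// Stand-in data store: an in-memory SQLite database via PDO.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$db->exec('CREATE TABLE albums (artist TEXT, title TEXT, sold INTEGER)');
$db->exec("INSERT INTO albums VALUES
    ('Ann', 'First',  90),
    ('Ann', 'Second', 10),
    ('Bob', 'Only',   25)");

// Let SQL do the heavy lifting: aggregation, grouping, ordering.
$stmt = $db->query(
    'SELECT artist, SUM(sold) AS total
       FROM albums
      GROUP BY artist
      ORDER BY total DESC'
);

// ...while PHP just iterates the result set and formats each row.
foreach ($stmt as $row) {
    printf("%s: %d\n", $row['artist'], $row['total']);
}
// Prints:
// Ann: 100
// Bob: 25
```

The point of the sketch is the division of labor, not the toy schema: the query does the computation, and the PHP layer is nothing more than a presentation loop.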
I am a big fan of logic in the DB; it is efficient and tidy, and produces solid websites if you have a decent SQL coder on the team.

Hey Derek, very interesting read. I have been curiously reading everything about your rails experience since you started blogging about it. I had a similar experience. In the end, I felt that Rails contained too much "magic". I'm terrified by frameworks I don't fully understand, no matter if it's the MFC or Rails or Django or Symfony. In the end, I feel it is better to exactly understand what each line is doing, even if it means more typing. Under pressure, the last thing I want is to have to fight with a framework or with an IDE.

An interesting article, but I think it possibly has little to do with Ruby, Rails or PHP. His app doesn't look that complicated, especially if he could rebuild it in PHP so quickly, so how come it took Jeremy Kemper 2 years to not complete it? For me, I reckon that with a bit of digging we'd discover the true business complications and hindrances that prevented the project from being a success (and he hinted at them), and they weren't to do with the chosen programming language or framework. I reckon the business didn't give the project enough attention, and after 2 years of little progress they got fed up, panicked, maybe Jeremy left, and they threw it together in a language they understood. They then used the language as an explanation for a delayed project, when it was probably all about ineffective business practices - yep, the same ineffective practices that we all face each day. Also, in my mind, a project should be dropped waaaayyyy sooner if it's off the rails (no pun intended). If it takes you 2 years to realise a project of this size (pretty small) is on a path to failure, something else is broken in the business, and it isn't something PHP or Ruby can fix.
Also, it's really easy in Rails to create APIs on top of existing database tables, and Rails is best mates with SQL - it's just as easy to use SQL in Rails as in PHP. Andy

Where is DHH now? Rails can do anything!

I am a software project manager. My job is to help my client make more money. The most important rule is "A good development methodology is one which delivers a working product with the people you have". Code, be it Ruby, PHP, Coldfusion or Java, is the smallest part of the problem. The use of simple, standard frameworks is necessary to ensure the business is protected against the loss of key developers. Ideally there is not a key developer.

Two things: First, this sounds like more of a problem with any rewrite project, as DHH recently pointed out on his own blog. Agility was what was needed. Looks like the project got sort of sidetracked, since the rewrite wasn't as important as server failures, new projects, etc. Understandable. Then, when it was just you, you tore into what you knew best. Also, very understandable. To use Rails, you have to change the architecture of the site. I'm no web app genius like the 37signals guys, but 95 tables for CDBaby? Say it ain't so, Joe. I looked over the site and can count maybe 20 outside the order system itself. Seems like that is part of the problem. Or was the problem. Doesn't matter now. You got it how you like it, so there it is. What else could you ask for, right?

Derek, as a fellow O'Reilly blogger (over on the XML side), I see you've run afoul of the great zealotry beast - "You're an idiot if you don't like FOO technology because FOO rocks!!". Been there, done that, got plenty of t-shirts. PHP started as a pure procedural language, and has been migrating to an OOP form for about the last six years.
Its current OOP syntax, while a little messy, is not that dissimilar from Ruby's, and if you are in a position where you can work from scratch with the most contemporary libraries, you're probably developing an application that's now more OOP aware. Most PHP code is messy because it was either transitional or it was written by programmers who didn't come from an OOP background (quite a few of those out there, even now), not because the language itself is inherently bad - it simply has a longer per capita legacy than Ruby. I think the best takeaway from your article is the fact that you spent some time in Ruby-land which gave you a much stronger exposure to OOP, MVC patterns, distributed design and the like, and took that back to a language that you were comfortable with and were able to apply the lessons learned in an idiom that you were more comfortable with. I started coding about the time that C first came on the scene in a big way, and have learned (and forgotten) entirely too many different computer languages. After a while, you come to realize that, while languages do and can impose their own idiom on your development, ultimately what matters is good architecture, good documentation and intelligent design. Many of the Ruby commentators here have come to Ruby as their first or (more typically) second language, and they are convinced that it is the salvation of the world's problems. Nope. Ruby's got a nice clean syntax rather than having OOP tacked onto it like a lot of languages (from C++ on), and for some websites it works remarkably well, if the alignment of the site is consistent with the alignment of the development community. It gets a big boost by incorporating JavaScript on the front end, and ActiveRecord is kind of nifty for people who are more interested in data abstraction than in data performance, but overall, it is, in the end, yet another programming language. 
As people try to push it to do more things, it will become fragile, complex and error-prone, in the same manner that all frameworks do, in spite of the protestations of its enthusiasts. A good craftsman may own many tools in his tool chest, but only one of those tools will fit his hand like a hammer with the handle worn subtly away by years of pressure from his fingers. If PHP is your hammer, then it is what you are comfortable using, understanding that Ruby is always there like a fancy ball peen, to be pulled out when called for, but one that doesn't necessarily fit your hands personally quite as well.

I just wanted to say: I completely agree with you. In fact, I followed a near-identical path. I, however, started out with a Rails application. I was frustrated by its speed and the need to constantly hack up Rails to get things to work correctly. I am now in the process of recoding it in PHP and it is coming along GREAT. Everything fits together how I want it, it is much faster, and I get to write my own SQL without layers of superfluous abstraction.

I'm really glad you wrote this! I feel a little ashamed that all these language hotheads are attacking you for having made the decisions you have made, even though they were clearly advantageous to you... I have gone through the same path as you, and have learned some stuff the same way as you have, and I am extremely happy with my primary language right now. It has a lot of advantages _for_me_ over the other languages I tried in the past. I'd be stupid to actually list what languages I am talking about, lest bigots start asking me why it's better so they can attack my arguments. Anyway, you know the saying: to someone that only knows how to use a hammer, every problem looks like a nail. The more tools you have, the better output your work provides. Cheers, and don't take the language bigots seriously.

You can take this however you want, but I have developed web apps in ColdFusion, Perl, ASP, JSP, ASP.net and PHP.
After using all of those I prefer PHP by a long shot, and it comes down to this.... I read somewhere that there are two kinds of programmers: those that work best by plugging pieces together, and those that work best by writing their own pieces. For me, I prefer to build my own pieces, because when the provided pieces don't work you tend to spend a lot more time navigating around them than if you had built your own. But it appears that the "pro-piece" crowd believes you are doing something wrong if the pieces don't work for you out of the box. Now, being an open source fanatic, I push for the use of this technology because you can modify it and don't have to bend your process to meet the technology. So why then would I pick a language where you have to do just that?

I can relate to this article because I had a similar experience. I took a job doing ASP.net and thought it was ok, until those pieces didn't do what I wanted and I had to go out of my way to get results. The more experienced .net developers around me confirmed that I was following best practices, as did various articles. One day I got fed up, installed a LAMP server on my desktop, and recoded our entire 40+ file data layer (1 object per table) in 1 single 200-line php file. I showed them and they didn't know what to make of it. But in the end the framework of .net taught me some lessons that I brought to PHP and made my code even more powerful.

There is a reason, though, that I was able to bring my lessons from .net to PHP but not the other way around, and that's because PHP is flexible. PHP is a language that makes a deal with you. It says "If you are disciplined enough to write clean code, I will let you do things that other languages won't, by not enforcing ANYTHING". That's somewhat of an exaggeration, but you should get the point. This is where the split between piece and non-piece people occurs. Piece people scream about ugly code because they abuse the simplicity and freedom of the code and build a house on twigs.
The non-piece people take that same power and freedom and build a foundation... often similar to frameworks like .net or rails, but customized to their own style and needs. When they hit a roadblock with their foundation they can tweak it and make it better, rather than look for another way to use it or abandon it altogether. But with PHP that "deal" allows you to build a powerful framework with very little code most times. I'm not saying you couldn't build it with another language, and Ruby standalone could very well be better at it than PHP.. I don't know. But the people that flock to Rails, .net etc. normally either can't or don't want to build their own. And to be fair, I don't really care for any of the PHP frameworks either. But it should be noted that both .net and Rails seem to have been mimicked in PHP, and without much complaint, which again is a testimony to the language. In a sense the developers of Rails and .net are really getting over on their users. They are developing frameworks, and probably doing just what the good PHP developer does internally, but are pushing their particular way of doing things on to you and not leaving you with a real way to get what YOU want out of the code. It's very generous of them to build these frameworks for those of us that need it. But some of us can, and prefer to, build these things with a raw language, and especially one that will let us cheat if we play nice.

It's not the language, it's not the framework, it's not the code. It's the end result; it's what the customer uses, and it's about how you maintain and keep it running. The people who shop at CDBaby could give a rat's ass whether CDBaby is RoR; they just want to shop and have it all work. Derek identified problems with RoR and his needs. It only proves that not every tool is right for the job. Derek makes some points about RoR that never seem to be mentioned in all the hype over RoR, and it was great reading it here.
He says, "Is there anything Rails can do that PHP CAN'T do?" and "The answer is no." Derek is hitting the proverbial nail right on the head. I know all the RoR people will start talking about ActiveRecord, Migrations etc. etc. So what?! Smalltalk-80 was one of the most amazing OO languages, with things no other language has, or will be able to do. I ask: when was the last time you wrote a program in Smalltalk?

All the RoR people can jump up and down about how wonderful Ruby is and how great Rails is. But it's a darn difficult framework to get your arms around and develop something in quickly. It requires something that most scripting languages like PHP don't require: a massive commitment to the philosophy behind its architecture. If you're not totally committed, you're not going to get the benefit. The beauty of PHP is that there's no moral commitment to a philosophy; you can just fire up an editor, start coding, and within seconds you have HTML pages with scripting accessing databases, etc. etc. There's something to be said for that simplicity. Is it a perfect model to code in, and can it get sloppy? OF COURSE. But I've seen Java and C++ projects grow to epic and unmanageable proportions, and I've seen some very tight and clean Perl or PHP products which were very well maintained and updated.

With RoR, the feeling I always got was sort of like the 'wizard behind the curtain' with a lot of the code generation, and really, Derek's point is that all of that "stuff" is laid out for you in PHP in different projects, class libraries and utilities, or you can homebrew your own... If Rails is successful at something, it's generating a great buzz and kicking off Web 2.0 so we all can start making more money. Go RoR!!

People reading this might be too young to remember all the programming hype around OOP in the 80's, or the AI hoopla of the 70's, or even going back as far as 4GLs with PowerBuilder. What about VB, or do people even remember how much Smalltalk was supposed to save the world...
All of it is just hype, and as programmers we're all about what is hot now and what is going to make us do more with less. Simply stated: with PHP5, Smarty and (you pick) a framework or no framework, you can achieve 98% of what RoR does and build a perfectly great application, just as fast and just as functional. The question is, is the other 2% really worth it? I've come to the same conclusion as Derek: it's not. When I see applications like Facebook, Digg and all the others that are based on PHP, I feel confident in knowing that I'm not the only developer who feels that RoR is just another tool, and not a paradigm shift that I have to be jumping on quickly before I miss the train.

I think I know why you switched back to PHP. Is it because you used your old database with Rails? I was trying to use Rails with my old database tables (I was using Oracle with VFP at the time) when my boss asked me to make a web version. When I came up against the Rails database table conventions, it got weird, and it would have become trouble for me some day if I had to follow the Rails conventions. For me, it would be fine if I were writing a new application with a new database using Rails. If it is old software, let it be... why do you need to rewrite something stable with something new? If you want to use Rails, write something entirely new with all new database tables (you can transfer the old data to the new ones). CMIIW. Regards, adwin.

"Then in a mere TWO MONTHS, by myself, not even telling anyone I was doing this, using nothing but vi, and no frameworks, I rewrote CD Baby from scratch in PHP. Done! Launched! And it works amazingly well." If you had used a decent IDE and a framework you might have saved 6 weeks and been done in 2 weeks. Be smart and don't waste time reinventing wheels. Welcome back to PHP.

What an inspiring article. I've never worked with Rails, but this inspired me to clean up my own PHP house.

What kind of article is this? The only point I agree with you on is #2.
The rest of them are about your thoughts and your feelings about using a programming language. You neither talk about performance, nor scalability issues, nor server expenses, nor server requirements... definitively, you do not talk about any actual problems (if they can be called that) in Rails applications. Your reasons for coming back to PHP are part of the reasons why all of us (Rails lovers) got away from PHP. Did you really spend 2 years of your life programming in Ruby and thinking in PHP? What a waste! Bye and good luck!

I've seen your website. If it took you 2 years to write the software then you really need to think about your programming skills. The site is not appealing, and it seems that you were confused for those 2 years about the programming logic, OO concepts and databases; that's why it was a failure. A language can't be responsible, that's what I say.

I've never used PHP, but I'm more than willing to accept that in a given environment, with a given set of requirements and constraints, and a given skill set, that yes, PHP may be the right tool for the job. What this post tells me is "there is no silver bullet" or "golden hammer". Thanks for the reminder. I don't think this point can be made often enough. Paul.

From your explanation, it didn't seem like you actually did any planning for your site. It seemed like you started coding from day 1 and continued for 2 years. Many projects fail because of poor planning and heads-down coding that does not consider good design. If you had used a methodology, such as RUP, which can be tailored to your specific project, you might have started a Risk List at the beginning of the project. The Risk List might have included problems with integrating the "current stuff", and you could have focused on these problems and solved (or possibly not solved) them before getting too deep into the project. Many times a little planning goes a long way.
Thanks for the article, Derek... I hope the flood of childishness in the comments doesn't make you regret posting it. The comments are really the Internet writ small: tiny jewels of wisdom buried deep in a mountain range of crap. Anyway, continued good luck with your site.

Blog post about this:

What I'd like to know is how you "switched" to Rails in the first place? In particular, how did you go about un-learning your PHP skills? I'm guessing you didn't, simply because you can't... not if you were any good to start with. I laugh out loud every time I hear someone say they "switched to Rails", or in your case "switched back". I know a bunch of programming languages and frameworks, including PHP and Rails. The knowledge of those tools is cumulative, not exclusive. You can re-implement an application in an alternate language or framework, but you can't unlearn something. Your story is pointless. You never switched to anything; you just wasted two years of your life is all.

"But as far as a lot of Rails or other ORM users are concerned, pulling back rows from individual tables and "joining" them in memory in a very inefficient way is somehow beneficial." Hey Bob. I don't know if you're still reading this, but yes, what you describe is very inefficient. Fortunately, ActiveRecord includes features like :include, :joins and :conditions, which allow the database server to do all the joining and filtering.

The core tutorial for Rails is building an e-commerce site. D'oh! What were you doing for 2 years? I looked at the CD Baby site and saw nothing difficult. I reckon I could build it in Rails in a month or so part-time, probably quicker full-time. All you've done is damage your credibility. I wouldn't consider working for you, and I bet there are a lot of very good hackers out there who have crossed you off the list.

I like PHP for small projects where reuse isn't an issue - you can just hack something together very quickly.
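The point behind :include and friends, mentioned a little earlier, is that the number of queries stops growing with the number of rows. Here is a rough, self-contained illustration in plain Ruby, not actual ActiveRecord; the `FakeDB` class and the album/track data are invented for the example and just count how many "queries" each style issues.

```ruby
# Contrast N+1 querying (one query per parent row) with eager loading
# (one query for all children at once). FakeDB counts queries issued.

class FakeDB
  attr_reader :queries

  def initialize(albums, tracks)
    @albums, @tracks = albums, tracks
    @queries = 0
  end

  def all_albums
    @queries += 1
    @albums
  end

  # Lazy style: one query per album (the N+1 pattern)
  def tracks_for(album_id)
    @queries += 1
    @tracks.select { |t| t[:album_id] == album_id }
  end

  # Eager style: one query for all albums at once (what :include does)
  def tracks_for_all(album_ids)
    @queries += 1
    @tracks.select { |t| album_ids.include?(t[:album_id]) }
           .group_by { |t| t[:album_id] }
  end
end

albums = (1..50).map { |i| { id: i } }
tracks = (1..50).map { |i| { album_id: i, title: "track #{i}" } }

lazy = FakeDB.new(albums, tracks)
lazy.all_albums.each { |a| lazy.tracks_for(a[:id]) }  # 1 + 50 queries

eager = FakeDB.new(albums, tracks)
ids = eager.all_albums.map { |a| a[:id] }             # 1 query
eager.tracks_for_all(ids)                             # 1 more query
```

With eager loading the count stays at two queries no matter how many albums there are, and in real ActiveRecord the joining and filtering happens inside the database server rather than row by row in Ruby.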
It's a bit like C used to be: once you get past a certain level of complexity you've got to be very disciplined. Rails makes you be disciplined, and has testing built in.

Great write-up, I definitely agree with you. I'm a PHP programmer myself, who knows Ruby beats it, and thinks Python beats the crap out of the two. But that doesn't matter... I think your article was making the point "It's YOU, the programmer, that matters most" and I definitely agree. By looking at Python and leaning toward Ruby, I've become a better PHP programmer. You have missed something though: if every programmer had done the same as you since, say, the 80s, there would be no Ruby, Python, or PHP now. I think unless we pursue every glimmer of hope of having a better tool, a better way, a better plan, we won't see what's next. Btw, I've used your website before, got some kick-ass music from there, and it's totally cool!

C'mon guys, the best language is the one you know best. Some are easier for some stuff, others are easier for other stuff, but good and bad programs can be written using any of them. If you tell me PHP is bad, I will believe you are a bad PHP programmer, because I've seen great things in PHP. The same goes for Perl, Python, Ruby, C#, JSP, ASP and so on. It's like comparing guitar, violin, harp... A great guitar player may be shit on violin, and vice-versa...

Derek, very well spoken, sir. One of the things developers could learn a thing or two from is unbiased business opinion about development environments. Too often, geeks think in evangelical terms about their preferred language, rather than 'do what you need to do'.

I agree with John: "Great write-up. Foul language is unprofessional."

Derek - religious wars aside, the original re-write failure doesn't sound like something based on Rails or Ruby any more than the success was based on PHP.
Reading between the lines, I'm seeing some issues with the project itself:

* Big bang development: everything in the company had to be replaced all at once. I don't think I've ever seen this work anywhere. This is probably the single largest problem.
* Scale and focus of effort: it sounds like in your 85-person company there was ONE Rails developer on the project, and that there were a lot of distractions.
* Wrong application of tool: it sounds as though you were trying to build a Rails app on top of an existing DB structure. In principle this is doable, but DB structures naturally evolve as part of an application architecture. You probably missed out on a large part of the "convention over configuration" value.

(apologies if I've misinterpreted anything)

Anyway, sounds like a terrible experience and a great case study for an O'Reilly book on "Software Project Failures in a Nutshell". Finally, you're a braver man than I to have written this :) Talk about walking into the lion's den...

I think that a lot of the people that have posted comments on this have made the mistake of assuming that CDBaby.com is nothing more than what you see in the frontend. That's like assuming that because a Boeing 777 looks like a big silver tube, I could make one if I had enough tinfoil. Oh wait... there's a lot of inner complexity that I can't see on the outside! So, shame on all of you for deciding that you're such hotshots that you could do the same thing in 8 hours, and probably rewrite Google's search algorithms in 50 lines of Ruby code while you're at it. Let's see you do it, then we'll talk.

Secondly, we are also not privy to how the project progressed... Perhaps Derek was inflexible about changing functionality/structure/design to fit well on top of Rails. Rails is, as we all know, opinionated software. If you don't like the opinion, you're not going to do well with Rails as a framework.
I think that's clearly demonstrated by the fact that bitsweat was going to great lengths to bend Rails to their will. PHP has problems. Rails is by no means a silver bullet. I've used both, and I continue to use both. Use the tool that fits your style. I applaud Derek for being brave enough to say that Rails isn't his style and PHP is. I disagree for my own workstyle, but does that mean that I can say that he's insane and that he's wrong about his preference for PHP? Absolutely not... A lot of excellent sites have been written on top of PHP. A lot more than on Rails. Oops... did I say that? It's really no different than me telling my friends that they are idiots because they snowboard instead of ski. "But skiers are better in the bumps!" Or that Macs are better than PCs. Bottom line, you all need to chill out and make your own bloody decisions. And stop informing those brave enough to state theirs that they are stupid, short-sighted, insert-chosen-insult-here. End of rant.

Thanks for this article! A friend of mine told me to give Ruby a try, but I wasn't that much interested in giving away my whole control. Now I'm fine with PHP; I don't need Ruby (although I really like the idea of clean code). Anyway, thanks for the article! :)

Um, Java and Hibernate, anyone? If you think your database is the thing, and you want your model to be fresh and fruity, you might just need a transparent framework to accommodate both model and database, instead of an "our way or the highway" framework. Rule number two: if you feel need to go against grain of framework, you pick wrong framework. (rule number one has to do with smiling bald little monks...)

this comment thread is a wasteland and i expect to be lost in the noise, but cdbaby is one of the better online retailers i've used (and i've used many, worked for two).
whenever i ordered i'm sure it was the old codebase i was shopping against, but i walked away from the transaction feeling like they'd really gotten the online purchase process right.

David, I think you completely missed the point of a framework, which is what Rails is. The idea is RAD, and frameworks help solve that problem. The trade-off is that you need to be comfortable with the framework and have a pretty good understanding of the concepts behind it (MVC and such). Your article isn't very well thought out, and I feel your process was flawed from the start. Furthermore, you don't rewrite something just for the sake of rewriting it.

Where is Ruby better than PHP? Our snippet of ugly code runs some of the biggest apps/sites of the world... Oh Yahoo! Oh Digg! Please, take our fashionable, pink-colored language! Ruby is prettier, Ruby is prettier!

The reasons this project failed could be: 1. The programmer is not so talented. 2. The customer is not a good task-giver.

Out of curiosity, what PHP MVC implementation are you using?

It seems to me from your article that you had a lot less experience with Rails than you are professing. "Bitsweat" is a great coder. What did you do during the two years? Rails definitely has a learning curve. I programmed PHP for 8 years and it was a battle for me to adapt to the new "Rails" way of thinking. But, after having used Rails for just over 2 years, I can honestly say it was well worth it. I would never want to go back to PHP unless I had a really good reason. So far I have not hit a single obstacle in Ruby/Rails that caused me to think "Wow, this would be easier in PHP." You also mentioned that you do not need 90% of the Rails framework (which I find to be an exaggeration). You do realize that the framework is scripted and you can remove what you don't need, right? The only benefit I see in PHP applications right now is Zend Encoder and the ability to rebrand/redistribute an application without divulging your source.
However, I am working on a similar product for Rails. 2c, Jabberwock.

PHP is essentially a kludge language, although it has improved over the years. It is completely ridiculous to compare it to Ruby, which is a complete object-oriented language. Rails is a framework written in Ruby; it's not a language unto itself, just like the .NET framework is not Visual C# or VB. Now, whether you prefer a procedural language to an object-oriented language is of course a design choice. Often straightforward code will be faster than an OO solution. However... it tends to be more sloppy and less extensible.

Now, I took a look at CDBaby.com and I do not see anything amazing here. You could simply have bought a decent shopping cart package and done this site in no time flat, and had it be not only more functional but a whole lot more attractive (it looks like something ancient in web terms). Anyone in the know about eCommerce knows that visual appeal and snazzy function will convert to sales vs. something that "looks" like it was done by a 5th grader. Retailers both on and off the web spend enormous amounts of money (as do OEMs/manufacturers) on visual appeal, as that singular factor is considered an imperative in selling anything to anyone. It's just a Marketing 101 deal. If you have love for PHP then perhaps you should have selected PHP-Nuke and started adding on lots of the third-party modules for it. Forums, polls... on and on. This is again just a no-brainer. Anything that keeps a consumer interested and retains them as a site visitor is money in the bank.

A party below stated you were simply rigid in database design. Perhaps you have some sort of local database and you want the online and offline DBs to happily cooperate. For example, you have a Windows PC using MS Access or QuickBooks for inventory/accounting etc. You want to be able to transition product into your online sales site via an easy process. Thus you end up in a rigid situation if you're running, say, a Linux server.
So one thinks, "What can I do?". Java-based solutions are portable across Windows platforms and Linux. But what's out there? Well, several CMS systems for starters. "But I don't want to learn Java!" I understand. You're primarily interested in selling your stuff! You hired a software engineer to try to effect change, not because you pulled change out of a hat, but because you want a better sales presence and extensibility! Again, many an online shopping cart can do your bidding, but in such a case you may still have issues with your backend business logic.

With that all said, if I am on the money about your desires then your answer lies within Microsoft's technologies. ASP.NET, to be exact. Many peeps here may "freak"... "NO! NO! Linux, Java, PHP, Ruby, blah blah..." because the Microsoft technologies are not quite as efficient. What they neglect to say, however, is that in using those technologies you step into the fastest-growing region of Internet technologies, and you enter development environments that UNIFY online/offline technologies and are completely extensible!

Go download Visual Web Developer Express Edition; it's free from Microsoft. Take a look at a few free shopping carts' source code. I would advise you to look at Visual Basic; if you can handle PHP you can handle Visual Basic. You can build locally and deploy as easily as FTP or "publish" within Web Developer. What's local is what's online. You can use JET (MS Access) offline or online, thus unifying your backend business work, or you can use MSSQL. If you want to add functionality to your site, you can now do so in true OOP, maintainable, modular fashion. Your present site can almost completely be done via drag/drop and setting properties in Visual Web Express. Literally. Sure, you're going to need to learn some stuff. Get an ASP.NET 2.0 in 24 Hours book and a VB.NET 2005 in 24 Hours book. Then, if you like, look at some other texts. From what I can see, your "database work" for the online site probably needs less than 50 lines of code!
You can just use Visual Web Developer's (or Visual Studio 2005's, which I recommend) databound controls: literally drop controls anywhere on your web pages, set the properties and bind them to the database(s). Not a line of code need be written to do this. ASP.NET has a built-in membership provider as well. Just drop, say, the "Login Wizard" control onto a page and set the properties. Want different roles, i.e. user categories? Very simple to do. So now, for your top customers, you might want to offer promotions. You can design pages so they and ONLY THEY see the promos. As many roles as you like. Say you have resellers buying too; you can have a resellers role.

Point being... via Visual Basic (or C# if you desire; it's a harder learn) and ASP.NET you can do just about anything you please. It's extraordinarily powerful and sits on the cutting edge of not only web technologies but your local PC technologies as well. Microsoft is the company that is unifying the two and continually making it easier for the average guy/gal to really make dynamite web/local software much, much more easily. It pisses off a lot of professional applications developers, and the reason why (usually) is just common sense. With more and more attention paid to the average person being able to actually create interactive technologies without being an omni-guru, omni-gurus end up in a less favorable work position. If Microsoft's next dev platform, for the sake of discussion, allowed anyone to create really dynamic web applications, I mean state-of-the-art stuff, without having to know much of anything about engineering, just basic logic, where does that leave coders? Well, this is exactly where Microsoft's directions are going. The ASP/CLR etc. model is just that. It is an effort to bring the ability to create software (online and off) to the masses. It might be 20 more years before it's just drag/drop, assign properties, logic and relationships etc., but it is coming. No... it's not all as efficient as many other mechanisms.
But MS realizes that the hardware technologies are advancing plenty fast enough that it becomes moot for all but sizeable enterprise apps (like eBay, Amazon etc.) where everything from connection servers to specific backends for literally everything is required due to traffic/demand. Your site could easily sit on a shared server at a place like Gate.com. Most GOOD hosting companies, be they hosting Linux or Windows Server, have dedicated database servers. It only makes sense. WHY would you (a host) want your myriads of shared accounts on machines with DBs local on those machines? Some do it as a selling point, i.e. a MySQL DB that can be the size of your hosting account, say 20GB. Now try to use it as a dedicated DB, let's say for a site hosted elsewhere, and see how much they 1. dislike it, 2. block it, etc. Point being, why have database activity eating cycles on a shared server where say 100 accounts sit (and thus all can suffer) vs. a dedicated DB server and say 200 accounts on the shared server? But many hosts without the ad bucks, as I noted, try to loop sites in via "all the DB space you want!"

Now perhaps your DB schema is not "guru friendly and el'perfecto". Whomever you hire to create or re-create your web presence has a job. That job is to unify what YOU want. ASP/MS does not tie you down to the database either; if you want to use MySQL, Postgres, Firebird, Oracle, DB2... ADO.NET will happily use any data provider. If I am right about your Windows backend business, I'd never even have started to say "Let's use Rails!". Uh uh. I'd have said, let's use ASP.NET, and first, before we do that, let's see if we can't find you something out of the box that meets and/or preferably exceeds your needs so you can grow.
So maybe not a cheapie-cheap solution, but maybe something midstream so we can ensure extensibility, so that via MasterPages (also native to ASP.NET) you can at any time change the entire look of your site and/or portions thereof without having to call me and say, "I want the site to look like fall".

See... you said something in your article that makes "common sense" but is nonsense: "But the main reason that any programmer learning any new language thinks the new language is SO much better than the old one is because he's a better programmer now!" Becoming a better programmer does not mean I learned a new language. That makes you a more versatile programmer. How mainstream programmers rate becoming a better programmer varies, as it does in business. A business might rate a better programmer as one who completes work faster than another but writes sloppy linear code. They don't care; the work is done, until perhaps another programmer is hired and has to work with the code. That other programmer thinks linear code sux. Bad programmer. Yet a coder who knows, say, assembler (not that I use it much these days) often ends up saying anyone using compiled code is not a premium programmer. Hand-tuned assembler runs rings around compiled code.

IMHO the "smart programmer" makes sure that his/her skill set is applicable to getting work. In today's world of work we have web technologies, and we have local technologies (PCs, LANs, intranets etc.). If we take a look at which engineering technologies are applicable across all prospective jobs, we end up with Java and Microsoft technologies as the forerunners (C++/C#). Both highly capable. Open up the paper or go to a jobs site and look: how many "Ruby programmer needed" or "PHP programmer needed" GOOD-PAYING JOBS do you find as compared to Java/C++/C#? There's your answer. That's not to say Ruby/Rails etc. is not excellent work. It is.
It is, however, to say you don't need to apply what doesn't fit your needs to your work, be that for someone else or yourself. Again, your current CD Baby site could have been done using Active Server Pages in less time (for certain) than your PHP solution. In doing so it would be far more extensible, solid and maintainable. You could easily unify it with any Windows-based backend you use to run your business.

Lastly, Derek... take a look at DotNetNuke, which is portal software that recently ported to ASP.NET 2.0: lots of features, modules, online this/that. DNN can also easily make your site (and then some++) without you spending a dime; it's free. In fact, let's say you wanted to make a MUCH nicer site, as I somewhat outlined above. Since PHP also runs on Windows Server, you could run DotNetNuke on a hosted Windows server, providing unified membership along with any/all doo-dads from forums to blogs on and on, and have your current store software STILL running at the DotNetNuke domain (or your current domain) through an iframe, COMPLETELY transparent to the end user. That's right. DNN is a portal/content management system, free. They claim over 500,000 sites in use. Very easy to use, very powerful, quite flexible, can be "skinned" for different look(s), on and on. There are tons of modules available, and all provide a consistent experience for both the site administrator(s) and end users. Since it is portal software as well, you can host many portals on one server. So, for example, say you want to make a...

MOST of today's software engineering tools are impressive, but that does not mean any given one is what the customer needs just because it's the one I know or the latest growing "keyword" among programmers.

Lastly, in favor of MS technologies is a nice little added wonder. You can create your own distributable controls for an ENORMOUS development audience. For example, let's say you decide to make a control for the mp3 streaming and/or downloads from CD Baby.
Drag/drop, set its properties or data bindings etc., and anyone can use it. Joe's music store online doesn't want to reinvent the wheel when they can pay you $99 for a drag/drop control in Visual Express that does what they want. There is a HUGE aftermarket in the ASP.NET world, and many a place capitalizing on it. Perhaps, for example, you want to make a site (or many) that complements CD Baby. Perhaps sites that are homepages for music artists. Perhaps sites for new musicians. Perhaps a site where new musicians can get their music sold to the public while you get a cut? Anything goes. The point being, your vision of what you want applies directly to "How do I get from here to there". A GOOD engineer will look at everything you desire and give you more ideas. They will implement it via the best technology to do so. If you hired me, for example, and I saw that your backend business database is ever so important to you, and that Rails, as we got into development, was going to need big tweaks, in direct conflict with it all, then it's my job to give you the BEST options. Just because something is a buzzword does not mean it's the right buzzword for the customer (in this case, you). Rick firststrikepro@yahoo.com

As posted above (let me break this down): The reasons this project failed could be:

"1. The programmer is not so talented" Ya think? Ummm, I'd say in two years using Ruby alone, three, possibly four separate shopping cart applications could be coded from scratch.

"2. The customer is not a good task giver" Dunno... obviously ill-informed, I'd say.

Ruby on Rails is a fine framework. CD Baby is essentially an eCommerce shopping cart site, and not a particularly good one. One can buy (or get free, aka OSCommerce, Zen Cart and countless others) commerce apps that are considerably more capable. I must assume the team decided to try RoR for some other reasons, like: "See, we did an eCommerce app with RoR and we want to ???". You're kidding, right? 2-3 programmers?
4 months? Let's say 2-3 days with a decent eCommerce app where we can just upload a .csv, graphics from scratch and all. Let's say a week using ASP.NET from scratch.

...and then ditch PHP for Python and you're all set.

Let's add some rational arguments against:

* Rails has no financial backer and therefore no secure future (unlike Microsoft or Zend).
* Rails is a dictatorship by the "core developers".
* Rails is inflexible when it comes to model/persistence mapping. It's 1:1 model-to-DB at best.
* Rails is not very scalable per host, as everything is interpreted. Ruby is slow.
* Ruby lacks a specification. So does Rails.
* Rails is hard to host, as you require shell access.
* Rails occasionally requires some odd magic to make things work.
* Rails lacks any real "enterprise" usage, which is where professional adoption is usually defined.

The stuff I work on handles 37signals' entire yearly throughput, except every day. It solves many problems but creates a few in the process. I've just scrapped a project I've been working on for 6 months with Rails and moved to ASP.NET. It's far more mature, supported, controllable, and I can readily employ talented developers anywhere. PHP, although with a different scope (based on skill set mainly), is equally mature, supported and controllable. Add CodeIgniter or another framework to it and we're there.

Well, I've tried to understand what's cool about your newly rewritten site. But my browser said this when it loaded the cart: XML Parsing Error: not well-formed. Location: Line Number 19, Column 3. But regardless of this, I have to agree with you that it's always better to write your sites yourself. In RoR, C, PHP, whatever. You can have the best programmers in any language, but they will never save your world if they are not capable of understanding your business needs. It's not a question of programming language; thus if you can only understand PHP and want to move forward with your business, it'd be better for you to stick to it.
If you needed all that flexibility from a web framework, why didn't you try Catalyst?!

First off, I checked out CD Baby, and I'm extremely impressed. It has a refreshing lack of "Web 2.0" Flash/Ajax/RIA crap features, which so many other websites are ruined by (slow, bloated, bulky, buggy, irritating, distracting, etc.). CD Baby is very simple and clean and without tons of useless graphics. Also, due to the lack of tons of graphics, Flash, Ajax, RIA-type stuff, it's very fast. Very, very nice job.

Second, I'm absolutely amazed at the comments coming from the RoR camp here. Your post did not flame RoR at all; it merely stated that RoR did not work for your needs and PHP did. Plain and simple. But the comments have been like so:

1. If you couldn't make RoR work for your particular use case, you're stupid and you suck.
2. If you like SQL, you're stupid and you suck.
3. If you didn't like ActiveRecord, or couldn't make it work, you're not a true programmer, you don't understand OOP, the database is secondary (and the relational model sucks), and you're stupid and you suck.
4. PHP sucks (for no particular reason other than it's not Ruby or Rails).
5. Your design/goals/rewrite was bad, therefore you're stupid and you suck.
6. OOP is the be-all and end-all; anyone who doesn't agree or can't make it work for their particular use case is stupid and sucks.
7. Your website doesn't use RoR, therefore it's stupid and it sucks (and you're stupid and you suck).
8. The RoR guru you hired couldn't make the project work, therefore he's stupid and he sucks, even though he's a core Rails dev.

Really, the rude, condescending remarks coming from the RoR camp, simply because RoR did not work out for your use case, are appalling. Some technical notes taken from this:

1. The SQL/relational model is great. It works. It's easy. It's proven. It's fully scalable. It's efficient. And 99.9% of all data anywhere is stored in the relational model, accessed via SQL.
People who are over-obsessed with the OO model need to come to grips with that. The statement one poster made, that "RoR works best with a from-scratch app where the database is created for the app," holds true. But 90% of programming jobs/problems/use cases involve an existing, fully relational database. In this use case, stuff like ActiveRecord, Hibernate, etc., becomes much, much harder, and regular ol' SQL works great.

2. Hardcore OOP developers who haven't worked with much SQL, databases, or the relational model struggle with SQL/relational databases. They have bought into the over-hype that OOP is the be-all and end-all, and only think programmatically that way. That's why we see so many ORM frameworks.

3. Frameworks (web, ORM, or otherwise) only really work well with a narrow collection of use cases. Anything that goes outside of those narrow use cases, which comprises a huge amount of real-world programming problems, makes using the frameworks very hard.

4. RoR, while very nice/cool/useful/productive for some use cases, is badly over-hyped, and the fanboys are very over-rabid.

5. PHP, used with discipline, is very, very good, useful, and flexible. PHP is all over the web, it is proven that it works, and it is approachable for all levels of programmers. It has warts/limitations, of course (as do all languages), but PHP is very useful.

Really, it seems to me that some of the absolute worst fanboyism comes from the RoR camp. Too bad, because RoR is pretty cool/useful for some use cases, but the fanboys give it a bad name and make reasonable people want to run away screaming.

Let me take a guess. The author of this article ... just wants to do a free advertisement for his website. I believe most of what you said. Your website works great, but I think it needs some artwork. Looks like nobody in your company knows Photoshop.

It all adds up to make it nearly impossible to write code that's reliable _or_ efficient.

Stay bald dude! screw ruby!
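The raw-SQL point several commenters make above is easy to illustrate in PHP itself. Below is a minimal, hypothetical sketch using PDO (PHP's bundled database layer) with an in-memory SQLite table; the table and data are invented for the example. The query stays visible and hand-tunable, which is exactly what the SQL fans in this thread are arguing for:

```php
<?php
// Hypothetical schema and data, for illustration only.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE albums (id INTEGER PRIMARY KEY, artist_id INTEGER, title TEXT, in_stock INTEGER)');
$pdo->exec("INSERT INTO albums (artist_id, title, in_stock)
            VALUES (1, 'First Light', 1), (1, 'Outtakes', 0), (2, 'Other Band', 1)");

// The SQL is written by hand: explicit, parameterized, and easy to optimize later.
$stmt = $pdo->prepare(
    'SELECT title FROM albums WHERE artist_id = :artist AND in_stock = 1 ORDER BY title'
);
$stmt->execute([':artist' => 1]);
$titles = $stmt->fetchAll(PDO::FETCH_COLUMN);

print_r($titles); // only the in-stock titles for artist 1
```

An ORM would generate roughly the same SELECT here, but the moment a query needs a vendor-specific feature (as one commenter notes about PostgreSQL's geometric search), having the SQL in plain sight is the simpler path.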
Good article, Derek! In the end, it doesn't matter what language you write your website in -- what matters is the end result, and CD Baby rocks (speaking both as a customer and as a musician, here). Rewriting is good. Any time you can rewrite a program from scratch, you're going to improve it. It gets cleaner, more general, more robust. A structure that lets you replace the code on your site piece by piece, the way you did, is great because it means that everything keeps on working while you're improving it. Musicians call it "practicing".

Maybe I'm a mourner, maybe I deserve to do something else. Need to feel the sense.

You, sir, are a beast for hand-coding the entire site. Most impressive.

Excellent #7 argument. I translated it into Russian and published it at my Russian blog (), with a backlink of course.

The "my language is better than your language" war is always puzzling to me. Instead of wasting their time with all this stupid arguing, why don't they challenge each other to develop some application/project using their language of choice and see who can finish first, with the best code and also the best performance under heavy load, etc... You see, in sports, if a sprinter brags about being faster than another sprinter, they just have to race against each other in some organised race and you see right away who is the best. So that's why I ask the same thing of the so-called experts of both PHP and Ruby: just take a challenge and let's see who comes out on top.

Wow, it seems that the Rails crowd is a bit touchy when it comes to PHP comparisons. Things seem to get personal pretty easily. I have been working with PHP for about 8 years and have been following the progress of Ruby quite closely over the past few years.
The fact is that Ruby is a very beautiful language; however, for web development, I find that the very things that Ruby enthusiasts blast PHP for (over-crowded namespaces and a procedural core) are what make it so powerful and easy to use. PHP comes ready with functions to do almost everything one needs to do with a web application. No need to import a bunch of libraries or dig through API docs to learn how to work with the objects of interest. Just call a function. The ability to have the fundamentals available as a procedural API is a huge timesaver. For more complex things, PHP has full OOP support too, so we get the best of both worlds here. Documentation in PHP is second to none. If you forget how to use a function, just look it up, and you get full docs with examples and user comments. No other language comes close in this respect. For those who continually call PHP an ugly language: this is a personal preference. I presume that you are turned off by punctuation and the dollar sign. I personally have no issues with this. Well-written PHP is very readable, and can even be beautiful. Thank you for writing about your experience here, Derek. I hope the rewrite continues to go well in PHP.

I am sure the Ruby on Rails experience was not too bad; you probably learnt a lot, which you would never have known if you had never tried Ruby in the first place.

Hey, that's baffling: yesterday I thought about switching over to Rails, and now this :D I don't know, shall I or not? grrr greetings from germany, Max

This is a nicely written article about the benefits of keeping what you already know (PHP), and that you should further improve it instead of ditching it altogether in favour of an unknown beast like Ruby on Rails. Ruby is beautiful by all accounts, but does it justify migrating all your code to it? You answered that question most convincingly. Thank you.

Hail! What do you think about the Apple logo? >:)

Excellent post.
After reading through the comments, I have to say that most of the flames coming from the Ruby camp are right on. Rails is not for the simplistic; it is for the complex, and it does it quite well. And yes, it can scale, if you know what you are doing, which you obviously do not. In our shop we are all from different backgrounds: PHP, Java, and I am from the world of .NET and C++. I don't know a single developer here who is unhappy with our choice of switching to Ruby and Rails. I suggest the next time you learn a new language, you actually learn the new toolset and not just try to use it the way you did in language X.

Great article, Derek. I can remember reading that job req not too long ago, back when I was up in Portland. Something like "looking for a Rails Rockstar" :) Thanks for giving an honest follow-up on how everything played out. I find it interesting that you had a world-class Rails guy there, but still ran into all sorts of design issues. I wonder if "leaky abstractions" apply anywhere here?

So what you're saying is that you never needed Rails nor had the will to go that way, but you did it anyway, and what is even worse, you say you tried to turn a train into a boat, when what you actually did was put a train in the sea and yell at it to make it float.

#1 - "IS THERE ANYTHING RAILS/RUBY CAN DO THAT PHP CAN'T DO? (thinking) NO." That's not even a reason to go for Ruby or PHP. Things are different in both of them.
#2 - OUR ENTIRE COMPANY'S STUFF WAS IN PHP: DON'T UNDERESTIMATE INTEGRATION. This is a great reason to have never tried Rails. 2 years is a long time to learn that.
#3 - DON'T WANT WHAT I DON'T NEED. Tattoo that on your forehead so you remember it after the first setback.
#4 - IT'S SMALL AND FAST. I don't know that as a fact and don't
#5 - IT'S BUILT TO MY TASTES. 2 years late on that one too.
#6 - I LOVE SQL. You must have missed it a lot.
#7 - PROGRAMMING LANGUAGES ARE LIKE GIRLFRIENDS: THE NEW ONE IS BETTER BECAUSE *YOU* ARE BETTER. You cheated on your wife with a 20-year-old girl, and after 2 years finally accepted you can't put up with her lifestyle.

I'm sorry if it's too direct and might even sound aggressive, but 2 years programming with the PHP paradigm over the Rails API sounds like a huge mistake.

Great analysis of your own efforts. The last point (#7) is right on the mark. So much emphasis nowadays is placed on new languages. FYI: I write templated, db-driven websites in C++ using FastCGI (sorry in advance to everyone this offends).

I enjoyed your post. I'm not sure why so many people get religious about the tools they use and bash anyone who doesn't use what they use. Lots of programmers are very productive using PHP, and write good code in it. The more I read postings and replies online from Ruby zealots, and the more RoR podcasts I listen to, the more they're starting to seem like Moonies or Hare Krishna cult followers or something. Can't they just shut the hell up, enjoy what they prefer, and allow others to use languages and frameworks they're not fanatical about? :)

To Zend Framework or not to Zend Framework, that is the question.. Quoted from the link:. It's an old article but still very "today". Thanks for the viewpoint, Derek!! Good luck with your site!

The answer is NOT to Zend Framework. It is Zend selling an entire rat race. That's OK for them, making money is fine, but I don't want to join, because I don't want to work with software where the incentive is to produce bad software, because it makes more money than good software. There's got to be a better way to structure a business. What are the incentives of the Rails culture? To be 65 times slower than C? No wonder IT likes it... Just stick with CodeIgniter...

Great article! I've been using PHP for a little while and it's always done what I've asked of it. But I'm also interested in CakePHP and other PHP frameworks...
I'm not sure how much more productive it will all be, but I certainly hope that I'll be developing faster & better with PHP frameworks - I'll have to try and see how that goes...

I think it needs to be typed again: CD Baby is *not* simply an online music store. It is a distribution network for independent musicians and labels. I know that without ever having visited it. So all these off-the-cuff brags of "it should have taken 2 months" or "I could code that before I had coffee in the morning" are just absurd. Have you seen the database? Do you know what the administration of the backend is like? Then you just look like an ass. Also, have any of you "should have taken X time" people ever actually worked on a large-scale enterprise site? How many projects have you seen derailed when something the developers didn't foresee came into play? As for the flame wars: good reading.

Golly gosh, I wish you wrote without curse words.

Hi. After reading this article, it's clear that the planning for your project is absolutely at fault. Secondly, if you are PHP/Java guys, I don't encourage you to go to Rails. Especially if you are working under PHP or Java managers - it's hell out there. POOR J. KEMPER. I am currently developing an ECOMMERCE APPLICATION; it hardly took 6 weeks, and most of my things are finished. I'll tell you my application architecture:

OS: DEBIAN LINUX
DATABASE: MICROSOFT SQL SERVER
DESIGN: MAC
CONNECTIVITY: MAC-WINDOWS-LINUX
SOURCE CONTROL: MICROSOFT VISUAL SOURCE SAFE

99.9% no trouble for me. I did projects with PHP/J2EE; only 80% satisfaction. Well, I don't compare any language with another. Every language has its own -ves and +ves. If you are making controversies between languages, you are not at all a good manager. IT'S VERY HARD FOR JAVA/PHP GUYS TO COME TO THE RAILS HIGHWAY TRACK. To reach a destination I can choose a Subaru, Lancer Evolution or Mustang. But one at a time. You may choose the Mustang, but I choose the Evolution. All can reach the destination.
PHP HORSE POWER: 5700 RPM
JAVA HORSE POWER: 2000 RPM
RAILS HORSE POWER: 9999 RPM

BROTHER, DON'T EVEN THINK ABOUT RAILS.. RAILS IS NOT FOR EVERYBODY. OOPS! MY BAD. BUT PLEASE TAKE IT.

Nice informative post. I thought it was very fair and honest, and I greatly appreciated the fact that it wasn't the typical "this language sucks" post which is all too common on the Web. The one thing I wish had been reviewed, with reference links given, was some sort of question matrix to help decide between Rails and PHP.

Way late on this, but another reason forcing Rails on your project failed was probably because you started doing it when Rails was extremely young. Did you even have a link_to back then?

PHP sucks and Ruby is a much better language. However, Python is a lot better than both (not to mention faster).

I tried Rails. For about one hour. Then I realized it wanted to create my database schema for me. Then I saw the guys advocate doing the foreign key checks in the application. I puked and went back to PHP. I don't want some little software to create my tables or issue misguided SQL queries. Will it also write the stored procedures and triggers for me? Does it know about PostgreSQL's vector types, indexed boolean set operators, and geometric search? Nope. SQL wrappers are nice when all you want is MyISAM.

But PHP may be relatively slow if your website is very popular! For performance, I would have chosen something different.

lol @ the "your site suxorz" children - you do realize CD Baby made the bald guy a multi-millionaire?

Very interesting. I have been pondering Ruby for a while but have not found a good use for it yet... now I'm wrapped up in PHP. I used to be keen on Perl, but PHP seems a tad more practical, what with built-in integration with MySQL, no chmodding, no need for a /cgi-bin/app.pl in the URL, a tiny EXE on Windows, and just a whole bunch of other stuff. The thing that gets me is the different syntax between PHP, JavaScript, HTML and CSS..
let alone your choice of templating language. I guess it keeps our minds nimble, but how many times does a script fail because you used a dot instead of a plus to concatenate 2 strings? One day I will rewrite my creativeobjectworld.com in PHP.. that will be fun.. 5800 lines of Perl code.

I can't believe someone said you had a bad team and that what you stated here was just an opinion and nothing substantial. I think the guy who wrote that didn't quite read what you wrote, and understand 1) what the author's reasons were for the *NEED* to get into another project - business reasons... in fact, folks need to understand that the author of this post was the business owner of the site - not a coder by trade! 2) And using frameworks like CakePHP, as well as Ruby, I agree with the author's wonderful points - sometimes you end up trying to get the framework to be a boat, rather than a train. Anyway, no need to show examples of your work - I believe you when you said your next bit of PHP code was the best ever. I've had a very similar experience, especially when I started putting my code into classes with PHP 5 and used MVC with my own framework, and did the nice dirty SQL queries without relying on a restrictive model. In fact, I've used several PHP frameworks, and each time my experience of them has improved me - the coder - by learning from the collective wisdom of the framework's developers.

You need a hair recovery treatment.

I think PHP will still carve out a niche in the Web community because it's easy to develop with and understand. Although we can't deny the fact that RoR is pretty much an emerging technology that offers a lot.

How do you compare execution speed between Ruby and PHP? It seems as if Ruby were not as fast, true? Did you run experiments?

Derek, CDBABY.com is awesome, and I appreciate you going through the pain to create a system that helps me greatly with my music... I hope you become filthy stinkin' rich and one day rule the world!
Take care, Dan Keenan

PHP has a great, active community, and thanks to all of them it is what it is. From my own perspective, the difference between PHP and Ruby or Java, for example, is that it evolves really fast and provides the market with the solutions it demands. I fear that a lot of people would turn their backs on PHP because it moves too quickly and many servers are stuck with old code, PHP 4 for instance. But anybody who is serious about development knows that code needs maintenance, and that the market moves, and thus the implementation should too. Kazuyoshi Tlacaelel

Use ColdFusion and the Model-Glue MVC framework. It is better than RoR and PHP. CD Baby could be written even faster than in PHP using mighty Adobe ColdFusion.

Jeremy Kemper effectively answers this with a little more dash of truth than what you get in this flamebait article. Basically, Derek is an ego; he couldn't understand Ruby or Rails, and wanted the Rails people to code his crappy site in Ruby that looked like PHP... Check out Kemper's response to this at 37signals:

Agree with the author: PHP is about how good the programmer is.

Actually you are not making a point about Ruby vs PHP; you are merely stating your taste. Anyway, try Gluon () and see if you like it better. Probably not.

Derek, as at 8-Nov-2007, is cdbaby.com running PHP or RoR? I'm wondering because in my (in)experience, the way the links to other pages are made makes it look like it's using a Rails-type framework. From your article though, it seems pretty clear that it's running PHP. I'm just asking because I also can't find any PHP-type headers. It seems like you've got a great "cute and powerful templating system" - it would be great if it could be shared; please consider sharing the code for the benefit of others. Thank you! Mark

I totally agree with you.
I think that the problem is not the language but you! So if you are a really good programmer, you can write in any language (PHP, Ruby etc..), and the most important thing is: if you have something done, there are no reasons to rewrite it in another language. But yes, you should improve your level as a programmer to do things better with what you already have.

It's all about the databases, baby; everything else is an implementation detail.

"It's the most beautiful PHP I've ever written, all wonderfully MVC and DRY, and I owe it all to Rails." Could you post this MVC framework you built, or is it built upon another framework like Zend, etc?

You simply cannot beat PHP if you do it right! Of course there are teething problems in PHP, such as the inconsistency in the function names, and all the functions in the base namespace - thanks to no namespaces - but if you plan your project using UML and stick to the GoF design patterns, then PHP is an exceptionally powerful language.

Does Ruby give any wisdom?

The article was good, and the best part is the comments you got! I am not pointing to any specific thing... but I was shocked to find how fast you jumped from well-established, stable PHP to an upcoming Ruby, and at the same pace reverted back. As far as my experience goes, if you had planned everything properly, you could have been successful with your PHP -> Ruby shift. I am curious to know: what are the difficulties you faced?

Derek: Welcome back to the PHP army. I'm happy! I really missed the posts you used to write about MySQL queries and PHP stuff in your early days. I hope you share your new style of PHP coding with the rest of us PHP programmers who still do it old school.

Why are you people getting worked up? He's comparing a FRAMEWORK to a LANGUAGE. If this article had any merit at all (not trying to come off as a fanboy -- I'm in your exact shoes) it would compare, say, CakePHP to Rails.
If you wrote a framework for cdbaby.com from scratch in Ruby, it would be no different than writing it from scratch in PHP. Apples to oranges, my friend.

> If you wrote a framework for cdbaby.com from scratch in ruby it would be no different than writing it from scratch in PHP.

Exactly, Rob! That was my whole point. The language doesn't matter that much. They can all do the same thing. The reason it made my life easier to stick with PHP is because EVERYTHING else (hundreds of shell scripts, our backend accounting system, our entire digital distribution engine, our members' login area, our intranet, and more) - was ALL in PHP. By writing the new system in my old language, I was able to integrate cleanly with all of our existing stuff, converting piece by piece while bringing this to a fast launch.

Tie yourself up, then go use a framework. When we hack on Squid and nginx to get more load capacity, so many people here are talking about ORM and OO? Do you know how many 'call stacks' you have produced when using loose coupling? Do you know how many 'joins' you have produced when using ORM?

Derek, thanks for the article. Welcome back to PHP ;)

Did I click submit last time? Anyhow, thanks for sharing your experience. I am thinking of investing time in learning Ruby on Rails as a replacement for PHP for future web projects, so before doing that, I'm glad I could read others' experiences here. I can see that Ruby will probably help me understand OOP and design patterns better, and if for that reason only, it seems that RoR is worth learning, because: 'A language that doesn't affect the way you think about programming is not worth knowing.' -- Alan Perlis. Considering that its syntax is simple, it seems that it has a place next to Python and Scheme as a language of choice for novice programmers. I'm not sure that I will give it a 2-year trial period like you did, but I will definitely invest a few weeks to see if it works for me.
Regards

Funny how you stirred up the Rails zealots. I tried to learn Rails and I stopped, as I saw no point in learning it for the work I do. Most pleasing thing to me: your confession that you love SQL. It became a paradigm that using SQL in your code is bad. I think this is only because it is quite easy to lock away SQL access in a "DB layer" - if you have only a hammer, everything looks like a nail - this is why it is done. I was designer, coder and maintainer (quite a large team & codebase) of an application entirely written in PL/SQL - we NEVER had any serious problem refactoring our SQL code. Just because something is easy to do doesn't mean that you should do it or that it is reasonable to do it. Projects like LINQ are pointing in the right direction. Adding an abstraction mustn't sacrifice performance, especially not when it affects the DB, which is the bottleneck in high-traffic applications.

I totally agree with you! I love your quote about programming languages... I guess I can proudly say that professionally, I am married to PHP. Happy coding! Regards, Rochak

Great.. I just can't wait to show this off to our top level, who are planning to wind up the PHP team. Well, I have always been in PHP, and for my own as well as others' reference, put up a blog at. Sure.. there are several reasons why we chose PHP for our server-side coding when we planned our logistics management solution. Basically the small footprint, greater flexibility, and above everything the gap between knowing and not knowing.. the outcome..

Yeah, good to see some PHP appreciation again. I got started in web development using PHP 3 and it saved my ass back then, as I was able to quickly get something going in spite of not knowing what I was doing. But I always kept hearing about how Oracle's solutions were the real thing, used by professionals. In the meanwhile, PHP sites sprang up everywhere, but I was still unsure about my choice. Anyway, this was 7 years ago.
In the meanwhile I've worked for an Oracle shop and now I'm working on a huge, crappy ASP.NET application. I've got lots of C++, Delphi, PL/SQL and C# under my belt now. Bought the Rails book but didn't get very far. Anyway, I laugh at everyone dismissing PHP as "not professional" now. Even Oracle has changed direction after seeing how incredibly useful PHP really is. In my spare time, I've started to dabble around with PHP again and I love it I love it I love it. I'm so sick of doing things the "right" way. I got started programming hacking together little games on the VC20 and later on the AMIGA. Programming .NET has kind of taken this love for programming away. The light and hassle-free approach of PHP has brought this love back again. Yeahhh.

Hi, I was thinking of learning the advantages and disadvantages of Ruby on Rails over PHP by re-implementing a simple PHP/MySQL-based links aggregator application that has only a few hundred lines of code. However, very soon I got discouraged, as setting up the RoR framework and deploying the application seems to be a bigger problem than writing the application itself. I kind of like just being able to experiment and make changes on the live site, and that seems to be a big advantage of PHP. In your opinion, what should be the size and complexity of a project that would justify the use of the RoR framework over PHP?

I'm a big Rails fan (also, am a convert from PHP) and do everything in Rails, but it's cool that you gave it a try and stuck with what you're comfortable with. Rails is for me, but it might not be for you. Also, I agree that Rails is much more difficult to like when you have to deal with integration with existing code. -Aamer

With your original PHP code cleaned up and re-structured, I wonder if it will be much more efficient now to rewrite in R&R...

EXCELLENT ARTICLE!! I am a business owner with a legacy app in PHP facing a similar dilemma.
I chose to refactor the data model; upgrade the platform to the latest versions of the OS, web server, etc.; then rewrite the code ... in PHP. Here's why:

1. I don't know shlt about OO programming
2. I don't know shlt about Ruby
3. I had to maintain other legacy PHP apps and didn't want to run multiple platforms

Now, that sounds like hell to the kids who only know OO, whose second language was Ruby (after Java), and who work for consulting firms on projects with six-month time-frames. But I am an old man (over 30). I learned procedural programming; my first language was Pascal and my second C (then Perl and PHP); and I own a business with bigger issues than how beautiful the code that nobody looks at is. And the fact is there is a lot of stuff that runs just fine in PHP. Even more amazing, there is stuff that runs just fine in C. Even COBOL. And this is stuff that runs investment banks, satellites, and military applications. Not merely Wango Jango Zoobdy Dooby websites. In other words, in the real world you gotta go with the tools you know how to use best, and you rarely get to start with a clean sheet of paper. So for me, and for the author of this post, PHP makes the most sense. P.S. The nasty comments here are immature. Grow up, kids.

Kemper's comments on this, like Derek's original article, are modest, thoughtful, and well-reasoned. This comments section, meanwhile, does no great favors to the cause of Ruby evangelism, which is, as I understand it, supposed to advance a sort of mysticism and zen-like contemplative calm, rather than smug ignorance. But then, I guess we've all known our share of college Buddhists.

Holy Christ! I hope you learned some risk/benefit analysis along the way. This whole experience sounds remarkably naive.

7 reasons I switched back to Windows after 2 years on OS X :)

I love sucks

Use PHP if you like it; I'll use Rails. More comfortable for me; script/generate is a magic thing. I Love Rails. I hate this ;) Alec!
I only learned from a to s in school. Every word which contains t to z is not my word.

OK, there are programmers who only use Java, PHP, and that's all. They're not programmers in my eyes. Programmers have an abstract view, and try to find the language which is best for the abstraction. If I cut off one of my arms, I won't be a programmer, 'cause I learned typing with two arms. It's not a principle.

PHP is a language, Rails is a framework. This is the main problem!

Well, PHP +++ POOOOWERFULL framework ???? symfony was my answer... Simply GREAT !!!

I think the comments here demonstrate the number one reason not to use Rails: most Rails fanatics are arrogant morons. I know a couple who aren't, but they appear to be the exception. If you doubt me, let's hear what one of the former shining stars of the Ruby/Rails communities had to say. As far as doing things Rails wasn't "meant to do" (I can't imagine what that might be in a generic web framework), I get the feeling that perhaps Derek's developers weren't quite as amazing as he likes to think. And I mean really, no matter how bad Rails sucks... two YEARS? Yikes.

I've also gone through this cycle, although in the end I settled on Python and Django. At first I tried Rails because of the hype, then didn't have time to learn things properly, so I dumped it in favor of an equally powerful but easier-to-grok framework. I agree with the Rubyists about the database being an implementation detail. I also agree with others that sometimes the ORM does stupid things, after which it is necessary to get one's hands dirty with raw SQL. It may well be that it's quite impossible to get the ORM layer right in every case. In most cases, however, it's a lot more convenient and clean to use the objects.

Hi, I am not very technical, but I did notice that Rails "2.0" came out recently. In anyone's opinion, does Rails 2.0 address some of the Rails 1.x shortcomings?
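A recurring question in this thread is what a hand-rolled "MVC in a few dozen lines of PHP" might actually look like ("Could you post this MVC framework you built?"). The sketch below is hypothetical - it is not Derek's code, and the route format and class names are invented - but it shows the core idea: a tiny front controller that maps a URL to a controller method:

```php
<?php
// Hypothetical front controller, for illustration only.
// Maps "/album/show/42" to AlbumController::show("42").
function dispatch(string $uri, array $controllers): string {
    $parts  = array_values(array_filter(explode('/', $uri), 'strlen'));
    $name   = $parts[0] ?? 'home';
    $action = $parts[1] ?? 'index';
    $args   = array_slice($parts, 2);

    if (!isset($controllers[$name]) || !method_exists($controllers[$name], $action)) {
        return '404';
    }
    // The controller returns rendered output; a real app would hand
    // its data to a template (the "view") instead of building a string.
    return $controllers[$name]->$action(...$args);
}

class AlbumController {
    public function show(string $id): string {
        return "album #$id";
    }
}

echo dispatch('/album/show/42', ['album' => new AlbumController()]); // album #42
```

Real frameworks add routing tables, request/response objects, and view rendering on top, but the dispatch loop itself really can fit in well under a hundred lines, which is the point several PHP advocates here are making.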
Interesting post, but is it really good taste (or professional) to use profanity, especially on a major publisher's site? It doesn't exactly make me think too highly of O'Reilly. I realize I'm fighting against the wind.

Derek, I really enjoyed your post. Ruby has some nice bells and a whistle in its ass, but all prepackaged frameworks have inherent limitations. They're designed by developers with the talent required to build a solid framework. They work perfectly for their progenitor(s), but they still share limitations that require custom hacking. Customizing requires more than mere framework knowledge. Relying on Ruby/Rails talent without a deep understanding of OO PHP (or Java, et al.) is... amateur.

I am so glad I read your post. Before reading it I was in exactly the same situation. I thought my ASP code was ugly and was planning to rewrite everything in Ruby. Though I personally find PHP a lot more powerful to work with than ASP, I would still think twice before migrating to Ruby now. Maybe someday I'll give Ruby a try. For the time being, let's make ASP work.

@Prateek: You probably should move from ASP to PHP when you get a chance. There are actually translators just for migrating ASP code to PHP. The benefits? Separation of programming logic from display code, for one. Your server situation will get a lot cheaper (Apache over IIS) too. If I knew more about ASP, I'm sure I could tell you more about this comparison.

Well, I went back to Rails and Ruby. I like it. I like Prototype/script.aculo.us over jQuery. I like Ruby over Python. I still like the whole deal. Despite having been through all this, read all the above, dumped Rails last summer, deployed CodeIgniter, deployed Zend Core, contemplated Django - yes, I'm still interested in the slowest framework on the slowest scripting language with what's got to be the most bloated deployment model ever (one cluster per application). How do you explain that? I can't be the only idiot...

Interesting POV -- thanks.
I've always suspected that 37signals conflates the benefits of application 'opinionated-ness/lack-of-customizability' with the supposed benefits of framework 'opinionated-ness/lack-of-customizability'.

I'd never let Gideon move the servers from PHP/MySQL to something more secure and reliable. Why do you ask? Cause I'm a cheap ass despite the 35 million in VC funding I took, that's why.

I love PHP much more than ASP, but I still don't fit with SQL. It's not good that there isn't a PHP course in Israel. PHP is much more powerful than ASP.

Is PHP really good, or Ruby on Rails... or is there anything else apart from these two?

I was nearly in the same situation. I've been doing PHP sites now for 7 years. Two years ago, I decided to do some Rails coding. But it was really a coding hell. A lot of books try to tell you Rails is easy to use. It isn't. You cannot do Rails without Ruby. You have to learn Ruby first; trying to use a framework without knowledge of the basics just doesn't work. So now I have learned Ruby and I love it. After that I started to use the Rails framework one more time, and now it is easy. By the way, using PHP was also not always easy. Starting with terrible code, it took a long time to find the right "frameworks", e.g. Smarty, PEAR and so on. Sometimes I found bugs after weeks of coding and then had to change to another "framework". That is not even better, is it?

I like SQL, too. I love writing SQL scripts and stored procs when that's what is necessary and appropriate. The need for this most often arises when I need to write maintenance scripts on the database, such as data migration or some other type of bulk processing. Much though I like SQL, I LOVE Hibernate. I love *configuring* my persistence layer, rather than writing it. I like knowing that I'm safe (as far as anyone knows) from all sorts of security vulnerabilities that tend to creep into home-rolled persistence layers. (Getting this right can be trickier than you may realize.)
I also like the performance improvements you get with no extra effort as a result of Hibernate's out-of-the-box caching. In one of my projects, Hibernate improved the perceived performance of a highly complex deep-clone operation I had to support by a factor of about 10,000 and eliminated a sizable chunk of complex, hard-to-maintain persistence code. And I LOVE being able to guard against regression while refactoring with automated unit testing. Test-Driven Development also helps warn you when you're getting tier leakage and concern slush. You know you're getting off track--starting to write unmaintainable code--when the unit tests are getting difficult to write. You wrote an MVC framework in 80 lines of PHP scripting? Wow. I don't think I could do that. But I don't need to, because I have Spring, which also provides really nice IoC and AOP containers as well. This way, I (mostly) configure how my application snaps together and operates, rather than write all that tedious code that creates objects, wires them together, and addresses cross-cutting concerns such as security. (This also helps keep my code easy to unit-test and maintain.) I'm guessing that, if your web app is only 12,000 lines of code (even 12,000 really good, clean ones), then it's probably not bigger than what I might call a medium-size project. And for that, yes, a scripting language such as PHP might be the way to go if you don't anticipate a lot of changes to functional or technical requirements, or if you're willing to write the site over, mostly from scratch, should the need for such requirements changes arise. For bigger projects, projects whose business requirements are extremely complex (e.g. a content management system) or projects whose requirements are still very much in flux, I find the Spring/Hibernate combination to be amazingly flexible and also surprisingly quick and easy to use. The only real cost of admission is learning to use these tools well, which can take some time.
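For scale, the "80 lines of PHP" MVC idea mentioned above can be sketched in any language. Here is a hedged toy version in Ruby: the `Router` and `AlbumsController` names, the routes, and the data are all invented for illustration, and this is not the commenter's actual framework.

```ruby
# Toy MVC-style dispatcher, illustrative only: a router maps paths to
# controller actions; an action builds a model and renders a trivial view.
class Router
  def initialize
    @routes = {}
  end

  def map(path, controller, action)
    @routes[path] = [controller, action]
  end

  def dispatch(path)
    controller, action = @routes[path]
    return "404 Not Found" unless controller
    controller.new.public_send(action)
  end
end

class AlbumsController
  def index
    model = ["Blue", "Kind of Blue"]   # stand-in for a real model layer
    render(model)
  end

  private

  def render(albums)                   # trivial "view": model -> text
    albums.map { |a| "* #{a}" }.join("\n")
  end
end

router = Router.new
router.map("/albums", AlbumsController, :index)
puts router.dispatch("/albums")
puts router.dispatch("/nowhere")
```

The point is only that the M/V/C split is a discipline, not a product; whether it lives in 80 lines of PHP or a toy Ruby class, the routing and rendering responsibilities stay separated.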
Thanks for an interesting article! Wow, so many of the same sentiments here. Also not surprising that the Rails fanboys are coming out in full force to show their love. I tried Rails for a while; it's neat and all, but you're right: it can't do anything PHP can't do. Is Ruby prettier? Yes. Do I care? Honestly, no. I write code to get stuff done, and I care more about the bottom line than debating code aesthetics, so practicality matters. I can get more done faster in PHP with one of the many excellent frameworks available for it than I can in RoR, simply because I don't know RoR well enough, and it's just not worth it for me presently to get to know it better. Great to see more people daring to step off the Rails bandwagon and realizing that while it's certainly worthy of a lot of praise, it's not the be-all-end-all. I'm sure we'd all love to be elite techno-hipsters who write Rails code all day in TextMate on our brand new MacBook Air, but coming from the more serious side of the business world, the bottom line always takes precedence over what's trendy. PHP is about choice... use a framework or not... write MVC code or not. Quick and dirty or slow and methodical or anywhere in between. Choice is freedom. Interact natively with SQL or use some ORM. Do what you think is best. Change it later. Rails is about Rails. No choice... follow the yellow line... conform... regurgitate our dogma... drink our Kool-Aid. Rails fanatics are Rails fanatics... Regardless of your likes and dislikes, your web site does not support Firefox for new users logging in, AND the look and feel of your web site screams SPAM/SPOOF/Oops-how-did-I-get-to-this-page phenomena. Thank God I read this article. In my own company we were discussing throwing out PHP and adopting Ruby on Rails. Derek, this is a rather *old* article, though I must confess that if you spent 2 years on CDBaby.com, then your planning must not be so good.
It's also funny though, how you're comparing a framework with a language. Oh yes, and I would like to add to my other post that many users here are interpreting this article the wrong way with their "Thank you for posting this, now I won't ever use Ruby/RoR because it didn't work out for you." No... Logically, if you're letting the article do the talking for you, then you technically don't know what the person is talking about. We all have different needs, and a change of programming language is not the need. You can make everything in Ruby that you can create in PHP, in a more elegant way. PHP has only recently adopted OOP, and is not very efficient at it. On the other hand, Ruby was made for OOP from the beginning. I've tried Python myself, and the organization and design of it is really nice. I use both Ruby and PHP, though I find Ruby to be more efficient than PHP is. mod_php isn't even implemented well, for God's sake. Why do you people think that Apache crashes so often? Try PHP under lighttpd and you will see a great difference. I wouldn't touch mod_ruby with a ten-foot pole, but honestly, anyone posting these biased comments saying that RoR is a crappy, inflexible MVC framework is obviously a jackass who hasn't truly experienced what Rails is, or probably just jumped aboard Rails without knowing diddly about Ruby itself. In addition, I always see all of these PHP frameworks jumping out of nowhere, such as CakePHP, saying that they are based off of Rails. Also, your "RoR has a whole bunch of stuff I don't need" argument is very, very weak. RoR is a full-stack framework that comes with what a developer needs to build an application rapidly. Why else do you think frameworks have stuff you don't care about? Have you actually seen all of the functions in PHP that nobody even uses? Functions with different names but the same purposes? If PHP were not such a messy language, I would actually put more use to it.
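The "Ruby was made for OOP from the beginning" point is easy to demonstrate: integers, strings and even nil are full objects, and core classes are open to extension. A small illustrative sketch (all values here are arbitrary):

```ruby
# Sketch of Ruby's "everything is an object" design, in contrast to
# languages where scalars are primitive values.

puts 42.class        # Integer
puts "cd baby".class # String
puts nil.class       # NilClass

# Even iteration hangs off the value itself:
3.times { |i| puts "disc #{i}" }

# Core classes are open, so behaviour can be added to them directly:
class Integer
  def double
    self * 2
  end
end

puts 21.double # 42
```

Whether open classes are a feature or a foot-gun is exactly the kind of taste question this thread argues about, but the uniform object model itself is not in dispute.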
The developers don't even seem to put any actual dedication into their language, and with so many useless functions around, it scares programmers away from writing functions that actually do better. I've been using RoR for about a year, and I've developed many applications with ease, and have not found any problems with it at all. Maybe you were just using the wrong method. A bad language a bad program does not make. Dear, these lines you wrote are pretty interesting; we had a similar experience, but with PHP frameworks like Cake or Symfony. Your message has been inspiring. Come paint my house. Holy shit, that's funny. If you are ever going to go back to Rails, try PHPonTrax (also known as PHPonRails). I was impressed by Ruby when I saw that the basic variable types were objects. But PHP is my language of choice right now. Man, fuck Ruby! It's all about PHP 13 MVC ZF clicka! I'm thinking you are joking. Personally I find Ruby easier to read than PHP, and I think I'm more productive because of this. Ruby has changed my point of view about programming, where a Customer isn't an SQL execution (INSERT, UPDATE...) but an entity with all its properties. Nice to hear you stick with real PHP. Hopefully you did not use OOP. If you did, you could probably cut the number of lines in half by going without OOP. Not to sound all high and mighty, but no PHP project should take 2 months. Ever. After I read this I thought, Hmmm... I wish I had read this before I did the same exact thing. Great post. Especially true about learning new languages; it never hurts to bring good concepts from other languages back to home base. Doing years of the same thing rarely helps improve anything; coding in Rails/Ruby for a year was worth many more years of doing the same thing over and over. Not exactly time-effective, but overall I think it was worth it. Great article dude. Very insightful with lots of excellent points. BTW, I've heard a lot of great things about CDBaby waaay back around 2002 or so.
Are you guys still running on OpenBSD? And how's PHP5 web serving holding up on it? Peacies. MaxTheITpro: thanks! Not using OpenBSD on the server anymore. I wish we were, but there were some massive thread problems with MySQL that switching to Linux helped immediately and immensely, even on the exact same hardware. I use OpenBSD at home and on my personal (sivers.org) server and love it. As for PHP5, yeah, it runs cdbaby quite well. Using well-structured OOP style from scratch, it's not such a bad language. (Not beautiful, but it works.) For all new projects (not cdbaby related) I'm using Rails. Nice, very nice :) My friend says: "It's all Active Record!" Nice summary of your findings. To be honest, some of the comments and pure arrogance I've seen are making me shy away from using RoR, since I do not want to subject myself, my company or my programmers to some of the attitudes shown here. You should have used PHP on Trax. It has all the benefits of Rails without the hassle of Ruby. Pfft, you should've been using COBOL on COGS! Why did you change from PHP to Rails? Read about PHP. Whatever language you are interested in... just make sure of one thing: believe in it and have fun with it... This is a problem with any web framework. I agree, but it's not a hit against Ruby in this case, just a hit against Rails. I think PHP is better for ultra-massive web applications. It's very easy to find PHP programmers who already have experience scaling large web applications. I love the Ruby on Rails approach. Ruby on Rails teaches us to write great code. Ruby is a language, Rails is a framework, PHP is a language; PHP vs. Rails is not a fair comparison. Derek, PHP is better!!!!!!! As you ended up concluding, don't rewrite ().
Yes, Ruby may be the better language (which IMO it is), but technology is only one aspect of the project. And it's not even the most important part. That's the problem with this industry: there is too much focus on the technology. The technology is just the tool for achieving the goals. Yeah, some tools are better than others for different jobs at a technical level, but this is just the tip of the iceberg. Changing language is a big change. It doesn't take much to pick up the basics, but the idioms and being familiar with the libraries and API take time. And then during the transitional period you have to deal with two technology stacks at the same time. This is just mentioning some of the issues. Doing the change in one big go is also a terrible idea. With web stuff (internet and intranet) you can replace things a page at a time. It's disappointing that it took you two years to realize this. Well, I hope you learn not to let hype get the better of you next time. There is no silver bullet. Your premise is flawed. It's like saying "Why I switched back to Perl after two years on Drupal" or "Why I switched back to Budweiser after two years on Camel Lights". You mention the word "Rails" 29 times but "Ruby" only once. Rails is just another MVC framework. I'll switch to Merb before I return to the hell of non-namespaced PHP land. Rails pisses me off too (e.g. its non-JavaScript-agnosticism), but I'll switch to Merb before I give up the beauty of Ruby. If you've gotta switch languages, try out Python instead. [edit] You mention "Ruby" nine times -- not once. Great insight. I'm still interested in Rails, but for experimenting with, not actually developing with. Dear sir, all of your concepts and comments might be right. PHP makes developing a website an easy task, and can use most of the available technology.
But if anyone wants to develop a website for a small device, like mobile, is PHP more suitable than Ruby or RoR? I have no practical experience with Ruby, but recently I've wanted to learn RoR, so I want to know about it. If you reply to me about it, I'll be grateful. Thank you for sharing your logical view. Be well, sir. roman Now that I've actually read the article, it is obvious you didn't mean your post as flamebait, but you are lacking specific examples of how Rails didn't meet your needs. Well now, you did become a better programmer due to Rails :) PHP is web-centric and generic, Rails is a framework, and Ruby is a generic language. With a good object toolbox (mine is 8 years mature), and coding strict MVC (that's right, no framework needed), PHP is extremely hard to beat for rapid application development with very small numbers of defects, and yes, the code is beautiful. Give me a shell and vim on a *NIX operating system (BSD or any Linux) that has a compiler, and I'll have the whole LAMP stack installed (compiled from source if you prefer) along with my toolkit, and be up and coding productively in roughly 2 hours. This includes source download time and GPG verification of the source. I'll have 5 use cases (done in strict MVC using OOC) coded by the time you start coding in RoR. Yes, if you get to use Rails, I get to use my object toolkit ; ) At the end of the day, writing maintainable code and making your users happy trumps any technological "advantage" a framework or new language has to offer. I know 7 languages (C, Perl, PHP, VB, ASP, J2EE, and stored proc languages for PGSQL, MySQL, PL/SQL, and TSQL) and they all do the same things, without exception. Ruby itself is just another language.
It's nothing special, and Rails is just another framework (of thousands). There are frameworks for almost every language. Without exception, with every framework I've ever used, I spent more time figuring out how to get around limitations than I did coding productively. It's a wash IMHO. At least with straight code, someone can walk in off the street and start maintaining it with no specialized knowledge other than the language it's coded in. With a framework it takes weeks to get up to speed. If you know low-level IO, and how the protocols work, the language is as irrelevant to the application's quality as what color shirt you wear while coding it. Low-level IO and protocol knowledge can be used to add missing features to the language. PHP and Perl support direct socket handling, OOP, and low-level IO, so to me they have 0 limitations. None. In a few years it will be just another flavor of the month and PHP will still be widely used. People said the same stuff about Perl when PHP came around, and Perl isn't going anywhere ;) That being said, I'm learning it, if for nothing else, so that if there's a Ruby job, I can take it if I need to. I learned J2EE and C# for the same reason. The best languages are the ones that get you a check, and I like learning them. From what I've learned so far with Ruby, it's usually like using a sledgehammer to kill an ant. I can say I'll never use it for building an app from scratch, unless the boss tells me to :D Then I'll approach it with all the enthusiastic fanaticism of a fanboi and produce quality code. That's what I get paid for, to do what the boss says. You can produce good code in any language. PHP is no exception. -Viz I'm glad to see I'm not the only one. I realized after 1.5 years that PHP is actually faster. It does exactly what I tell it to. With Rails I got the impression that I had to mold to its way of doing things. Also, PHP lets me write up small scripts fast and deploy just by putting them in public_html.
Rails is not that simple; it took me quite a while. Noting from a noob's point of view, it was stupidly hard to figure out how to launch, when with PHP it's as easy as sticking it in public_html. Ruby is a great language. Rails is great too. But PHP does things fine, and it's far easier to understand. You want something done, use PHP. Another thing with PHP and Rails is that the former has a far larger community and a far larger number of libraries and prewritten stuff. Perl and Python have far more prewritten stuff. Rails, however, not exactly so. RubyForge had some neat stuff, but it's nowhere near PHP's repository of prewritten goods and libraries. Derek, thanks for sharing your story. You have saved me MONTHS of work, if not more. I have a large body of code I wrote in PHP and was thinking about rewriting it in RoR. Your point about having to trick or force RoR into doing things you can do easily with PHP (or any other 3GL-style language?) sealed the deal. You should write a book -- seriously! First of all I want to make a very basic statement. People who religiously stick to one technology (even though they may be geniuses) have no sense when it comes to business in a capitalist world. That being said, Ruby is awesome. Not awesome for everything, but awesome. I am a relative Ruby nuby (hahaha) and I use Ruby for just about every scripting task I need, plus I have almost replaced Ant with it. Ruby is only a letdown if you hold it next to (Jesus, Buddha, Elvis, whatever you hold in high regard). Many people who devote themselves to "communities" do this. Most of the learning I do and information I get is from people who do exactly this. One lesson I will not accept from them is blind devotion. Part 2: I would never, ever, have used Ruby on Rails for an enterprise app. Why? Well first, try Googling tutorials on how to do something with a Ruby library (e.g. Rant). What do you find? Next to nothing (and this website, for whatever it is worth).
Try a Google search for MSBuild or SOAP clients in C#. A gazillion results, some pro, some not. Ruby is a teenager: smart, rebellious, inexperienced, but it will probably grow up to be a good citizen one day. There is fun code and there is give-me-money-to-buy-food code. For the latter, I will use C# or Java, no matter how "uncool Microsoft is", No MaTtEr WhAt a LaMeR I aM FoR UsIng It. I don't care; I will be done before you, and playing my guitar while you are trying to debug something that has no documentation and only one blog by some guy named Jorg in Sweden as a reference. Ruby is fresh, it is new, and I already see it changing things. The point about it being the missing link between something big is very perceptive; maybe Ruby 7.0 will be that something big. I am a much better programmer because of Ruby. I think differently now, because it forced me to learn something weird and different (in comparison to C#, Java, C++). I already see Ruby's footprint all over LINQ. But in closing, this is a good example of why people are not always willing to adopt something that is sold as the Gospel, not because they are MiCrOsOft LaMerZ or WiNDoZers, but because things like RoR tend to burn bright, burn out, and on to the next thing. They change things in their wake, but people are unwilling to stake their business on this sentiment; after all, they are concerned with the business, not the gears that churn it out. PHP never let me down; I was able to do anything I wanted, and all of my projects are in PHP. Sometimes people can't see how powerful PHP is. I don't think it's missing anything I ever needed. Glad to hear you're back to PHP :) This thread is too serious. Here is one joke. Hello my friend, your site is very good!
Great article. I was thinking the same: migrate all my clients to Ruby and Rails and use the experience to really learn the language, but I wasn't quite sure about doing it. Now I agree with you. I have been in computers for more than 20 years; I love Delphi, PHP, and JavaScript, and I can do what I want with these languages... thank you for your article... Like you, I spent about two years with Rails. I haven't abandoned it completely but definitely feel that PHP is better in many cases. On some sites, you are writing a similar amount of code whether you're using CodeIgniter or Rails. The difference is that Rails performance is slower and deployment is often a nightmare. If speed and deployment can be improved, then Rails definitely has a future. @BOB "ORM and especially Active Record is fool's gold for people who will never learn what it takes to actually get the job done." Duh, in most enterprise projects, ORM is a MUST. Oh, by the way, nice name you have there. Reminds me of a Microsoft project years back. A complete FAILURE! Screw 'em both. PHP is so lost in its own mess that people think it needs a templating system, apparently forgetting that it IS a templating system. Ruby is opinionated, and that wouldn't be so bad if it weren't so common that its opinions are effing *wrong*. That, and it's a resource hog, slow, and like 90% of everyone who uses the phrase 'MVC' it doesn't understand that TLA one bit. Randomly assigning M, V and C to things is NOT going to make your code magically better. And to make it all worse, Rails is SO bloody addicted to freaking ORMs. ORMs SUCK ASS. There's no reason to avoid database abstraction -- but don't do it at the cost of losing your frigging real access to the database. It's a MUCH better and more sensible design to abstract it away by storing prepared statements and executing them as methods in your language of choice. So Rails abstracts databases too much and PHP can't abstract anything at all. And finally...
whether all you Johnny-come-lately language addicts and "trend-geeks" who jump on the latest bandwagon because someone said it was a good idea -- and you immediately believed them because you don't have enough confidence in your own skills to make a real rational judgement about it -- like it or not, Uncle Larry's Oyster Gem is still churning away keeping huge chunks of the web running: quickly, efficiently, and readably, as long as you're not too daft to handle greps, maps and short-circuit logic. The Perl marches on (and don't give me any crap about startup overhead -- mod_perl has been around for eleven years now).
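One commenter above suggests abstracting the database by storing prepared statements and executing them as methods in your language of choice. A hedged sketch of that idea in Ruby follows; the `Statements` class, the statement names, and the echoing executor are all invented for illustration, and a real version would hand the SQL and parameters to a driver such as pg or mysql2.

```ruby
# Hedged sketch: wrap named, parameterised SQL statements in methods,
# so callers get abstraction without losing access to real SQL.
class Statements
  STATEMENTS = {
    find_artist: "SELECT * FROM artists WHERE id = ?",
    list_albums: "SELECT * FROM albums WHERE artist_id = ? ORDER BY year",
  }.freeze

  def initialize(&executor)
    @executor = executor   # injected so the sketch stays driver-agnostic
  end

  # Define one method per named statement at class-definition time.
  STATEMENTS.each do |name, sql|
    define_method(name) { |*params| @executor.call(sql, params) }
  end
end

# Usage: inject a trivial executor that just echoes what it would run.
db = Statements.new { |sql, params| "#{sql} -- #{params.inspect}" }
puts db.find_artist(7)
# SELECT * FROM artists WHERE id = ? -- [7]
```

The design point is that the SQL stays visible and hand-tunable, while callers see only plain method calls: abstraction without an ORM in the middle.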
http://www.oreillynet.com/ruby/blog/2007/09/7_reasons_i_switched_back_to_p_1.html
Next month: March 1996 These lottery pages have been exclusive to the Connect server for a full 6 months now, so I've decided that it's time to delete the old "these pages have moved" message page that was on our Departmental WWW server (where these pages had resided until 31st August 1995). This should hopefully force any auto-checked links to that old location to be flagged as incorrect and be fixed. I've also been chasing up on some of the more important sites (i.e. not people's personal home pages !) that are still quoting the old URL and e-mailing their maintainers to ask for an update. BTW, the lottery pages are not the most popular pages on Connect, you'll be surprised to learn ! They are #2 though, trailing well behind the Liverpool FC Mighty Reds pages. Well, there's a lot of international Liverpool soccer fans and you can't play the UK lottery outside this country, so that's my excuse :-) I changed the individual lottery page background colour yet again, but this time using Mosaic PC 2.0 as the benchmark - it's now a sort of aquamarine colour, which doesn't dither badly and looks reasonably OK. Just in case there's any dispute over the use of the term Lucky Dip, I thought I'd start using it on these pages from today onwards before Camelot put the Lucky Dip button on their lottery terminals next month. This means the old Quick Pick page no longer exists and has been replaced by a new Lucky Dip page, which does exactly the same thing of course. I've made fractional brightness adjustments to some of the pastel backgrounds to hopefully avoid dithering. It looks like the pages updated incorrectly between 6.01pm and 11.01pm (two separate virtual lottery page auto-updates) today, actually deleting almost everything under the lottery tree (except the virtual lottery entry log stuff, which I've now moved away from the WWW tree so you can no longer see them !). 
Diagnosis indicates that I was probably to blame - I was thrashing the Connect server trying to gather lottery stats [which I've now had to abandon - the WWW log file for February was 125MB long !] and I think it hit the update software, which got stroppy and trashed everything. I've adjusted the dir/file deletion limits downwards in the update software so that they are below the current lottery tree size, which should avoid future problems. Worse was to come as the 9.00pm ITV teletext request came back at 9.05pm with the winning numbers, but these were ignored in favour of the faulty BBC 2 line. Next came the 10.00pm BBC 2 request (came back at 10.06pm), but that still had a blank line after this week's date (this was incredibly poor of BBC 2 teletext, but I should have ignored this faulty line), meaning that all bets were still off. BBC 2 teletext finally fixed their blank results line (probably during Sunday), but the damage had been done on the Saturday evening (because I don't re-request BBC 2 teletext pages on Sunday). To circumvent this ever happening again, I've taken these extra measures: After looking at the times when teletext info is updated, I've moved the second ITV teletext mail request back a minute to 9.01pm, moved the second BBC 2 teletext request forward to 9.25pm and set the "failure message" time to be 10.00pm now that we know that BBC 2 teletext are fairly dozy. Fixed provisional information credit code so that if the winning numbers and jackpot amount both came from the same teletext source (ITV in other words), then the credit is collapsed to a single reference (this was done for WWW sites, but not for teletext). The final step was to test all these changes by simulating Saturday's incoming e-mail (I took a copy of the final Saturday mailbox, reset it, added a mail message, ran the new parsing code and then repeated the process until all the e-mail was exhausted).
The new code ignored the BBC 2 teletext blank line nonsense and switched to correct ITV teletext winning numbers for the 9.05pm update and the jackpot update at around 10.35pm. Studying the dates of the grabbed WWW pages, two concurred at 8.55pm (Might BU and Yearling), so they probably would have been used until 9.05pm. Fixed a long-standing bug with the signing of the balances returned from a Have You Won query. I was initialising a static char for the sign of the balance, which was kept across function calls of course, leading to incorrect balance signing (e.g. "-0") under certain rare conditions. Darkened the red background even further in the lottery links section, which makes the green links more readable (on UNIX Netscape 2.0 anyway). Discovered that Mosaic 2.7b2 for X ignores (hard space), which I'm pretty sure is a bug (Netscape 1.X, Netscape 2.0 and UNIX arena are all happy with it). Changed the top level pages' background from skin colour to sky blue, which looks a little better on PC browsers that dither backgrounds. There was a problem with the virtual lottery data file - a couple of submissions had an embedded carriage return at the end of the e-mail address that was submitted, causing utter chaos with the results scanning software. I've fixed the virtual lottery CGI form code so that a trailing CR is stripped from any submitted e-mail address. More serious than this was the mess the balls were in when I came in this morning: 0, 0, 0, 0, 2, ?, 0. Now that's useful :-( It turned out to be a combination of BBC 2 teletext not updating their results correctly and me not anticipating that they'd type in half a line (just the date) and no results ! Fixed multi-comment problem in the index page to the lottery links section - it was putting comments in that file and not the sub-index pages ! 
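The balance-signing bug described above - a static sign variable initialised once and then remembered across calls, yielding results like "-0" - is a classic pitfall in any language with retained state, not just C. A rough Ruby analogue, with invented names (the original code was C, so this is an illustration of the bug pattern, not the actual fix):

```ruby
# Rough analogue of the bug above: state that should be reset on every
# call is initialised only once and survives across calls.
class BalanceFormatter
  def self.buggy_format(balance)
    @sign ||= ""                      # BUG: set once, then remembered forever
    @sign = "-" if balance < 0
    "#{@sign}#{balance.abs}"
  end

  def self.fixed_format(balance)
    sign = balance.negative? ? "-" : ""   # fresh local on every call
    "#{sign}#{balance.abs}"
  end
end

puts BalanceFormatter.buggy_format(-5)  # "-5"
puts BalanceFormatter.buggy_format(0)   # "-0"  <- stale sign, as in the report
puts BalanceFormatter.fixed_format(0)   # "0"
```

As in the original report, the bug only shows up under the rare condition that a negative balance is formatted before a non-negative one, which is why it was long-standing.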
Also removed the extra <P ALIGN=center> that was present just after the <H1> on all automatically generated pages - it should only be present on a few of these pages (it was centring text straight after the H1 header, when it shouldn't have been). Netscape 1.X (and many other browsers) doesn't have <DIV ALIGN=center> but does have <CENTER> (ho hum, US spelling !), so I've added the latter around tables and appropriate form input boxes, but I've also kept the DIV tag in for those truly HTML 3.0 compliant browsers (i.e. Arena). There are still a few colour schemes left to sort out, particularly w.r.t. allowing monochrome users to be able to read the pages (some colours end up as black or white on mono screens when you'd expect them to be the opposite). I tweaked a few link colours to hopefully fix this, but the two main concerns left now are 1) do any browsers horribly dither the backgrounds ? and 2) the individual lottery pages still have a lurid colour combination [cyan on purple with green links] that needs sorting out. Other than that, I'm reasonably happy with what I've done so far. My lottery subscription renewal form came through the post today and I was disappointed to see that you can still only subscribe to a maximum of 2 tickets per week on a form (nothing stopping you getting multiple forms of course). They're still also only allowing you to subscribe for 26 or 52 weeks, which is silly (you should be allowed to subscribe to any number of weeks [with, say, a minimum of 13 weeks]). One interesting point is that they have a box to tick on the form that says this: "If you would like us to send you information about The National Lottery and further National Lottery games, please tick this box...". Needless to say, I ticked it and will hopefully get lots of lottery info through the post.
Anyway, I've sent my cheque off for £104 to cover two tickets a week for a year (adding one to each of the first ticket's numbers for the second ticket), now that the estimated jackpot is typically £10m a week. I thought it was a bug in UNIX Netscape 2.0b6a, but it appears to be in UNIX Netscape 2.0 as well: sometimes transparency of the home page lottery balls is lost now that a coloured background is used. The browser sometimes removes transparency but doesn't use the transparent colour as you might expect - instead, some random colour fills the gif background. A Reload fixes this, but it is distracting. Made sure that the virtual lottery entries are available for counting at 7.30pm on Saturday (I didn't update them after 6.30pm, as an eagle-eyed user spotted). Changed the background of the home page to a "skin" colour (not dissimilar to Connect's, albeit a little darker) instead of the somewhat garish orange colour. Also changed the background (and link colours) of the background info pages to plain white instead of some yucky aquamarine colour I previously had. Finally, after running out of "decent" background colours, I used another blue background (slightly lighter than the What's New background), this time for the virtual lottery pages. Fixed a bug with the WIDTH and HEIGHT attributes of the returned gif when submitting tickets via the Multiple Ticket Checker or Have You Won ? pages. There appeared to be a mail problem at our site at around 8.00pm (two e-mail requests were strangely delayed), meaning that the first update didn't happen until 8.38pm. I also accidentally set the Sunday morning mail request to 12.09am instead of 9.00am, meaning I missed the ITV teletext exact jackpot update. Don't ask me why they are now waiting until Sunday morning to change the jackpot figure at the top of their pages because they know the exact value less than 90 minutes after the draw ! 
Here's the top and bottom of this What's New page in terms of the .bhs file:

#include "master.b"
new_heading(What's New On These Pages: February 1996)
...
go_new_home

Yes, I've used lower case for these macro names otherwise they'd get expanded and I've also started the #include in column 2, so that you can see it ! I have cpp macros for these calls in "master.b", which expand to include all the appropriate HTML. The same bhs system is used for Connect and MerseyWorld pages, but the big difference is that I can also #include the "master.b" macro definitions into my C programs, so that the CGI and page generating code can use the same macros (though I have to #ifdef certain macros in the context of the C program, such as calling printf() to output text and so on). Other problems with the bhs system included: the stripping of spaces around macros (I used to force spaces where I needed them), the dire effect of apostrophes in cpp macros (I have to use ' instead) and the fact that ANSI cpp (/lib/cpp.ansi on HP UNIX) won't substitute macro variables inside double-quoted strings, meaning I have to use K&R cpp (/lib/cpp, which does do such substitutions) and hence my software is now built in "dangerous" K&R C rather than ANSI C. This caused problems because I discovered that HP's K&R C doesn't auto-cast ints to doubles when passing ints to a function expecting a double (ANSI C does !), so a load of casts had to be added. This new system means that a change to the macros will propagate to all manually generated .html files and also to the CGI software and the automatically generated pages. It means that if I want to change the colour of a certain set of pages (some manual, some automatic), a once-only edit to a macro and then a re-generation is all that is required. It also has the nice side-effect that the bottom of every single page has navigation and a copyright message, which has never been the case before.
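The single-source macro idea described here - define page fragments once, then expand them into both static pages and CGI code so a single edit propagates everywhere - can be sketched with a modern templating tool. A hedged Ruby/ERB analogue follows; only the `new_heading` and `go_new_home` names mirror the text above, and everything else (the template, the page content) is hypothetical:

```ruby
require "erb"

# Shared "macros" defined once, reused by every generated page, so a
# single edit here changes all pages on regeneration.
MACROS = {
  new_heading: ->(title) { "<h1>#{title}</h1>" },
  go_new_home: -> { %(<p><a href="/">Home</a> &copy; 1996</p>) },
}

TEMPLATE = ERB.new(<<~HTML)
  <%= MACROS[:new_heading].call(title) %>
  <p><%= body %></p>
  <%= MACROS[:go_new_home].call %>
HTML

def render_page(title:, body:)
  TEMPLATE.result_with_hash(title: title, body: body)
end

puts render_page(title: "What's New On These Pages: February 1996",
                 body: "Fixed the virtual lottery balls.")
```

The same trade-off the author hit with cpp applies here: the more logic lives in the macro layer, the more every page depends on its quirks (quoting rules, whitespace handling), so keeping the macros small pays off.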
It was a good job that I backed up my work machine's hard disk to DAT yesterday, because the HP engineer arrived today and replaced the internal 1GB drive with a brand new one. The old one wasn't exactly faulty, but was making a racket at the best of times and became a jumbo jet-cum-fridge occasionally. It was only a matter of time before the drive expired and since the whole of Connect is moving to new accommodation shortly [yes, we'll no longer be based in the Computer Science Department, but we'll still be on the same LAN], I figured I'd better get this annoyance (and potential failure) fixed before the move.

Scrunched up the index page to this What's New section and also pointed the home page What's New link to the latest What's New page (i.e. the one you're reading now !) instead of to the index page. This unfortunately meant that the future plans page was much harder to locate, so I've put a mention of it at the top of this page.

Last month, these pages registered a 77% increase in the number of unique external sites (yes, proxy servers only count as one site !) accessing them. I'm sure two consecutive double rollovers during January had a lot to do with this, but it's nice to see that more than 1,000 external sites are visiting a day...

Oops, I'd moved a window in front of the latest InterLotto balls when it was dumping them, leaving the corner of an xterm as part of the balls. It would have been quite amusing if I'd caught it early and fixed it, but the fact that it spent most of Thursday like that was a bit embarrassing :-( It's a good job I browse around a local copy of the pages at home most evenings, otherwise this could have been bad news if it hadn't happened on an InterLotto draw day. It's the risk you take when screen-dumping GIFs on a display that's actually being used at the time.

Previous month: January 1996
http://richardlloyd.org.uk/WhatsNew/9602.html
You can leave it as just DOMAIN; you don't necessarily need the .com. If necessary, you can change it later, but not without jumping through some hoops - or should I say, windows. If you need to rename it later, see "How to Rename the DNS Name of a Windows 2000 Domain" (Microsoft KB article 292541).

To expand on maxwell's fine answer, I would say leave the name alone, but with a few notes: if your domain has multiple controllers, upgrade the PDC first. Also, if you do add an extension, do not use ".com" unless it is a registered public domain; if the namespace is for internal use only, use something like ".local". Good Luck, razz

Nt 4.0 to 2000 upgrade

Thank you for the help.
https://www.techrepublic.com/forums/discussions/nt-40-to-2000-upgrade/
anatoly techtonik 2012-11-16T13:47:19+00:00
Why did you insert import sys all over the place? It doesn't seem to be used.

Good catch Anatoly, thanks a lot for reviewing this. The additional "import sys" statements were leftovers from a refactoring; I corrected this now. All remaining imports should really be needed.

I see a lot of these: Why is it necessary to do that? Could you extract it out to a function in QMTest/TestCommon.py or TestSCons.py?

It's necessary because executables like "latex" or "dvipdf" wouldn't be found under Windows without it. A problem with putting these two lines into their own function is that sometimes a ", " has to get appended to the string, depending on whether more arguments follow or not.

I see how these changes make the tests pass on Windows, but I'd like Rob to review before we pull, since he's the LaTeX guy. Specifically the envprop stuff; if Windows can't find LaTeX without it, shouldn't the tool itself be fixed? Also the dvipdfm change, and the latex flags removal, probably other things. I'll email him specifically about this. Dirk, if you were to break this up into two sets of changes, one for LaTeX and one for all the other stuff, we might be able to pull the other stuff more quickly. I do like the test enhancements though; all the new flexibility there is much appreciated.

I understand everybody's concerns about those envprop changes, but for now it's the best I could come up with for the underlying problem: the LaTeX Tool tries to find an executable like "latex" on the system. This could be installed in "C:\emtex" or "C:\miktex" or "D:\texlive", whatever. In order to find it during the initialization of the Tool, its path must be present in the $PATH variable.
Note how this scheme comes up for every live test; it usually isn't such an issue under Linux because most applications are linked into /usr/bin or /usr/local/bin, which are in the internal default paths for the where_is/detect functions.

About splitting things up into two (or even more) separate pull requests: that would be fine with me. But I'd also like to hear first what Rob says about this. Maybe he really wants to do some changes himself; just give me a list of files that should be delayed for now, and I'll update the request accordingly.

I think it's reasonable to search the default install locations of the paths in the tools. But we should also provide a way for tests to pick up an alternative search PATH. (This would allow installing multiple tools in non-standard locations, and then adding them to the test PATH in different combinations to verify the preferred tool(s) are picked up by tool initialization.) It seems like the best place to do this would be in the test framework, having it generate the env=Environment(...) statements for the test SConstructs?

(who made the above comment?)

Oops. Sorry, posted via the wrong browser. That was me.

A few comments on these patches. I sympathize with your issues with SCons not finding the TeX applications. I had the same problem on windows and had to add a patch to Platform/darwin.py once I found where the paths for them were stored. I would love to think there was a way to get that information out of the Windows registry but I have no idea.

In dryrun.py I am wondering if you have considered whether "expect" would ever need a different drive letter than D. I remember adding that to the command string to allow building on a different drive...

Hmm, looks like the extra os.environ['PATH'] include in the Env of glossary.py could be removed now, like you did in multirun.py.

Rob, thanks a lot for your input on this. I corrected glossary.py and removed the additional ENV PATH argument.
About dryrun.py: there is no drive letter involved, actually. Do you mean the "(/D )*" for catching a "cd . & ..." under Linux, as well as a "cd /D . & ..." under Windows (note the forward slash in front of the D)?

Looking at the comments above, I get the impression that we are all not very happy with the hacks that have to be made (ENV PATH) in order to get the tests working cross-platform. My idea, and this is what I suggest now, would be to go with this patch as it is now, and then add a new SEP (or a simple bug) to the Wiki for general improvements of the Tool subsystem. There we could continue the discussion about new features...

Dirk, I was just wanting to make sure that on Windows we catch situations where the output would be "cd /E . & ..." or some other drive letter than D due to an odd configuration of disks. I agree that in the long term we need to work on what path to automatically use in SCons. I know the default is to not include everything in the environment.

I also want to mention in this forum that there are two methods available to check for the presence of a tool. The tests seem to always use test.where_is, which searches the whole path defined on the system. test.detect (there may be a capital in there) only looks in the path that SCons is using. The way they set my Mac up at work, where_is finds many tools that don't actually run on my hardware, so I get a lot of test failures. I hope I have accurately described the two methods. So I was wondering if we should consider switching over to detect from where_is. I admit it may be a case-by-case decision for a later time.

There will never appear a "/E"; the "/D" is used to allow changing the drive letter under Windows. Please check SCons/Tool/tex.py at l. 927.

Thanks for reminding me. I thought it had been left as an entry in the environment that the user could change, but I guess I never got that done. I could have sworn I did that because some user had his files on a different disk than SCons...
or something like that.

I can't review this patch - it's too large for my experience with SCons core. It will be easier to review the MinGW patch separately.

I split the changes up into three parts now. This is the first one, and it mainly contains the extensions to the test framework. The MinGW and LaTeX patches each follow in a separate pull request.

Hi Dirk, this is fine now. I'd like to pull this, but as one single change if you don't mind, without all the history from your side, which I think just clutters it up. I think it stands alone very well. In order for you to get credit for it, could you make it a single change from the current tip? (I can do it if you prefer.) One minor thing to consider: should your search_re (used in a couple of tests) be pulled into the TestCommon system?
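For reference, the difference discussed above comes down to which path list gets searched. A minimal where_is-style lookup (a sketch for illustration, not the actual TestCommon implementation) looks like this:

```python
import os

def where_is(prog, path=None):
    """Sketch of a where_is-style lookup: scan each directory in `path`
    (defaulting to the process's own PATH, as test.where_is does) for an
    executable file named `prog`.  A detect-style variant would instead
    search only the PATH that the SCons construction environment uses."""
    search = path if path is not None else os.environ.get('PATH', '')
    for d in search.split(os.pathsep):
        candidate = os.path.join(d, prog)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None
```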
https://bitbucket.org/scons/scons/pull-requests/49/fixes-for-tests-under-windows-bug2872-fix/activity
CC-MAIN-2016-30
refinedweb
1,236
74.49
Table of Contents (extract; leader dots and page numbers from the original layout omitted)

2 Configuring Snort: Includes; Variables; Config; Preprocessors (Frag3, Stream5, sfPortscan, RPC Decode, Performance Monitor, HTTP Inspect, SMTP Preprocessor, FTP/Telnet Preprocessor, SSH, DCE/RPC, DNS, SSL/TLS, ARP Spoof Preprocessor, DCE/RPC 2 Preprocessor); Decoder and Preprocessor Rules (Configuring; Reverting to original behavior); Event Processing (Rate Filtering, Event Filtering, Event Suppression, Event Logging); Performance Profiling (Rule Profiling, Preprocessor Profiling, Packet Performance Monitoring (PPM)); Output Modules (alert syslog, alert fast, alert full, alert unixsock, log tcpdump, database, csv, unified, unified 2, alert prelude, log null, alert aruba action); Host Attribute Table (Attribute Table File Format, Configuration Specific Elements); Dynamic Modules (Format, Directives); Reloading a Snort Configuration (Enabling support, Reloading a configuration, Non-reloadable configuration options); Multiple Configurations (Creating Multiple Configurations, How Configuration is applied)

3 Writing Snort Rules: The Basics; Rules Headers (Rule Actions, Protocols, IP Addresses, Port Numbers, The Direction Operator, Activate/Dynamic Rules); General Rule Options (msg, reference, gid, sid, rev, classtype, priority, metadata, General Rule Quick Reference); Payload Detection Rule Options (content, nocase, rawbytes, depth, offset, distance, within, http client body, http cookie, http header, http method, http uri, fast pattern, uricontent, urilen, isdataat, pcre, byte test, byte jump, ftpbounce, asn1, cvs, dce iface, dce opnum, dce stub data, Payload Detection Quick Reference); Non-Payload Detection Rule Options (fragoffset, ttl, tos, id, ipopts, fragbits, dsize, flags, flow, ...)
. . . . . . . . . . . . . . . . . . . . . .5. . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . .21 asn1 .5. . . . . . . . .18 byte test . . . .4 3. . . . . . . . . . . . . . .3 3. . . . . .5 metadata . . . . . . . . . . . . . . . . . . . .6. . . . . .12 http uri . . . . . . . . . . . . . . . . . . . . . . . .14 uricontent . . . . . . . . . . . . . . . . .1 3. . . . . . . . . . . . . . . . . . . . . . . .4. . 137 fragbits . . . . . . . . . . . . . . . . . . . . . . . .8 3. . . . . . . . . . . . . . . . . . . . . 123 depth . . . . . . . . . . . . . . . . . . . . . . . . . . . If you’re on a high speed network or you want to log the packets into a more compact form for later analysis.1.) 1./log -h 192./log Of course. in the case of a tie./snort -dev -l ./snort -d -v -e and it would do the same thing./snort -vde (As an aside./snort -l . If you just specify a plain -l switch. you need to specify a logging directory and Snort will automatically know to go into packet logger mode: . In order to log relative to the home network.168. ! △NOTE both the source and destination hosts are on the home network. they are logged to a directory Note that if with a name based on the higher of the two port numbers or. and you want to log the packets relative to the 192. Once the packets have been logged to the binary file. We don’t need to specify a home network any longer because binary mode logs everything into a single file.168.1. you should consider logging in binary mode. you may notice that Snort sometimes uses the address of the remote computer as the directory in which it places packets and sometimes it uses the local host address. you don’t need to run in verbose mode or specify the -d or -e switches because in binary mode the entire packet is logged. If you want an even more descriptive display. Snort can also read the packets back by using the 9 . not just sections of it. 
these switches may be divided up or smashed together in any combination.168.3 Packet Logger Mode OK.0/24 This rule tells Snort that you want to print out the data link and TCP/IP headers as well as application data into the directory .0 class C network. do this: . All incoming packets will be recorded into subdirectories of the log directory. The last command could also be typed out as: . the source address./snort -dev -l . Snort will exit with an error message. with the directory names being based on the address of the remote (non-192. showing the data link layer headers. Additionally./log.1) host.This instructs Snort to display the packet data as well as the headers. but if you want to record the packets to the disk. this assumes you have a directory named log in the current directory. all of these commands are pretty cool./log -b Note the command line changes here. you need to tell Snort which network is the home network: . you can read the packets back out of the file with any sniffer that supports the tcpdump binary format (such as tcpdump or Ethereal). If you don. it collects every packet it sees and places it in a directory hierarchy based upon the IP address of one of the hosts in the datagram. Binary mode logs the packets in tcpdump format to a single binary file in the logging directory: . which eliminates the need to tell it how to format the output directory structure. When Snort runs in this mode. log You can manipulate the data in the file in a number of ways through Snort’s packet logging and intrusion detection modes. If you don’t specify an output directory for the program./snort -dv -r packet. so you can usually omit the -e switch.1 NIDS Mode Output Options There are a number of ways to configure the output of Snort in NIDS mode. Writes the alert in a simple format with a timestamp. One thing to note about the last command line is that if Snort is going to be used in a long term way as an IDS. Alert modes are somewhat more complex. 
Generates “cmg style” alerts.4 Network Intrusion Detection System Mode To enable Network Intrusion Detection System (NIDS) mode so that you don’t record every single packet sent down the wire. These options are: Option -A fast -A full -A -A -A -A unsock none console cmg Description Fast alert mode.conf where snort. as well as two logging facilities./snort -d -h 192.1. socket. source and destination IPs/ports. logging packets that trigger rules specified in the snort. Six of these modes are accessed with the -A command line switch. Full alert mode. This is the default alert mode and will be used automatically if you do not specify a mode. try this: . and none.conf in plain ASCII to disk using a hierarchical directory structure (just like packet logger mode). the -v switch should be left off the command line for the sake of speed. The default logging and alerting mechanisms are to log in decoded ASCII format and use full alerts. if you only wanted to see the ICMP packets from the log file. cmg. This will apply the rules configured in the snort. syslog. too. if you wanted to run a binary log file through Snort in sniffer mode to dump the packets to the screen.1. For example. which puts it into playback mode.log icmp For more info on how to use the BPF interface./snort -dvr packet.4./log -h 192. as well as with the BPF interface that’s available from the command line. For example.conf This will configure Snort to run in its most basic NIDS form. console. 1.conf file to each packet to decide if an action based upon the rule type in the file should be taken. 1.-r switch. The full alert mechanism prints out the alert message in addition to the full packet headers. you can try something like this: . read the Snort and tcpdump man pages. and packets can be dropped while writing to the display./snort -dev -l . There are seven alert modes available at the command line: full. Sends “fast-style” alerts to the console (screen)./log -c snort.168.0/24 -c snort. Turns off alerting. 
Sends alerts to a UNIX socket that another program can listen on. simply specify a BPF filter at the command line and Snort will only see the ICMP packets in the file: . Packets from any tcpdump formatted file can be processed through Snort in any of its run modes.0/24 -l . it will default to /var/log/snort. 10 .168. There are several other alert output modes available at the command line. . fast. It’s also not necessary to record the data link headers for most applications.conf is the name of your rules file. The screen is a slow place to write data to. alert message. 1.conf -A fast -h 192.168. use the -s switch. For a list of GIDs. try using binary logging with the “fast” output mechanism. If you want to configure other facilities for syslog output.4. The default facilities for the syslog alerting mechanism are LOG AUTHPRIV and LOG ALERT. Rule-based SIDs are written directly into the rules with the sid option. please see etc/gen-msg. The third number is the revision ID.4. use the -N command line switch.0/24 1.Packets can be logged to their default decoded ASCII format or to a binary log file via the -b command line switch.168.2 Understanding Standard Alert Output When Snort generates an alert message.map.1. but still somewhat fast.1 for more details on configuring syslog output. If you want a text file that’s easily parsable. To disable packet logging altogether. This allows Snort to log alerts in a binary form as fast as possible while another program performs the slow actions./log -h 192. use the following command line to log to default (decoded ASCII) facility and send alerts to syslog: . For output modes available through the configuration file. please read etc/generators in the Snort source. For example: . such as writing to a database. This will log packets in tcpdump format and produce minimal alerts. 56 represents a T/TCP event. In this case.1. this tells the user what component of Snort generated this alert. To send alerts to syslog. 
For a list of preprocessor SIDs. For example. it will usually look like the following: [**] [116:56:1] (snort_decoder): T/TCP Detected [**] The first number is the Generator ID.6.6.conf 11 . see Section 2. use the following command line to log to the default facility in /var/log/snort and send alerts to a fast alert file: .conf -l .0/24 -s As another example. ! △NOTE Command line logging options override any output options specified in the configuration file./snort -c snort. you need to use unified logging and a unified log reader such as barnyard./snort -c snort./snort -b -A fast -c snort.3 High Performance Configuration If you want Snort to go fast (like keep up with a 1000 Mbps connection). use the output plugin directives in the rules files. In this case. See Section 2. This allows debugging of configuration issues quickly via the command line. This number is primarily used when writing signatures. we know that this event came from the “decode” (116) component of Snort. as each rendition of the rule should increment this number with the rev option. The second number is the Snort ID (sometimes referred to as Signature ID). 6 Miscellaneous 1. the --pid-path command line switch causes Snort to write the PID file in the directory specified. This is useful if you don’t care who sees the address of the attacking host.0/24 class C network: . 1.2 Running in Rule Stub Creation Mode If you need to dump the shared object rules stub to a directory. for example: /usr/local/bin/snort -d -h 192. The path can be relative or absolute.0/24 15 . Please notice that if you want to be able to restart Snort by sending a SIGHUP signal to the daemon. Use the --nolock-pidfile switch to not lock the PID fi.6. You can also combine the -O switch with the -h switch to only obfuscate the IP addresses of hosts on the home network.0/24 \ -l /var/log/snortlogs -c /usr/local/etc/snort.168.3 Obfuscating IP Address Printouts If you need to post packet logs to public mailing lists. 
This switch obfuscates your IP addresses in packet printouts.168. the --create-pidfile switch can be used to force creation of a PID file even when not running in daemon mode.168.1. obfuscating only the addresses from the 192.6. For example. /usr/local/bin/snort -c /usr/local/etc/snort. This is handy if you don’t want people on the mailing list to know the IP addresses involved. In Snort 2.1. the daemon creates a PID file in the log directory. Additionally./snort -d -v -r snort.log -O -h 192. These rule stub files are used in conjunction with the shared object rules.1.conf -s -D Relative paths are not supported due to security concerns.1.conf \ --dump-dynamic-rules=/tmp This path can also be configured in the snort. /usr/local/bin/snort -c /usr/local/etc/snort. Snort PID File When Snort is run as a daemon . you might need to use the –dump-dynamic-rules option. you might want to use the -O switch. you could use the following command to read the packets from a log file and dump them to the screen. The PID file will be locked so that other snort processes cannot start.conf \ --dump-dynamic-rules snort.1 Running Snort as a Daemon If you want to run Snort as a daemon. you must specify the full path to the Snort binary when you start it.6. 1. you can the add -D switch to any combination described in the previous sections.6.conf: config dump-dynamic-rules-path: /tmp/sorules In the above mentioned scenario the dump path is set to /tmp/sorules. Added for completeness.1. reset snort to post-configuration state before reading next pcap.pcap.2 Examples Read a single pcap $ snort -r foo. Users can specify either a decimal value (-G 1) or hex value preceded by 0x (-G 0x11).e.6. --pcap-no-filter --pcap-reset --pcap-show 1. Shell style filter to apply when getting pcaps from file or directory.7. A space separated list of pcaps to read. Can specifiy path to pcap or directory to recurse to get pcaps. 
1.6.4 Specifying Multiple-Instance Identifiers

In Snort v2.4, the -G command line option was added that specifies an instance identifier for the event logs. This option can be used when running multiple instances of snort, either on different CPUs, or on the same CPU but a different interface. Each Snort instance will use the value specified to generate unique event IDs. Users can specify either a decimal value (-G 1) or hex value preceded by 0x (-G 0x11). This is also supported via a long option --logid.

1.7 Reading Pcaps

Instead of having Snort listen on an interface, you can give it a packet capture to read. Snort will read and analyze the packets as if they came off the wire. This can be useful for testing and debugging Snort.

1.7.1 Command line arguments

Any of the below can be specified multiple times on the command line (-r included) and in addition to other Snort command line options. Note, however, that specifying --pcap-reset and --pcap-show multiple times has the same effect as specifying them once.

-r <file> - Read a single pcap.
--pcap-single=<file> - Same as -r. Added for completeness.
--pcap-file=<file> - File that contains a list of pcaps to read. Can specify path to pcap or directory to recurse to get pcaps.
--pcap-list="<list>" - A space separated list of pcaps to read.
--pcap-dir=<dir> - A directory to recurse to look for pcaps. Sorted in ASCII order.
--pcap-filter=<filter> - Shell style filter to apply when getting pcaps from file or directory. This filter will apply to any --pcap-file or --pcap-dir arguments following.
--pcap-no-filter - Reset to use no filter when getting pcaps from file or directory.
--pcap-reset - If reading multiple pcaps, reset snort to post-configuration state before reading the next pcap. The default, i.e. without this option, is not to reset state.
--pcap-show - Print a line saying what pcap is currently being read.

1.7.2 Examples

Read a single pcap

    $ snort -r foo.pcap
    $ snort --pcap-single=foo.pcap

Read pcaps from a file

    $ cat foo.txt
    foo1.pcap
    foo2.pcap
    /home/foo/pcaps

    $ snort --pcap-file=foo.txt

This will read foo1.pcap, foo2.pcap and all files under /home/foo/pcaps. Note that Snort will not try to determine whether the files under that directory are really pcap files or not.
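The instance identifier from Section 1.6.4 combines with pcap reading; for example, a sketch of replaying the same capture as two logical sensors (the file name and values are illustrative only):

```
# replay as instance 1 (decimal identifier)
$ snort -c snort.conf -r foo.pcap -G 1

# replay as instance 17 (hex identifier, 0x11)
$ snort -c snort.conf -r foo.pcap -G 0x11
```

Each run stamps its events with the given identifier, so the two result sets can be told apart downstream.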
Read pcaps from a command line list

    $ snort --pcap-list="foo1.pcap foo2.pcap foo3.pcap"

This will read foo1.pcap, foo2.pcap and foo3.pcap.

Read pcaps under a directory

    $ snort --pcap-dir="/home/foo/pcaps"

This will include all of the files under /home/foo/pcaps.

Using filters

    $ cat foo.txt
    foo1.pcap
    foo2.pcap
    /home/foo/pcaps

    $ snort --pcap-filter="*.pcap" --pcap-file=foo.txt
    $ snort --pcap-filter="*.pcap" --pcap-dir=/home/foo/pcaps

The above will only include files that match the shell pattern "*.pcap".

    $ snort --pcap-filter="*.pcap" --pcap-file=foo.txt \
    > --pcap-no-filter --pcap-dir=/home/foo/pcaps

In this example, the first filter will be applied to foo.txt, then no filter will be applied to the files found under /home/foo/pcaps, so all files found under /home/foo/pcaps will be included.

    $ snort --pcap-filter="*.pcap" --pcap-file=foo.txt \
    > --pcap-no-filter --pcap-dir=/home/foo/pcaps \
    > --pcap-filter="*.cap" --pcap-dir=/home/foo/pcaps2

In this example, the first filter will be applied to foo.txt, then no filter will be applied to the files found under /home/foo/pcaps, so all files found under /home/foo/pcaps will be included; then the filter "*.cap" will be applied to files found under /home/foo/pcaps2.

    $ snort --pcap-filter="*.pcap" --pcap-file=foo.txt \
    > --pcap-filter="*.cap" --pcap-dir=/home/foo/pcaps

In this example, the first filter "*.pcap" will only be applied to the pcaps in the file "foo.txt" (and any directories that are recursed in that file). The addition of the second filter "*.cap" will cause the first filter to be forgotten and then applied to the directory /home/foo/pcaps, so only files ending in ".cap" will be included from that directory.

Resetting state

    $ snort --pcap-dir=/home/foo/pcaps --pcap-reset

The above example will read all of the files under /home/foo/pcaps, but after each pcap is read, Snort will be reset to a post-configuration state, meaning all buffers will be flushed, statistics reset, etc. In other words, for each pcap, it will be like Snort is seeing traffic for the first time.

IP Variables and IP Lists

IPs may be specified individually, in a list, as a CIDR block, or any combination of the three. If IPv6 support is enabled, IP variables should be specified using 'ipvar' instead of 'var'. Using 'var' for an IP variable is still allowed for backward compatibility, but it will be deprecated in a future release.

IPs, IP lists, and CIDR blocks may be negated with '!'. Negation is handled differently compared with Snort versions 2.7.x and earlier. Previously, each element in a list was logically OR'ed together. IP lists now OR non-negated elements and AND the result with the OR'ed negated elements.

The following example list will match the IP 1.1.1.1 and any IP from 2.2.2.0 to 2.2.2.255, with the exception of IPs 2.2.2.2 and 2.2.2.3:

    [1.1.1.1,2.2.2.0/24,![2.2.2.2,2.2.2.3]]

The order of the elements in the list does not matter. The element 'any' can be used to match all IPs, although '!any' is not allowed. Also, negated IP ranges that are more general than non-negated IP ranges are not allowed.

See below for some valid examples of IP variables and IP lists.

    ipvar EXAMPLE [1.1.1.1,2.2.2.0/24,![2.2.2.2,2.2.2.3]]

    alert tcp $EXAMPLE any -> any any (msg:"Example"; sid:1;)

    alert tcp [1.0.0.0/8,!1.1.1.0/24] any -> any any (msg:"Example"; sid:2;)

The following examples demonstrate some invalid uses of IP variables and IP lists.

Use of !any:

    ipvar EXAMPLE any
    alert tcp !$EXAMPLE any -> any any (msg:"Example"; sid:3;)

Different use of !any:

    ipvar EXAMPLE !any
    alert tcp $EXAMPLE any -> any any (msg:"Example"; sid:3;)

Logical contradictions:

    ipvar EXAMPLE [1.1.1.1,!1.1.1.1]

Nonsensical negations:

    ipvar EXAMPLE [1.1.1.0/24,!1.1.0.0/16]

Port Variables and Port Lists

Portlists supports the declaration and lookup of ports and the representation of lists and ranges of ports. Valid port ranges are from 0 to 65535. The element 'any' will specify any ports, but '!any' is not allowed. Lists of ports must be enclosed in brackets, and port ranges may be specified with a ':', such as in:

    [10:50,888:900]

Port variables should be specified using 'portvar'. The use of 'var' to declare a port variable will be deprecated in a future release. For backwards compatibility, a 'var' can still be used to declare a port variable, provided the variable name either ends with '_PORT' or begins with 'PORT_'.

The following examples demonstrate several valid usages of both port variables and port lists.

    portvar EXAMPLE1 80

    var EXAMPLE2_PORT [80:90]

    var PORT_EXAMPLE2 [1]

    portvar EXAMPLE3 any

    portvar EXAMPLE4 [!70:90]

    portvar EXAMPLE5 [80,91:95,100:200]

    alert tcp any $EXAMPLE1 -> any $EXAMPLE2_PORT (msg:"Example"; sid:1;)

    alert tcp any $PORT_EXAMPLE2 -> any any (msg:"Example"; sid:2;)

    alert tcp any 90 -> any [100:1000,9999:20000] (msg:"Example"; sid:3;)

Several invalid examples of port variables and port lists are demonstrated below.

Use of !any:

    portvar EXAMPLE5 !any
    var EXAMPLE5 !any

Logical contradictions:

    portvar EXAMPLE6 [80,!80]

Ports out of range:

    portvar EXAMPLE7 [65536]

Incorrect declaration and use of a port variable:

    var EXAMPLE8 80
    alert tcp any $EXAMPLE8 -> any any (msg:"Example"; sid:4;)

Port variable used as an IP:

    alert tcp $EXAMPLE1 any -> any any (msg:"Example"; sid:5;)

Variable Modifiers

Rule variable names can be modified in several ways. You can define meta-variables using the $ operator. These can be used with the variable modifier operators ? and -, as described in the following table:

Variable Syntax - Description
var - Defines a meta-variable.
$(var) or $var - Replaces with the contents of variable var.
$(var:-default) - Replaces the contents of the variable var with "default" if var is undefined.
$(var:?message) - Replaces with the contents of variable var, or prints out the error message and exits.

Here is an example of advanced variable usage in action:

    ipvar MY_NET 192.168.1.0/24
    log tcp any any -> $(MY_NET:?MY_NET is undefined!) 23

Limitations

When embedding variables, types can not be mixed. For instance, port variables can be defined in terms of other port variables, but old-style variables (with the 'var' keyword) can not be embedded inside a 'portvar'.

Valid embedded variable:

    portvar pvar1 80
    portvar pvar2 [$pvar1,90]

Invalid embedded variable:

    var pvar1 80
    portvar pvar2 [$pvar1,90]

Likewise, variables can not be redefined if they were previously defined as a different type. They should be renamed instead.

Invalid redefinition:

    var pvar 80
    portvar pvar 90

2.1.3 Config

Many configuration and command line options of Snort can be specified in the configuration file.

Format

    config <directive> [: <value>]
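Tying the format above to the variable types from the previous section, a small snort.conf fragment might look like the following (a sketch; the paths, ports, and SID are illustrative only):

```
# variable declarations ('ipvar' requires an IPv6-enabled build; 'var' otherwise)
ipvar HOME_NET 192.168.1.0/24
portvar HTTP_PORTS [80,8080]

# config directives follow the format: config <directive>[: <value>]
config logdir: /var/log/snort
config umask: 022

# variables are dereferenced with '$' in rules
alert tcp any any -> $HOME_NET $HTTP_PORTS (msg:"Example"; sid:1000001;)
```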
config alertfile: <filename> - Sets the alerts output file.
config autogenerate_preprocessor_decoder_rules - If Snort was configured to enable decoder and preprocessor rules, this option will cause Snort to revert back to its original behavior of alerting if the decoder or preprocessor generates an event.
config bpf_file: <filename> - Specifies BPF filters (snort -F).
config checksum_drop: <types> - Types of packets to drop if invalid checksums. Values: none, noip, notcp, noicmp, noudp, ip, tcp, udp, icmp or all (only applicable in inline mode and for packets checked per the checksum_mode config option).
config checksum_mode: <types> - Types of packets to calculate checksums. Values: none, noip, notcp, noicmp, noudp, ip, tcp, udp, icmp or all.
config chroot: <dir> - Chroots to specified dir (snort -t).
config classification: <classification> - See Table 3.2 for a list of classifications.
config daemon - Forks as a daemon (snort -D).
config decode_data_link - Decodes Layer2 headers (snort -e).
config default_rule_state: <state> - Global configuration directive to enable or disable the loading of rules into the detection engine. Default (with or without directive) is enabled. Specify disabled to disable loading rules.
config detection: <method> - Makes changes to the detection engine. The following options can be used:
  • search-method <ac | ac-std | ac-bnfa | acs | ac-banded | ac-sparsebands | lowmem>
    – ac Aho-Corasick Full (high memory, best performance)
    – ac-std Aho-Corasick Standard (moderate memory, high performance)
    – ac-bnfa Aho-Corasick NFA (low memory, high performance)
    – acs Aho-Corasick Sparse (small memory, moderate performance)
    – ac-banded Aho-Corasick Banded (small memory, moderate performance)
    – ac-sparsebands Aho-Corasick Sparse-Banded (small memory, high performance)
    – lowmem Low Memory Keyword Trie (small memory, low performance)
  • no_stream_inserts
  • max_queue_events <integer>
config disable_decode_alerts - Turns off the alerts generated by the decode phase of Snort.
config disable_inline_init_failopen - Disables the failopen thread that allows inline traffic to pass while Snort is starting up. Only useful if Snort was configured with --enable-inline-init-failopen. (snort --disable-inline-init-failopen)
config disable_ipopt_alerts - Disables IP option length validation alerts.
config disable_tcpopt_alerts - Disables option length validation alerts.
config disable_tcpopt_experimental_alerts - Turns off alerts generated by experimental TCP options.
config disable_tcpopt_obsolete_alerts - Turns off alerts generated by obsolete TCP options.
config disable_tcpopt_ttcp_alerts - Turns off alerts generated by T/TCP options.
config disable_ttcp_alerts - Turns off alerts generated by T/TCP options.
config dump_chars_only - Turns on character dumps (snort -C).
config dump_payload - Dumps application layer (snort -d).
config dump_payload_verbose - Dumps raw packet starting at link layer (snort -X).
config enable_decode_drops - Enables the dropping of bad packets identified by the decoder (only applicable in inline mode).
config enable_decode_oversized_alerts - Enable alerting on packets that have headers containing length fields for which the value is greater than the length of the packet.
config enable_decode_oversized_drops - Enable dropping packets that have headers containing length fields for which the value is greater than the length of the packet. enable_decode_oversized_alerts must also be enabled for this to be effective (only applicable in inline mode).
config enable_ipopt_drops - Enables the dropping of bad packets with bad/truncated IP options (only applicable in inline mode).
config enable_mpls_multicast - Enables support for MPLS multicast. This option is needed when the network allows MPLS multicast traffic. When this option is off and MPLS multicast traffic is detected, Snort will generate an alert. By default, it is off.
config enable_mpls_overlapping_ip - Enables support for overlapping IP addresses in an MPLS network. In a normal situation, where there are no overlapping IP addresses, this configuration option should not be turned on. However, there could be situations where two private networks share the same IP space and different MPLS labels are used to differentiate traffic from the two VPNs. In such a situation, this configuration option should be turned on. By default, it is off.
config enable_tcpopt_drops - Enables the dropping of bad packets with bad/truncated TCP options (only applicable in inline mode).
config enable_tcpopt_experimental_drops - Enables the dropping of bad packets with experimental TCP options (only applicable in inline mode).
config enable_tcpopt_obsolete_drops - Enables the dropping of bad packets with obsolete TCP options (only applicable in inline mode).
config enable_tcpopt_ttcp_drops - Enables the dropping of bad packets with T/TCP options (only applicable in inline mode).
config enable_ttcp_drops - Enables the dropping of bad packets with T/TCP options (only applicable in inline mode).
config event_filter: memcap <bytes> - Set global memcap in bytes for thresholding. Default is 1048576 bytes (1 megabyte).
config event_queue: [max_queue <num>] [log <num>] [order_events <order>] - Specifies conditions about Snort's event queue. The following options can be used:
  • max_queue <integer> (maximum events supported)
  • log <integer> (number of events to log)
  • order_events [priority|content_length] (how to order events within the queue)
See Section 2.4 for more information and examples.
config flexresp2_interface: <iface> - Specify the response interface to use.
In Windows this can also be the interface number. (Snort must be compiled with --enable-flexresp2)
config asn1: <max-nodes> - Specifies the maximum number of nodes to track when doing ASN1 decoding.
config flexresp2_attempts: <num-resets> - Specify the number of TCP reset packets to send to the source of the attack. Valid values are 0 to 20, however values less than 4 will default to 4. The default value without this option is 4. (Snort must be compiled with --enable-flexresp2)
config flexresp2_memcap: <bytes> - Specify the memcap for the hash table used to track the time of responses. The times (hashed on a socket pair plus protocol) are used to limit sending a response to the same half of a socket pair every couple of seconds. Default is 1048576 bytes. (Snort must be compiled with --enable-flexresp2)
config flexresp2_rows: <num-rows> - Specify the number of rows for the hash table used to track the time of responses. Default is 1024 rows. (Snort must be compiled with --enable-flexresp2)
config flowbits_size: <num-bits> - Specifies the maximum number of flowbit tags that can be used within a rule set.
config ignore_ports: <proto> <port-list> - Specifies ports to ignore (useful for ignoring noisy NFS traffic). Specify the protocol (TCP, UDP, IP, or ICMP), followed by a list of ports. Port ranges are supported.
config interface: <iface> - Sets the network interface (snort -i).
config ipv6_frag: [bsd_icmp_frag_alert on|off] [, bad_ipv6_frag_alert on|off] [, frag_timeout <secs>] [, max_frag_sessions <max-track>] - The following options can be used:
  • bsd_icmp_frag_alert on|off (Specify whether or not to alert. Default is on)
  • bad_ipv6_frag_alert on|off (Specify whether or not to alert. Default is on)
  • frag_timeout <secs>
  • max_frag_sessions <max-track>
config logdir: <dir> - Sets the logdir (snort -l).
config max_attribute_hosts: <hosts> - Sets a limit on the maximum number of hosts to read from the attribute table. Minimum value is 32 and the maximum is 524288 (512k). The default is 10000. If the number of hosts in the attribute table exceeds this value, an error is logged and the remainder of the hosts are ignored. This option is only supported with a Host Attribute Table (see Section 2.7).
config max_mpls_labelchain_len: <num-hdrs> - Sets a Snort-wide limit on the number of MPLS headers a packet can have. Its default value is -1, which means that there is no limit on label chain length.
config min_ttl: <ttl> - Sets a Snort-wide minimum ttl to ignore all traffic.
config mpls_payload_type: ipv4|ipv6|ethernet - Sets a Snort-wide MPLS payload type. In addition to ipv4, ipv6 and ethernet are also valid options. The default MPLS payload type is ipv4.
config no_promisc - Disables promiscuous mode (snort -p).
config nolog - Disables logging. Note: Alerts will still occur.
config obfuscate - Obfuscates IP Addresses (snort -O).
config order: <order> - Changes the order that rules are evaluated, eg: pass alert log activation.
config pcre_match_limit: <integer> - Restricts the amount of backtracking a given PCRE option performs; for example, it will limit the number of nested repeats within a pattern. A value of -1 allows for unlimited PCRE, up to the PCRE library compiled limit (around 10 million). A value of 0 results in no PCRE evaluation.
config pcre_match_limit_recursion: <integer> - Restricts the amount of stack used by a given PCRE option. A value of -1 allows for unlimited PCRE, up to the PCRE library compiled limit (around 10 million). A value of 0 results in no PCRE evaluation. This option is only useful if the value is less than the pcre_match_limit.
config pkt_count: <N> - Exits after N packets (snort -n).
config policy_version: <base-version> [<binding-version>] - Supply versioning information to configuration files. Base version should be a string in all configuration files including included ones. In addition, binding version must be in any file configured with config binding. This option is used to avoid race conditions when modifying and loading a configuration within a short time span, before Snort has had a chance to load a previous configuration.
config profile_preprocs - Print statistics on preprocessor performance.
config profile_rules - Print statistics on rule performance.
config quiet - Disables banner and status reports (snort -q).
config read_bin_file: <pcap> - Specifies a pcap file to use (instead of reading from the network), same effect as the -r <tf> option.
config reference: <ref> - Adds a new reference system to Snort, eg: myref http://myurl.com/?id=
config reference_net: <cidr> - For IP obfuscation, the obfuscated net will be used if the packet contains an IP address in the reference net. Also used to determine how to set up the logging directory structure for the session post detection rule option and the ascii output plugin; an attempt is made to name the log directories after the IP address that is not in the reference net.
config set_gid: <gid> - Changes GID to specified GID (snort -g).
config set_uid: <uid> - Sets UID to <id> (snort -u).
config show_year - Shows year in timestamps (snort -y).
config snaplen: <bytes> - Set the snaplength of packet, same effect as the -P <snaplen> or --snaplen <snaplen> options. The snort default value is 1500.
config stateful - Sets assurance mode for stream (stream is established).
config tagged_packet_limit: <max-tag> - When a metric other than packets is used in a tag option in a rule, this option sets the maximum number of packets to be tagged regardless of the amount defined by the other metric. See Section 3.7.5 on using the tag option when writing rules for more details. The default value when this option is not configured is 256 packets. Setting this option to a value of 0 will disable the packet limit.
config threshold: memcap <bytes> - Set global memcap in bytes for thresholding. Default is 1048576 bytes (1 megabyte). (This is deprecated. Use config event_filter instead.)
config timestats_interval: <secs> - Set the amount of time in seconds between logging time stats. Default is 3600 (1 hour). Note this option is only available if Snort was built to use time stats with --enable-timestats.
config umask: <umask> - Sets umask when running (snort -m).
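Several of the directives above are commonly tuned together; for instance, the PCRE and tagging limits could be set in snort.conf as follows (a sketch; the numeric values are illustrative, not recommendations):

```
# bound PCRE backtracking and stack recursion
# (-1 = unlimited up to the library maximum, 0 = disable PCRE evaluation)
config pcre_match_limit: 3500
config pcre_match_limit_recursion: 1500

# cap packets captured via time/byte-based 'tag' metrics; 0 disables the cap
config tagged_packet_limit: 512
```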
Preprocessors are loaded and configured using the preprocessor keyword. Target-based analysis is a relatively new concept in network-based intrusion detection. Splay trees are excellent data structures to use when you have some assurance of locality of reference for the data that you are handling but in high speed.5 of Snort.icir.config utc config verbose Uses UTC instead of local time for timestamps (snort -U). Frag3 uses the sfxhash data structure and linked lists for data handling internally which allows it to have much more predictable and deterministic performance in any environment which should aid us in managing heavily fragmented environments. it is possible to evade the IDS. Target-based host modeling anti-evasion techniques. We can also present the IDS with topology information to avoid TTL-based evasions and a variety of other issues.snort.2. there are ambiguities in the way that the RFCs define some of the edge conditions that may occurr and when this happens different people implement certain aspects of their IP stacks differently. Frag3 was implemented to showcase and prototype a target-based module within Snort to test this idea. The idea of a target-based system is to model the actual targets on the network instead of merely modeling the protocols and looking for attacks within them. Faster execution than frag2 with less complex data management. Uses verbose logging to STDOUT (snort -v). This is where the idea for “target-based IDS” came from.1 Frag3 The frag3 preprocessor is a target-based IP defragmentation module for Snort.pdf. if the attacker has more information about the targets on a network than the IDS does. Check it out at. In an environment where the attacker can determine what style of IP defragmentation is being used on a particular target. Frag3 is intended as a replacement for the frag2 defragmentation module and was designed with the following goals: 1. but that’s a topic for another day. 
When IP stacks are written for different operating systems. The frag2 preprocessor used splay trees extensively for managing the data structures associated with defragmenting packets. The basic idea behind target-based IDS is that we tell the IDS information about hosts on the network so that it can avoid Ptacek & Newsham style evasion attacks based on information about how an individual target IP stack operates.2 Preprocessors Preprocessors were introduced in version 1. 28 .. check out the famous Ptacek & Newsham paper at. 2. Preprocessor code is run before the detection engine is called. – max frags <number> . Default is 8192. bsd. Fragments smaller than or equal to this limit are considered malicious and an event is raised. Default type is bsd.Alternate memory management mode.Defines smallest fragment size (payload size) that should be considered valid.IP List to bind this engine to. linux. last.Maximum simultaneous fragments to track. – detect anomalies . The known mappings are as follows. Use preallocated fragment nodes (faster in some situations). if detect anomalies is also configured.Limits the number of overlapping fragments per packet.Frag 3 Configuration Frag3 configuration is somewhat more complex than frag2. – policy <type> . Engine Configuration • Preprocessor name: frag3 engine • Available options: NOTE: Engine configuration options are space separated. Default is 1. – memcap <bytes> . the minimum is ”0”. Anyone who develops more mappings and would like to add to this list please feel free to send us an email! 29 . The Paxson Active Mapping paper introduced the terminology frag3 is using to describe policy types. The default is ”0” (unlimited). Default is 4MB. Global Configuration • Preprocessor name: frag3 global • Available options: NOTE: Global configuration options are comma separated. – overlap limit <number> . detect anomalies option must be configured for this option to take effect. The default is ”0” (unlimited). and the maximum is ”255”. 
Frag3 Configuration

Frag3 configuration is somewhat more complex than frag2. There are at least two preprocessor directives required to activate frag3, a global configuration directive and an engine instantiation. There can be an arbitrary number of engines defined at startup with their own configuration, but only one global configuration.

Global Configuration

• Preprocessor name: frag3_global

• Available options: NOTE: Global configuration options are comma separated.

  – max_frags <number> - Maximum simultaneous fragments to track. Default is 8192.
  – memcap <bytes> - Memory cap for self preservation. Default is 4MB.
  – prealloc_frags <number> - Alternate memory management mode. Use preallocated fragment nodes (faster in some situations).

Engine Configuration

• Preprocessor name: frag3_engine

• Available options: NOTE: Engine configuration options are space separated.

  – timeout <seconds> - Timeout for fragments. Fragments in the engine for longer than this period will be automatically dropped. Default is 60 seconds.
  – min_ttl <value> - Minimum acceptable TTL value for a fragment packet. Default is 1.
  – detect_anomalies - Detect fragment anomalies.
  – bind_to <ip_list> - IP List to bind this engine to. This engine will only run for packets with destination addresses contained within the IP List. Default value is all. This is an optional parameter.
  – overlap_limit <number> - Limits the number of overlapping fragments per packet. The default is "0" (unlimited), the minimum is "0", and the maximum is "255". The detect_anomalies option must be configured for this option to take effect. This is an optional parameter.
  – min_fragment_length <number> - Defines the smallest fragment size (payload size) that should be considered valid. Fragments smaller than or equal to this limit are considered malicious and an event is raised, if detect_anomalies is also configured. The default is "0" (unlimited), the minimum is "0", and the maximum is "255". The detect_anomalies option must be configured for this option to take effect.
  – policy <type> - Select a target-based defragmentation mode. Available types are first, last, bsd, bsdright, linux. Default type is bsd.

The Paxson Active Mapping paper introduced the terminology frag3 is using to describe policy types. The known mappings are as follows. Anyone who develops more mappings and would like to add to this list please feel free to send us an email!

Platform - Policy
AIX 2 - BSD
AIX 4.3 8.9.3 - BSD
Cisco IOS - Last
FreeBSD - BSD
HP JetDirect (printer) - BSD-right
HP-UX B.10.20 - BSD
HP-UX 11.00 - First
IRIX 4.0.5F - BSD
IRIX 6.2 - BSD
IRIX 6.3 - BSD
IRIX64 6.4 - BSD
Linux 2.2.10 - linux
Linux 2.2.14-5.0 - linux
Linux 2.2.16-3 - linux
Linux 2.2.19-6.2.10smp - linux
Linux 2.4.7-10 - linux
Linux 2.4.9-31SGI 1.0.2smp - linux
Linux 2.4 (RedHat 7.1-7.3) - linux
MacOS (version unknown) - First
NCD Thin Clients - BSD
OpenBSD (version unknown) - linux
OpenBSD (version unknown) - linux
OpenVMS 7.1 - BSD
OS/2 (version unknown) - BSD
OSF1 V3.0 - BSD
OSF1 V3.2 - BSD
OSF1 V4.0, 5.0, 5.1 - BSD
SunOS 4.1.4 - BSD
SunOS 5.5.1, 5.6, 5.7, 5.8 - First
Tru64 Unix V5.0A, V5.1 - BSD
Windows (95/98/NT4/W2K/XP) - First

Example configuration:

    preprocessor frag3_global: max_frags 65536
    preprocessor frag3_engine: policy first bind_to [10.1.1.0/24,172.16.1.0/24] detect_anomalies
    preprocessor frag3_engine: policy last bind_to 192.168.1.0/24
    preprocessor frag3_engine: policy bsd

The first two engines are bound to specific IP address ranges, with the first and last policies assigned, and the last one applies to all other traffic. Packets that don't fall within the address requirements of the first two engines automatically fall through to the third one.
Some of these anomalies are detected on a per-target basis. the rule ’flow’ and ’flowbits’ keywords are usable with TCP as well as UDP traffic. and update the identifying information about the session (application protocol. minimum is ”1”. Track sessions for UDP. \ [max_queued_bytes <bytes>]. [timeout <number secs>]. minimum is ”1”. The default is set to any. Print/display packet after rebuilt (for debugging). The default is ”yes”. maximum is not bounded. The default is ”30”. and that policy is not bound to an IP address or network. The default is ”64000”. The default is ”yes”. Backwards compatibilty. maximum is ”1052672”. [min_ttl <number>]. [max_window <number>]. The default is set to off. The default is ”8388608” (8MB). The default is set to off. \ [overlap_limit <number>]. minimum is ”32768” (32KB). [detect_anomalies]. 32 . This can have multiple occurances. [max_queued_segs <number segs>]. The default is ”256000”. per policy that is bound to an IP address or network. \ [check_session_hijacking]. Session timeout. Track sessions for ICMP. The default is ”128000”. Stream5 TCP Configuration Provides a means on a per IP address target to configure TCP policy. \ [require_3whs [<number secs>]]. maximum is ”1052672”. \ [dont_store_large_packets]. maximum is ”1052672”. maximum is ”1073741824” (1GB). \ [policy <policy_id>]. [use_static_footprint_sizes]. other than by the memcap. The default is ”yes”. One default policy must be specified. Maximum simultaneous TCP sessions tracked. the minimum is ”1”. minimum is ”0” (unlimited). Maximum simultaneous UDP sessions tracked. [dont_reassemble_async]. preprocessor stream5_tcp: \ [bind_to <ip_addr>]. Print a message when a session terminates that was consuming more than the specified number of bytes. The default is ”1048576” (1MB). and the maximum is ”86400” (approximately 1 day). minimum is ”1”. \ [ignore_any_rules] Option bind to <ip addr> timeout <num seconds> Description IP address or network for this policy. 
Memcap for TCP packet storage. Flush a TCP stream when an alert is generated on that stream. Maximum simultaneous ICMP sessions tracked. \ [ports <client|server|both> <all|number [number]*>]. OpenBSD 3.2 and earlier windows Windows 2000. Limit the number of bytes queued for reassembly on a given TCP session to bytes. This check validates the hardware (MAC) address from both sides of the connect – as established on the 3-way handshake against subsequent packets received on the session. Detect and alert on TCP protocol anomalies. and the maximum is ”1073725440” (65535 left shift 14). Don’t queue packets for reassembly if traffic has not been seen in both directions. This option is intended to prevent a DoS against Stream5 by an attacker using an abnormally large window. Alerts are generated (per ’detect anomalies’ option) for either the client or server when the MAC address for one side or the other does not match. Default is ”1048576” (1MB). bsd FresBSD 4. The default is ”0” (don’t consider existing sessions established). This option should not be used production environments. Windows 95/98/ME win2003 Windows 2003 Server vista Windows Vista solaris Solaris 9. The default is set to queue packets. first Favor first overlapped segment. The default is set to off. That is the highest possible TCP window per RFCs.4 and newer old-linux Linux 2. Establish sessions only on completion of a SYN/SYN-ACK/ACK handshake. Check for TCP session hijacking. Performance improvement to not queue large packets in reassembly buffer. NetBSD 2. A value of ”0” means unlimited. Use static values for determining when to build a reassembled packet to allow for repeatable tests.3 and newer Minimum TTL. and the maximum is ”86400” (approximately 1 day). 33 . last Favor first overlapped segment. the minimum is ”1” and the maximum is ”255”. The default is ”0” (unlimited).x and newer hpux HPUX 11 and newer hpux10 HPUX 10 irix IRIX 6 and newer macos MacOS 10. The default is set to off. 
with a non-zero minimum of ”1024”. This allows a grace period for existing sessions to be considered established during that interval immediately after Snort is started. there are no checks performed.x and newer. The optional number of seconds specifies a startup timeout. The default is set to off. The policy id can be one of the following: Policy Name Operating Systems. the minimum is ”0”. the minimum is ”0”. If an ethernet layer is not part of the protocol stack received by Snort. The default is ”0” (unlimited). Maximum TCP window allowed. and the maximum is ”255”. and a maximum of ”1073741824” (1GB). Limits the number of overlapping packets per session.x and newer. The default is set to off. the minimum is ”0”.x and newer linux Linux 2. The default is set to off. Using this option may result in missed attacks. The default is ”1”. A message is written to console/syslog when this limit is enforced. Windows XP. so using a value near the maximum is discouraged. Because of the potential impact of disabling a flowbits rule. the minimum is ”1”. A value of ”0” means unlimited. with a non-zero minimum of ”2”. Don’t process any -> any (ports) rules for TCP that attempt to match payload if there are no port specific rules for the src or destination port. For example. server. Don’t process any -> any (ports) rules for UDP that attempt to match payload if there are no port specific rules for the src or destination port. or byte test options.max queued segs <num> ports <client|server|both> <all|number(s)> ignore any rules Limit the number of segments queued for reassembly on a given TCP session. The default is ”30”. Specify the client. the ignore any rules option is effectively pointless. This is a performance improvement and may result in missed attacks. only those with content. or byte test options. Rules that have flow or flowbits will never be ignored. [ignore_any_rules] Option timeout <num seconds> ignore any rules Description Session timeout. PCRE. 
For max_queued_segs, the default is "2621", derived based on an average segment size of 400 bytes; a message is written to console/syslog when this limit is enforced.

For the ports option, the default settings are: ports client 21 23 25 42 53 80 110 111 135 136 137 139 143 445 513 514 1433 1521 2401 3306. The minimum port allowed is "1" and the maximum allowed is "65535".

For ignore_any_rules: this is a performance improvement and may result in missed attacks. Using this does not affect rules that look at protocol headers, only those with content, PCRE, or byte_test options. A list of rule SIDs affected by this option are printed at Snort's startup. This option can be used only in the default policy. The default is "off".

! △NOTE
If no options are specified for a given TCP policy, that is the default TCP policy. If only a bind_to option is used with no other options, that TCP policy uses all of the default values. The TCP configuration can appear more than once in a given config.

Stream5 UDP Configuration

Configuration for UDP session tracking. Since there is no target based binding, there should be only one occurrence of the UDP configuration.

preprocessor stream5_udp: [timeout <number secs>], [ignore_any_rules]

timeout <num seconds> Session timeout. The default is "30", the minimum is "1", and the maximum is "86400" (approximately 1 day).

ignore_any_rules Don't process any -> any (ports) rules for UDP that attempt to match payload if there are no port specific rules for the src or destination port. Rules that have flow or flowbits will never be ignored. This is a performance improvement and may result in missed attacks. Using this does not affect rules that look at protocol headers, only those with content, PCRE, or byte_test options. The default is "off".

! △NOTE
With the ignore_any_rules option, a UDP rule will be ignored except when there is another port specific rule that may be applied to the traffic. For example, if a UDP rule specifies destination port 53, the 'ignored' any -> any rule will be applied to traffic to/from port 53, but NOT to any other source or destination port.

! △NOTE
With the ignore_any_rules option, if a UDP rule that uses any -> any ports includes either flow or flowbits, the ignore_any_rules option is effectively pointless and will be disabled in this case.
Stream5 ICMP Configuration

Configuration for ICMP session tracking. Since there is no target based binding, there should be only one occurrence of the ICMP configuration.

! △NOTE
ICMP is currently untested, in minimal code form, and is NOT ready for use in production networks. It is not turned on by default.

preprocessor stream5_icmp: [timeout <number secs>]

timeout <num seconds> Session timeout. The default is "30", the minimum is "1", and the maximum is "86400" (approximately 1 day).

Example Configurations

1. This example configuration is the default configuration in snort.conf and can be used for repeatable tests of stream reassembly in readback mode.

preprocessor stream5_global: \
    max_tcp 8192, track_tcp yes, track_udp yes, track_icmp no
preprocessor stream5_tcp: \
    policy first, use_static_footprint_sizes
preprocessor stream5_udp: \
    ignore_any_rules

2. This configuration maps two network segments to different OS policies, one for Windows and one for Linux, with all other traffic going to the default policy of Solaris.

preprocessor stream5_global: track_tcp yes
preprocessor stream5_tcp: bind_to 192.168.1.0/24, policy windows
preprocessor stream5_tcp: bind_to 10.1.1.0/24, policy linux
preprocessor stream5_tcp: policy solaris

Alerts

Stream5 uses generator ID 129. It is capable of alerting on 8 (eight) anomalies, all of which relate to TCP anomalies; there are no anomalies detected relating to UDP or ICMP. Alerts are generated per the 'detect anomalies' option. The list of SIDs is as follows:

1. SYN on established session
2. Data on SYN packet
3. Data sent on stream not accepting data
4. TCP Timestamp is outside of PAWS window
5. Bad segment, overlap adjusted size less than/equal 0
6. Window size (after scaling) larger than policy allows
7. Limit on number of overlapping TCP packets reached
8. Data after Reset packet
2.2.3 sfPortscan

The sfPortscan module, developed by Sourcefire, is designed to detect the first phase in a network attack: Reconnaissance. In the Reconnaissance phase, an attacker determines what types of network protocols or services a host supports. This is the traditional place where a portscan takes place. This phase assumes the attacking host has no prior knowledge of what protocols or services are supported by the target; otherwise, this phase would not be necessary.

As the attacker has no beforehand knowledge of its intended target, most queries sent by the attacker will be negative (meaning that the service ports are closed). In the nature of legitimate network communications, negative responses from hosts are rare, and rarer still are multiple negative responses within a given amount of time. Our primary objective in detecting portscans is to detect and track these negative responses.

One of the most common portscanning tools in use today is Nmap. Nmap encompasses many, if not most, of the current portscanning techniques. sfPortscan was designed to be able to detect the different types of scans Nmap can produce.

sfPortscan will currently alert for the following types of Nmap scans:

• TCP Portscan
• UDP Portscan
• IP Portscan

These alerts are for one→one portscans, which are the traditional types of scans; one host scans multiple ports on another host. Most of the port queries will be negative, since most hosts have relatively few services available.

sfPortscan also alerts on decoy portscans. These are much like the portscans described above, only the attacker has a spoofed source address inter-mixed with the real scanning address. This tactic helps hide the true identity of the attacker.

sfPortscan alerts for the following types of distributed portscans:

• TCP Distributed Portscan
• UDP Distributed Portscan
• IP Distributed Portscan

These are many→one portscans. Distributed portscans occur when multiple hosts query one host for open services. This is used to evade an IDS and obfuscate command and control hosts.

! △NOTE
Negative queries will be distributed among scanning hosts, so we track this type of scan through the scanned host.

sfPortscan alerts for the following types of portsweeps:

• TCP Portsweep
• UDP Portsweep
• IP Portsweep
• ICMP Portsweep

These alerts are for one→many portsweeps; one host scans a single port on multiple hosts. This usually occurs when a new exploit comes out and the attacker is looking for a specific service.

! △NOTE
The characteristics of a portsweep scan may not result in many negative responses. For example, if an attacker portsweeps a web farm for port 80, we will most likely not see many negative responses.

sfPortscan only generates one alert for each host pair in question during the time window (more on windows below). A filtered alert may go off before responses from the remote hosts are received. Active hosts, such as NATs, can trigger these alerts because they can send out many connection attempts within a very small amount of time. The alert detail is also a good indicator of whether the alert is just a very active legitimate host.

On TCP scan alerts, sfPortscan will also display any open ports that were scanned. On TCP sweep alerts however, sfPortscan will only track open ports after the alert has been triggered. Open port events are not individual alerts, but tags based on the original scan alert.

sfPortscan Configuration

Use of the Stream5 preprocessor is required for sfPortscan. Stream gives portscan direction in the case of connectionless protocols like ICMP and UDP. You should enable the Stream preprocessor in your snort.conf, as described in Section 2.2.2.

The parameters you can use to configure the portscan module are:

1. proto <protocol>
Available options:
• TCP
• UDP
• IGMP
• ip_proto
• all

2. scan_type <scan_type>
Available options:
• portscan
• portsweep
• decoy_portscan
• distributed_portscan
• all

3. sense_level <level>
Available options:
• low - "Low" alerts are only generated on error packets sent from the target host, and because of the nature of error responses, this setting should see very few false positives. However, this setting will never trigger a Filtered Scan alert because of a lack of error responses. This setting is based on a static time window of 60 seconds, after which this window is reset.
• medium - "Medium" alerts track connection counts, and so will generate filtered scan alerts. This setting may false positive on active hosts (NATs, proxies, DNS caches, etc), so the user may need to deploy the use of Ignore directives to properly tune this directive.
• high - "High" alerts continuously track hosts on a network using a time window to evaluate portscan statistics for that host. A "High" setting will catch some slow scans because of the continuous monitoring, but is very sensitive to active hosts. This most definitely will require the user to tune sfPortscan.

4. watch_ip <ip1|ip2/cidr[ [port|port2-port3]]>
Defines which IPs, networks, and specific ports on those hosts to watch. The list is a comma separated list of IP addresses, or IP addresses using CIDR notation. Optionally, ports are specified after the IP address/CIDR using a space and can be either a single port or a range denoted by a dash. IPs or networks not falling into this range are ignored if this option is used.

5. ignore_scanners <ip1|ip2/cidr[ [port|port2-port3]]>
Ignores the source of scan alerts. The parameter is the same format as that of watch_ip.

6. ignore_scanned <ip1|ip2/cidr[ [port|port2-port3]]>
Ignores the destination of scan alerts. The parameter is the same format as that of watch_ip.

7. logfile <file>
This option will output portscan events to the file specified. If file does not contain a leading slash, this file will be placed in the Snort config dir.

8. include_midstream
This option will include sessions picked up in midstream by Stream5. This can lead to false alerts, especially under heavy load with dropped packets, which is why the option is off by default.

9. detect_ack_scans
This option will include sessions picked up in midstream by the stream module, which is necessary to detect ACK scans. However, this can lead to false alerts, especially under heavy load with dropped packets, which is why the option is off by default.
sfPortscan Alert Output

Unified Output

In order to get all the portscan information logged with the alert, snort generates a pseudo-packet and uses the payload portion to store the additional portscan information of priority count, connection count, IP count, port count, IP range, and port range. The characteristics of the packet are:

Src/Dst MAC Addr == MACDAD
IP Protocol == 255
IP TTL == 0

Other than that, the packet looks like the IP portion of the packet that caused the portscan alert to be generated. This includes any IP options, etc. The payload and payload size of the packet are equal to the length of the additional portscan information that is logged. The size tends to be around 100 - 200 bytes.

Open port alerts differ from the other portscan alerts, because open port alerts utilize the tagged packet output system. This means that if an output system that doesn't print tagged packets is used, then the user won't see open port alerts. The open port information is stored in the IP payload and contains the port that is open.

The sfPortscan alert output was designed to work with unified packet logging, so it is possible to extend favorite Snort GUIs to display portscan alerts and the additional information in the IP payload using the above packet characteristics.

Log File Output

Log file output is displayed in the following format, and explained further below:

Time: 09/08-15:07:31.603880
event_id: 2
192.168.169.3 -> 192.168.169.5 (portscan) TCP Filtered Portscan
Priority Count: 0
Connection Count: 200
IP Count: 2
Scanner IP Range: 192.168.169.3:192.168.169.4
Port/Proto Count: 200
Port/Proto Range: 20:47557

If there are open ports on the target, one or more additional tagged packet(s) will be appended:

Time: 09/08-15:07:31.603881
event_ref: 2
192.168.169.3 -> 192.168.169.5 (portscan) Open Port
Open Port: 38458

1. Event_id/Event_ref
These fields are used to link an alert with the corresponding Open Port tagged packet.

2. Priority Count
Priority Count keeps track of bad responses (resets, unreachables). The higher the priority count, the more bad responses have been received.

3. Connection Count
Connection Count lists how many connections are active on the hosts (src or dst). This is accurate for connection-based protocols, and is more of an estimate for others. Whether or not a portscan was filtered is determined here. High connection count and low priority count would indicate filtered (no response received from target).

4. IP Count
IP Count keeps track of the last IP to contact a host, and increments the count if the next IP is different. For one-to-one scans, this is a low number. For active hosts this number will be high regardless, and one-to-one scans may appear as a distributed scan.

5. Scanned/Scanner IP Range
This field changes depending on the type of alert. Portsweep (one-to-many) scans display the scanned IP range; portscans (one-to-one) display the scanner IP.

6. Port Count
Port Count keeps track of the last port contacted and increments this number when that changes. We use this count (along with IP Count) to determine the difference between one-to-one portscans and one-to-one decoys.

Tuning sfPortscan

The most important aspect in detecting portscans is tuning the detection engine for your network(s). Here are some tuning tips:

1. Use the watch_ip, ignore_scanners, and ignore_scanned options. It's important to correctly set these options. The watch_ip option is easy to understand: the analyst should set this option to the list of CIDR blocks and IPs that they want to watch. If no watch_ip is defined, sfPortscan will watch all network traffic.

The ignore_scanners and ignore_scanned options come into play in weeding out legitimate hosts that are very active on your network. Some of the most common examples are NAT IPs, DNS cache servers, syslog servers, and nfs servers. sfPortscan may not generate false positives for these types of hosts, but be aware when first tuning sfPortscan for these IPs. Depending on the type of alert that the host generates, the analyst will know which to ignore it as. If the host is generating portsweep events, then add it to the ignore_scanners option. If the host is generating portscan alerts (and is the host that is being scanned), add it to the ignore_scanned option.

2. Filtered scan alerts are much more prone to false positives. When determining false positives, the alert type is very important. Most of the false positives that sfPortscan may generate are of the filtered scan alert type, so be much more suspicious of filtered portscans. Many times this just indicates that a host was very active during the time period in question. If the host continually generates these types of alerts, add it to the ignore_scanners list or use a lower sensitivity level.

3. Make use of the Priority Count, Connection Count, IP Count, Port Count, IP Range, and Port Range to determine false positives. The portscan alert details are vital in determining the scope of a portscan and also the confidence of the portscan. The easiest way to determine false positives is through simple ratio estimations. The following is a list of ratios to estimate and the associated values that indicate a legitimate scan and not a false positive:

Connection Count / IP Count: This ratio indicates an estimated average of connections per IP. For portscans, this ratio should be high; for portsweeps, this ratio should be low.

Port Count / IP Count: This ratio indicates an estimated average of ports connected to per IP. For portscans, this ratio should be high (the higher the better), and indicates that the scanned host's ports were connected to by fewer IPs. For portsweeps, this ratio should be low, indicating that the scanning host connected to few ports but on many hosts.

Connection Count / Port Count: This ratio indicates an estimated average of connections per port. For portscans, this ratio should be low; this indicates that each connection was to a different port. For portsweeps, this ratio should be high; this indicates that there were many connections to the same port.

The reason that Priority Count is not included is because the priority count is included in the connection count, and the above comparisons take that into consideration. The Priority Count plays an important role in tuning because the higher the priority count, the more likely it is a real portscan or portsweep (unless the host is firewalled).

4. If none of these other tuning techniques work or the analyst doesn't have the time for tuning, lower the sensitivity level. You get the best protection with the higher sensitivity levels, but it's also important that the portscan detection engine generate alerts that the analyst will find informative. The low sensitivity level only generates alerts based on error responses. These responses indicate a portscan, and the alerts generated by the low sensitivity level are highly accurate and require the least tuning. The low sensitivity level does not catch filtered scans, since these are more prone to false positives.

In the future, we hope to automate much of this analysis in assigning a scope level and confidence level, but for now the user must manually do this.
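The ratio heuristics above can be sketched in a few lines of Python. This is purely illustrative; the function name and the example numbers are hypothetical and not part of sfPortscan:

```python
def portscan_ratios(connection_count, port_count, ip_count):
    """Estimate the three ratios used to judge portscan vs. portsweep."""
    return {
        "conn_per_ip": connection_count / ip_count,     # high => portscan
        "ports_per_ip": port_count / ip_count,          # high => portscan
        "conn_per_port": connection_count / port_count, # high => portsweep
    }

# A one-to-one portscan: one scanner hits many ports, one connection each.
r = portscan_ratios(connection_count=200, port_count=200, ip_count=1)
print(r)  # conn_per_ip and ports_per_ip high, conn_per_port low (1.0)
```

High connections-per-IP with low connections-per-port suggests a portscan; the inverse pattern suggests a portsweep.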
2.2.4 RPC Decode

The rpc_decode preprocessor normalizes RPC multiple fragmented records into a single un-fragmented record. It does this by normalizing the packet into the packet buffer. If stream5 is enabled, it will only process client-side traffic. By default, it runs against traffic on ports 111 and 32771.

Format

preprocessor rpc_decode: \
    <ports> [ alert_fragments ] \
    [no_alert_multiple_requests] \
    [no_alert_large_fragments] \
    [no_alert_incomplete]

alert_fragments Alert on any fragmented RPC record.
no_alert_multiple_requests Don't alert when there are multiple records in one packet.
no_alert_large_fragments Don't alert when the sum of fragmented records exceeds one packet.
no_alert_incomplete Don't alert when a single fragment record exceeds the size of one packet.

2.2.5 Performance Monitor

This preprocessor measures Snort's real-time and theoretical maximum performance. Whenever this preprocessor is turned on, it should have an output mode enabled: either "console", which prints statistics to the console window, or "file" with a file name, where statistics get printed to the specified file name. By default, Snort's real-time statistics are processed. This includes:

• time - Represents the number of seconds between intervals.
• events - Turns on event reporting. This shows the user if there is a problem with the rule set that they are running.
• flow - Prints out statistics about the type of traffic and protocol distributions that Snort is seeing. This option can produce large amounts of output.
• max - Turns on the theoretical maximum performance that Snort calculates given the processor speed and current performance. This is only valid for uniprocessor machines, since many operating systems don't keep accurate kernel statistics for multiple CPUs.
• console - Prints statistics at the console.
• file - Prints statistics in a comma-delimited format to the file that is specified. Not all statistics are output to this file. You may also use snortfile, which will output into your defined Snort log directory. Both of these directives can be overridden on the command line with the -Z or --perfmon-file options.
• pktcnt - Adjusts the number of packets to process before checking for the time sample. This boosts performance, since checking the time sample reduces Snort's performance. By default, this is 10000.
• accumulate or reset - Defines which type of drop statistics are kept by the operating system. By default, reset is used.
• atexitonly - Dump stats for entire life of Snort.
• max file size - Defines the maximum size of the comma-delimited file. Before the file exceeds this size, it will be rolled into a new date stamped file of the format YYYY-MM-DD, followed by .x, where x will be incremented each time the comma-delimited file is rolled over. The minimum is 4096 bytes and the maximum is 2147483648 bytes (2GB). The default is the same as the maximum.

Examples

preprocessor perfmonitor: \
    time 30 events flow file stats.profile max console pktcnt 10000

preprocessor perfmonitor: \
    time 300 file /var/tmp/snortstat pktcnt 10000

2.2.6 HTTP Inspect

HTTP Inspect is a generic HTTP decoder for user applications. Given a data buffer, HTTP Inspect will decode the buffer, find HTTP fields, and normalize the fields. HTTP Inspect works on both client requests and server responses.

The current version of HTTP Inspect only handles stateless processing. This means that HTTP Inspect looks for HTTP fields on a packet-by-packet basis, and will be fooled if packets are not reassembled. This works fine when there is another module handling the reassembly, but there are limitations in analyzing the protocol.
The iis unicode map is a required configuration parameter. 2. HTTP Inspect has a very “rich” user configuration. A Microsoft US Unicode codepoint map is provided in the Snort source etc directory by default. By configuring HTTP Inspect servers and enabling allow proxy use. Within HTTP Inspect.. Please note that if users aren’t required to configure web proxy use. It is called unicode. please only use this feature with traditional proxy environments. Future versions will have a stateful processing mode which will hook into various reassembly modules.c. A tool is supplied with Snort to generate custom Unicode maps--ms unicode generator. 3. Configuration 1.snort. the codemap is usually 1252. which is available at. The map file can reside in the same directory as snort. Global Configuration The global configuration deals with configuration options that determine the global functioning of HTTP Inspect. you will only receive proxy use alerts for web users that aren’t using the configured proxies or are using a rogue proxy server. detect anomalous servers This global configuration option enables generic HTTP server traffic inspection on non-HTTP configured ports. and alerts if HTTP traffic is seen. So. For US servers. 2. but are not required for proper operation. 1-A. only the alerting functionality. There are five profiles available: all. iis5 0. Most of your web servers will most likely end up using the default configuration. profile <all|apache|iis|iis5 0|iis4 0> Users can configure HTTP Inspect by using pre-defined HTTP server profiles.3.1 profile all ports { 80 } Configuration by Multiple IP Addresses This format is very similar to “Configuration by IP Address”.1.1. apache. the only difference being that specific IPs can be configured. the only difference being that multiple IPs can be specified via a space separated list. all The all profile is meant to normalize the URI using most of the common tricks available. 
Example Global Configuration

preprocessor http_inspect: \
    global iis_unicode_map unicode.map 1252

Server Configuration

Default

This configuration supplies the default server configuration for any server that is not individually configured. Most of your web servers will most likely end up using the default configuration.

Example Default Configuration

preprocessor http_inspect_server: \
    server default profile all ports { 80 }

Configuration by IP Address

This format is very similar to "default", the only difference being that specific IPs can be configured.

Example IP Configuration

preprocessor http_inspect_server: \
    server 10.1.1.1 profile all ports { 80 }

Configuration by Multiple IP Addresses

This format is very similar to "Configuration by IP Address", the only difference being that multiple IPs can be specified via a space separated list. There is a limit of 40 IP addresses or CIDR notations per http_inspect_server line.

Example Multiple IP Configuration

preprocessor http_inspect_server: \
    server { 10.1.1.1 10.2.2.0/24 } profile all ports { 80 }

Server Configuration Options

1. profile <all|apache|iis|iis5_0|iis4_0>
Users can configure HTTP Inspect by using pre-defined HTTP server profiles. Profiles allow the user to easily configure the preprocessor for a certain type of server, but are not required for proper operation. There are five profiles available: all, apache, iis, iis5_0, and iis4_0. Whether an option is set to 'yes' or 'no' (inspect, alert or not), HTTP normalization will still occur, and rules based on HTTP traffic will still trigger; in other words, these settings control only the alerting functionality, regardless of the HTTP server.

1-A. all
The all profile is meant to normalize the URI using most of the common tricks available. We alert on the more serious forms of evasions. This is a great profile for detecting all types of attacks. profile all sets the configuration options described in Table 2.3.
Table 2.3: Options for the all Profile

server flow depth 300
client flow depth 300
post depth 0
chunk encoding: alert on chunks larger than 500000 bytes
ascii decoding: on, alert off
multiple slash: on, alert off
directory normalization: on, alert off
webroot: on, alert on
apache whitespace: on, alert on
double decoding: on, alert on
%u decoding: on, alert on
bare byte decoding: on, alert on
iis unicode codepoints: on, alert on
iis backslash: on, alert off
iis delimiter: on, alert off
utf 8 encoding: on, alert off
non strict URL parsing: on
tab uri delimiter: is set
max header length: 0, header length not checked
max headers: 0, number of headers not checked

1-B. apache
The apache profile is used for Apache web servers. This differs from the iis profile by only accepting UTF-8 standard Unicode encoding and not accepting backslashes as legitimate slashes, like IIS does. Apache also accepts tabs as whitespace. profile apache sets the configuration options described in Table 2.4.

Table 2.4: Options for the apache Profile

server flow depth 300
client flow depth 300
post depth 0
chunk encoding: alert on chunks larger than 500000 bytes
ascii decoding: on, alert off
multiple slash: on, alert off
directory normalization: on, alert off
webroot: on, alert on
apache whitespace: on, alert on
utf 8 encoding: on, alert off
non strict URL parsing: on
tab uri delimiter: is set
max header length: 0, header length not checked
max headers: 0, number of headers not checked

1-C. iis
The iis profile mimics IIS servers. So that means we use IIS Unicode codemaps for each server, %u encoding, bare-byte encoding, double decoding, backslashes, etc. profile iis sets the configuration options described in Table 2.5.

Table 2.5: Options for the iis Profile

server flow depth 300
client flow depth 300
post depth 0
chunk encoding: alert on chunks larger than 500000 bytes
ascii decoding: on, alert off
multiple slash: on, alert off
directory normalization: on, alert off
webroot: on, alert on
double decoding: on, alert on
%u decoding: on, alert on
bare byte decoding: on, alert on
iis unicode codepoints: on, alert on
iis backslash: on, alert off
iis delimiter: on, alert on
apache whitespace: on, alert on
non strict URL parsing: on
max header length: 0, header length not checked
max headers: 0, number of headers not checked

1-D. iis4_0, iis5_0
In IIS 4.0 and IIS 5.0, there was a double decoding vulnerability. These two profiles are identical to iis, except they will alert by default if a URL has a double encoding. Double decode is not supported in IIS 5.1 and beyond, so it's disabled by default.

1-E. default, no profile
The default options used by HTTP Inspect do not use a profile and are described in Table 2.6.

Table 2.6: Default HTTP Inspect Options

port 80
server flow depth 300
client flow depth 300
post depth 0
chunk encoding: alert on chunks larger than 500000 bytes
ascii decoding: on, alert off
utf 8 encoding: on, alert off
multiple slash: on, alert off
directory normalization: on, alert off
webroot: on, alert off
apache whitespace: on, alert off
iis delimiter: on, alert off
non strict URL parsing: on
max header length: 0, header length not checked
max headers: 0, number of headers not checked
client flow depth <integer> This specifies the amount of raw client request payload to inspect. use the SSL preprocessor. HTTPS traffic is encrypted and cannot be decoded with HTTP Inspect. For US servers. a value of 0 causes Snort to inspect all HTTP server payloads defined in ports (note that this will likely slow down IDS performance). you run this program on that server and use that Unicode map in this configuration. Example preprocessor http_inspect_server: \ server 1.1 profile all ports { 80 3128 } 2. Headers are usually under 300 bytes long..• • • • • • • • • client flow depth post depth no alerts inspect uri only oversize dir length normalize headers normalize cookies max header length max headers These options must be specified after the profile option. 4. or the content that is likely to be in the first hundred or so bytes of non-header data. The value can be set from 0 to 65495. 6. But the ms unicode generator program tells you which codemap to use for you server.. Most of these rules target either the HTTP header. or. because we are not aware of any legitimate clients that use this encoding. u encode <yes|no> This option emulates the IIS %u encoding scheme. so be careful. a. it seems that all types of iis encoding is done: utf-8 unicode. As for alerting. This abides by the Unicode standard and only uses % encoding. so it is recommended that you disable HTTP Inspect alerting for this option. Don’t use the %u option.patch. like %uxxxx. How the %u encoding scheme works is as follows: the encoding scheme is started by a %u followed by 4 characters. In the first pass.rim. You should alert on the iis unicode option. bare byte. How this works is that IIS does two passes through the request URI. Bare byte encoding allows the user to emulate an IIS server and interpret non-standard encodings correctly. You have to use the base36 option with the utf 8 option. Please use this option with care. alert on all ‘/’ or something like that. %u002e = . 9. 50 . 14. 
7. ascii <yes|no>
The ascii decode option tells us whether to decode encoded ASCII chars, e.g. %2f = /, %2e = ., etc. It is normal to see ASCII encoding usage in URLs, so it is recommended that you disable HTTP Inspect alerting for this option.

8. utf_8 <yes|no>
The utf-8 decode option tells HTTP Inspect to decode standard UTF-8 Unicode sequences that are in the URI. This abides by the Unicode standard and only uses % encoding. Apache uses this standard, so for any Apache servers, make sure you have this option turned on. As for alerting, you may be interested in knowing when you have a UTF-8 encoded URI, but this will be prone to false positives as legitimate web clients use this type of encoding. When utf_8 is enabled, ASCII decoding is also enabled to enforce correct decoding.

9. u_encode <yes|no>
This option emulates the IIS %u encoding scheme. How the %u encoding scheme works is as follows: the encoding scheme is started by a %u followed by 4 characters, like %uxxxx. The xxxx is a hex-encoded value that correlates to an IIS Unicode codepoint. This value can most definitely be ASCII; an ASCII character is encoded like %u002f = /, %u002e = ., etc. If no iis_unicode_map is specified before or after this option, the default codemap is used. You should alert on %u encodings, because we are not aware of any legitimate clients that use this encoding, so it is most likely someone trying to be covert. ASCII encoding is also enabled to enforce correct behavior.

10. bare_byte <yes|no>
Bare byte encoding is an IIS trick that uses non-ASCII characters as valid values when decoding UTF-8 values. This is not in the HTTP standard, as all non-ASCII values have to be encoded with a %. Bare byte encoding allows the user to emulate an IIS server and interpret non-standard encodings correctly. The alert on this decoding should be enabled, because there are no legitimate clients that encode UTF-8 this way since it is non-standard.

11. base36 <yes|no>
This is an option to decode base36 encoded chars. This option is based on info from: http://www.yk.rim.or.jp/~shikap/patch/spp_http_decode.patch. If %u encoding is enabled, this option will not work; don't use the %u option, because base36 won't work. You have to use the base36 option with the utf_8 option. When base36 is enabled, ASCII encoding is also enabled to enforce correct decoding.

12. iis_unicode <yes|no>
The iis_unicode option turns on the Unicode codepoint mapping. It handles the mapping of non-ASCII codepoints that the IIS server accepts and decodes normal UTF-8 requests. When iis_unicode is enabled, ASCII and UTF-8 decoding are also enabled to enforce correct decoding. To alert on UTF-8 decoding, you must also enable utf_8 yes. You should alert on the iis_unicode option, because it is seen mainly in attacks and evasion attempts.

13. double_decode <yes|no>
The double_decode option is once again IIS-specific and emulates IIS functionality. How this works is that IIS does two passes through the request URI, doing decodes in each one. In the first pass, the following encodings are done: ascii, bare byte, and %u. In the second pass, the following encodings are done: ascii, bare byte, %u, and iis_unicode. We leave out utf-8 because I think how this works is that the % encoded utf-8 is decoded to the Unicode byte in the first pass, and then UTF-8 is decoded in the second stage. Anyway, this is really complex and adds tons of different encodings for one character. When double_decode is enabled, ASCII decoding is also enabled to enforce correct decoding.
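The %u scheme and IIS double decoding can be illustrated in a few lines of Python. This is a simplified sketch of the idea, not HTTP Inspect's code; the helper handles only the plain ASCII range of %u values:

```python
import re
from urllib.parse import unquote

def decode_u(uri):
    # %uXXXX -> the character for hex codepoint XXXX (ASCII-range sketch,
    # ignoring IIS Unicode codepoint maps)
    return re.sub(r'%u([0-9a-fA-F]{4})',
                  lambda m: chr(int(m.group(1), 16)), uri)

print(decode_u("%u002fscripts%u002e"))  # -> "/scripts."

# Double decoding: a %25-encoded '%' survives the first decode pass and
# is decoded again in the second pass, yielding a traversal sequence.
once = unquote("%252e%252e%252f")   # first pass  -> "%2e%2e%2f"
twice = unquote(once)               # second pass -> "../"
print(twice)
```

This is why a URI that looks harmless after one decode pass can still smuggle "../" past a single-pass normalizer.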
If there is no iis unicode map option specified with the server config. but this will be prone to false positives as legitimate web clients use this type of encoding. ascii. bare byte <yes|no> Bare byte encoding is an IIS trick that uses non-ASCII characters as valid values when decoding UTF-8 values. If %u encoding is enabled. because base36 won’t work. The alert on this decoding should be enabled. This is not in the HTTP standard. An ASCII character is encoded like %u002f = /. 12. When double decode is enabled. Apache uses this standard. iis unicode uses the default codemap. This value can most definitely be ASCII. etc. The xxxx is a hex-encoded value that correlates to an IIS Unicode codepoint. It’s flexible.. For instance. a user may not want to see null bytes in the request URI and we can alert on that. because it is seen mainly in attacks and evasion attempts. non rfc char {<byte> [<byte . doing decodes in each one. and then UTF-8 is decoded in the second stage. ascii <yes|no> The ascii decode option tells us whether to decode encoded ASCII chars. otherwise. pipeline requests are inspected for attacks. and may also alert on HTTP tunneling that uses chunk encoding. since some web sites refer to files using directory traversals. iis backslash <yes|no> Normalizes backslashes to slashes. The non strict option assumes the URI is between the first and second space even if there is no valid HTTP identifier after the second space. no pipeline req This option turns HTTP pipeline decoding off. iis delimiter <yes|no> This started out being IIS-specific. non strict This option turns on non-strict URI parsing for the broken way in which Apache servers will decode a URI. 22. But you can still get an alert on this option. specify no. so if the emulated web server is Apache. 51 .html alsjdfk alsj lj aj la jsj s\n”. This picks up the Apache chunk encoding exploits. By default. So a request URI of “/foo\bar” gets normalized to “/foo/bar. 21. This is again an IIS emulation. 
multi slash <yes|no> This option normalizes multiple slashes in a row. then configure with a yes. It is only inspected with the generic pattern matching. apache whitespace <yes|no> This option deals with the non-RFC standard of using tab for a space delimiter. The directory: /foo/fake\_dir/. Only use this option on servers that will accept URIs like this: ”get /index. Apache uses this. specify yes. but Apache takes this non-standard delimiter was well. so something like: “foo/////////bar” get normalized to “foo/bar.15.” If you want an alert when multiple slashes are seen. 19. we always take this as standard since the most popular web servers accept it. but when this option is enabled. This alert may give false positives. Alerts on this option may be interesting. directory <yes|no> This option normalizes directory traversals and self-referential directories. pipeline requests are not decoded and analyzed per HTTP protocol field. enable this option. 18. use no. Since this is common. and is a performance enhancement if needed./bar gets normalized to: /foo/bar If you want to configure an alert. 20. otherwise./bar gets normalized to: /foo/bar The directory: /foo/.” 17. but may also be false positive prone. 16. chunk length <non-zero positive integer> This option is an anomaly detector for abnormally large chunk sizes.. etc. The argument specifies the max char directory length for URL directory. If a url directory is larger than this argument size. This should limit the alerts to IDS evasion type attacks. A good argument value is 300 characters. 26. 28. oversize dir length <non-zero positive integer> This option takes a non-zero positive integer as an argument. No argument is specified. if we have the following rule set: alert tcp any any -> any 80 ( msg:"content". and if there are none available.). 30. Requests that exceed this length will cause a ”Long Header” alert. specify an integer argument to max header length of 1 to 65535. 
webroot <yes|no> This option generates an alert when a directory traversal traverses past the web server root directory. If the proxy alert keyword is not enabled. The allow proxy use keyword is just a way to suppress unauthorized proxy use for an authorized server. Specifying a value of 0 is treated as disabling the alert. It’s important to note that if this option is used without any uricontent rules. like whisker -i 4. tab uri delimiter This option turns on the use of the tab character (0x09) as a delimiter for a URI. ) and the we inspect the following URI: get /foo.0\r\n\r\n No alert will be generated when inspect uri only is enabled. multi-slash. This has no effect on HTTP rules in the rule set. max header length <positive integer up to 65535> This option takes an integer as an argument. IIS does not. an alert is generated. you’ll catch most of the attacks. 25. then there is nothing to inspect. Apache accepts tab as a delimiter. No argument is specified.). no alerts This option turns off all alerts that are generated by the HTTP Inspect preprocessor module. For IIS. This means that no alert will be generated if the proxy alert global keyword has been used. It only alerts when the directory traversals go past the web server root directory. because it doesn’t alert on directory traversals that stay within the web server directory structure. 31. 29. a tab in the URI should be treated as any other character. multi-slash. not including Cookies (using the same configuration parameters as the URI normalization (ie.23. directory. To enable. 52 .. normalize headers This option turns on normalization for HTTP Header Fields. the user is allowing proxy use on this server. content: "foo". As this field usually contains 90-95% of the web attacks. This is obvious since the URI is only inspected with uricontent rules. It is useful for normalizing Referrer URIs that may appear in the HTTP Header. which is associated with certain web attacks. 27. 
only the URI portion of HTTP requests will be inspected for attacks. then this option does nothing. This alert is off by default. The integer is the maximum length allowed for an HTTP client request header field. inspect uri only This is a performance optimization. This generates much fewer false positives than the directory option. When enabled.htm http/1. etc. It is useful for normalizing data in HTTP Cookies that may be encoded. Whether this option is on or not. 24. For example. allow proxy use By specifying this keyword. So if you need extra performance. enable this optimization. a tab is treated as whitespace if a space character (0x20) precedes it. then no inspection will take place. directory. Examples preprocessor http_inspect_server: \ server 10. 53 . It saves state between individual packets. The alert is off by default. data header data body sections.7 SMTP Preprocessor The SMTP preprocessor is an SMTP decoder for user applications. max headers <positive integer up to 1024> This option takes an integer as an argument. However maintaining correct state is dependent on the reassembly of the client side of the stream (ie.2. To enable.32. Specifying a value of 0 is treated as disabling the alert. Given a data buffer. It will also mark the command. SMTP will decode the buffer and find SMTP commands and responses. a loss of coherent stream data results in a loss of state).1. Requests that contain more HTTP Headers than this value will cause a ”Max Header” alert. The integer is the maximum number of HTTP client request header fields. specify an integer argumnet to max headers of 1 to 1024. SMTP handles stateless and stateful processing. and TLS data.1. regular mail data can be ignored for an additional performance boost. no alerts Turn off all alerts for this preprocessor. this will include 25 and possibly 465. RFC 2821 recommends 512 as a maximum command line length. 3. 9. 
Configuration

SMTP has the usual configuration items, such as port and inspection type. In addition, SMTP command lines can be normalized to remove extraneous spaces, and TLS-encrypted traffic can be ignored, which improves performance. The configuration options are described below:

1. ports { <port> [<port>] ... } This specifies on what ports to check for SMTP data.

2. inspection type <stateful | stateless> Indicate whether to operate in stateful or stateless mode.

3. normalize <all | none | cmds> This turns on normalization. Normalization checks for more than one space character after a command. Space characters are defined as space (ASCII 0x20) or tab (ASCII 0x09). all checks all commands; none turns off normalization for all commands; cmds just checks commands listed with the normalize cmds parameter.

4. ignore data Ignore data section of mail (except for mail headers) when processing rules. Since so few (none in the current snort rule set) exploits are against mail data, this is relatively safe to do and can improve the performance of data inspection.

5. ignore tls data Ignore TLS-encrypted data when processing rules.

6. max command line len <int> Alert if an SMTP command line is longer than this value. Absence of this option or a "0" means never alert on command line length. RFC 2821 recommends 512 as a maximum command line length.

7. max header line len <int> Alert if an SMTP DATA header line is longer than this value. Absence of this option or a "0" means never alert on data header line length. RFC 2821 recommends 1024 as a maximum data header line length.

8. max response line len <int> Alert if an SMTP response line is longer than this value. Absence of this option or a "0" means never alert on response line length. RFC 2821 recommends 512 as a maximum response line length.

9. alt max command line len <int> { <cmd> [<cmd>] } Overrides max command line len for specific commands.

10. invalid cmds { <Space-delimited list of commands> } Alert if this command is sent from client side. Default is an empty list.

11. valid cmds { <Space-delimited list of commands> } List of valid commands. We do not alert on commands in this list. Default is an empty list.

12. alert unknown cmds Alert if we don't recognize command. Default is off.

13. normalize cmds { <Space-delimited list of commands> } Normalize this list of commands. Default is { RCPT VRFY EXPN }.

14. xlink2state { enable | disable [drop] } Enable/disable xlink2state alert. Drop if alerted. Default is enable.

15. print cmds List all commands understood by the preprocessor. This is not normally printed out with the configuration because it can print so much data.

Note: RCPT TO: and MAIL FROM: are SMTP commands. For the preprocessor configuration, they are referred to as RCPT and MAIL, respectively. Within the code, the preprocessor actually maps RCPT and MAIL to the correct command name.

2.2.8 FTP/Telnet Preprocessor

FTP/Telnet is an improvement to the Telnet decoder and provides stateful inspection capability for both FTP and Telnet data streams. FTP/Telnet will decode the stream, identifying FTP commands and responses and Telnet escape sequences, and normalize the fields. FTP/Telnet works on both client requests and server responses.

FTP/Telnet has the capability to handle stateless processing, meaning it only looks for information on a packet-by-packet basis. The default is to run FTP/Telnet in stateful inspection mode, meaning it looks for information and handles reassembled data correctly.

FTP/Telnet has a very "rich" user configuration, similar to that of HTTP Inspect (See 2.2.6). Users can configure individual FTP servers and clients with a variety of options, which should allow the user to emulate any type of FTP server or FTP Client.

Within FTP/Telnet, there are four areas of configuration: Global, Telnet, FTP Client, and FTP Server.

Global Configuration

The global configuration deals with configuration options that determine the global functioning of FTP/Telnet. The FTP/Telnet global configuration must appear before the other three areas of configuration.

Configuration

1. inspection type <stateful | stateless> This indicates whether to operate in stateful or stateless mode.

2. encrypted traffic <yes|no> This option enables detection and alerting on encrypted Telnet and FTP command channels.

! NOTE: When inspection type is in stateless mode, checks for encrypted traffic will occur on every packet, whereas in stateful mode, a particular session will be noted as encrypted and not inspected any further.

3. check encrypted Instructs the preprocessor to continue to check an encrypted session for a subsequent command to cease encryption. It is only applicable when the mode is stateful.

! NOTE: Some configuration options have an argument of yes or no. This argument specifies whether the user wants the configuration option to generate a ftptelnet alert or not. The presence of the option indicates the option itself is on, while the yes/no argument applies to the alerting functionality associated with that option.

You can only have a single global configuration; you'll get an error if you try otherwise. The following example gives the generic global configuration format:

Format

preprocessor ftp_telnet: \
    global \
    inspection_type stateful \
    encrypted_traffic yes \
    check_encrypted

Example Global Configuration

preprocessor ftp_telnet: \
    global inspection_type stateful encrypted_traffic no
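For contrast with the stateful example above, a stateless global configuration (a sketch only; the option values shown are illustrative, not recommendations) would look like:

preprocessor ftp_telnet: \
    global \
    inspection_type stateless \
    encrypted_traffic no

In stateless mode the preprocessor examines each packet in isolation, so checks for encrypted traffic are repeated on every packet rather than being settled once per session.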
Telnet Configuration

The telnet configuration deals with configuration options that determine the functioning of the Telnet portion of the preprocessor.

Configuration

1. ports {<port> [<port>< ... >]} This is how the user configures which ports to decode as telnet traffic. Typically port 23 will be included. SSH tunnels cannot be decoded, so adding port 22 will only yield false positives.

2. normalize This option tells the preprocessor to normalize the telnet traffic by eliminating the telnet escape sequences. It functions similarly to its predecessor, the telnet decode preprocessor. Rules written with 'raw' content options will ignore the normalized buffer that is created when this option is in use.

3. ayt attack thresh < number > This option causes the preprocessor to alert when the number of consecutive telnet Are You There (AYT) commands reaches the number specified.

4. detect anomalies In order to support certain options, the preprocessor tracks Telnet subnegotiation: per the Telnet RFC, subnegotiation begins with SB (subnegotiation begin) and must end with an SE (subnegotiation end). However, certain implementations of Telnet servers will ignore the SB without a corresponding SE. This is anomalous behavior which could be an evasion case; being that FTP uses the Telnet protocol on the control connection, it is also susceptible to this behavior. The detect anomalies option enables alerting on Telnet SB without the corresponding SE.

FTP Server Configuration

Default

This configuration supplies the default server configuration for any FTP server that is not individually configured. Most of your FTP servers will most likely end up using the default configuration.

Example Default FTP Server Configuration

preprocessor ftp_telnet_protocol: \
    ftp server default ports { 21 }

Refer to 60 for the list of options set in default ftp server configuration.

Configuration by IP Address

This format is very similar to "default", the only difference being that specific IPs can be configured; subsequent instances will override previously set values.

Example IP specific FTP Server Configuration

preprocessor ftp_telnet_protocol: \
    ftp server 10.1.1.1 ports { 21 } ftp_cmds { XPWD XCWD }

FTP Server Configuration Options

1. ports {<port> [<port>< ... >]} This is how the user configures which ports to decode as FTP command channel traffic. Typically port 21 will be included.

2. print cmds During initialization, this option will cause Snort to print the configuration for each of the FTP commands for this server.

3. ftp cmds {cmd[cmd]} This option specifies a list of additional commands allowed by this server. This may be used to allow the use of the 'X' commands identified in RFC 775, as well as any additional commands as needed. For example:

ftp_cmds { XPWD XCWD XCUP XMKD XRMD }

4. def max param len <number> This specifies the default maximum allowed parameter length for an FTP command. It can be used as a basic buffer overflow detection.

5. alt max param len <number> {cmd[cmd]} This specifies the maximum allowed parameter length for the given FTP command(s). It can be used as a more specific buffer overflow detection. For example, the USER command – usernames may be no longer than 16 bytes, so the appropriate configuration would be:

alt_max_param_len 16 { USER }

6. cmd validity cmd < fmt > This option specifies the valid format for parameters of a given command. fmt must be enclosed in <>'s and may contain the following:

Value               Description
int                 Parameter must be an integer
number              Parameter must be an integer between 1 and 255
char <chars>        Parameter must be a single character, one of <chars>
date <datefmt>      Parameter follows format specified, where:
                    n Number, C Character, [] optional format enclosed,
                    | OR, {} choice of options, . + - literal
string              Parameter is a string (effectively unrestricted)
host_port           Parameter must be a host/port specified, per RFC 959
long_host_port      Parameter must be a long host port specified, per RFC 1639
extended_host_port  Parameter must be an extended host port specified, per RFC 2428
{}, separated by |  One of the choices enclosed within {}
{}. [] optional     One of the choices enclosed within {}, optional value
                    enclosed within []

7. telnet cmds <yes|no> This option turns on detection and alerting when telnet escape sequences are seen on the FTP command channel. Injection of telnet escape sequences could be used as an evasion attempt on an FTP command channel.

8. ignore telnet erase cmds <yes|no> This option allows Snort to ignore telnet escape sequences for erase character (TNC EAC) and erase line (TNC EAL) when normalizing FTP command channel. Some FTP servers do not process those telnet escape sequences.

Examples of the cmd validity option are shown below. These examples are the default checks, per RFC 959 and others, performed by the preprocessor.

# This allows additional modes, including mode Z which allows for
# zip-style compression.
cmd_validity MODE < char ASBCZ >

# Allow for a date in the MDTM command.
cmd_validity MDTM < [ date nnnnnnnnnnnnnn[.n[n[n]]] ] string >

MDTM is an off case that is worth discussing. While not part of an established standard, certain FTP servers accept MDTM commands that set the modification time on a file. The most common among servers that do accept a format using YYYYMMDDHHmmss[+|-]TZ format. The example above is for the first case (time format as specified in http://www.ietf.org/internet-drafts/draft-ietf-ftpext-mlst-16.txt). To check validity for a server that uses the TZ format, use the following:

cmd_validity MDTM < [ date nnnnnnnnnnnnnn[{+|-}n[n]] ] string >
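As an additional illustration of the fmt grammar (a sketch; the ALLO command and the argument layout shown here are an example, adapt them to your server), a check that accepts an integer argument followed by an optional "R <integer>" pair can be written using the optional-group syntax:

cmd_validity ALLO < int [ char R int ] >

Under this check a parameter list of a bare integer is valid, as is an integer followed by the character R and a second integer; any other parameter layout for ALLO would generate an alert.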
9. data chan This option causes the rest of snort (rules, other preprocessors) to ignore FTP data channel connections. Using this option means that NO INSPECTION other than TCP state will be performed on FTP data transfers. It can be used to improve performance, especially with large file transfers from a trusted source. If your rule set includes virus-type rules, it is recommended that this option not be used. Use of the "data chan" option is deprecated in favor of the "ignore data chan" option; "data chan" will be removed in a future release.

10. ignore data chan <yes|no> This option causes the rest of Snort (rules, other preprocessors) to ignore FTP data channel connections. Setting this option to "yes" means that NO INSPECTION other than TCP state will be performed on FTP data transfers. It can be used to improve performance, especially with large file transfers from a trusted source. If your rule set includes virus-type rules, it is recommended that this option not be used.

FTP Server Base Configuration Options

The base FTP server configuration is as follows. Options specified in the configuration file will modify this set of options. FTP commands are added to the set of allowed commands.

FTP Client Configuration

Similar to the FTP server configuration, the FTP client configuration has two types: default, and by IP address.

Default

This configuration supplies the default client configuration for any FTP client that is not individually configured. Most of your FTP clients will most likely end up using the default configuration.

Example Default FTP Client Configuration

preprocessor ftp_telnet_protocol: \
    ftp client default bounce no max_resp_len 200

Configuration by IP Address

This format is very similar to "default", the only difference being that specific IPs can be configured.

Example IP specific FTP Client Configuration

preprocessor ftp_telnet_protocol: \
    ftp client 10.1.1.1 bounce yes max_resp_len 500

FTP Client Configuration Options

1. max resp len <number> This specifies the maximum allowed response length to an FTP command accepted by the client. It can be used as a basic buffer overflow detection.

2. bounce <yes|no> This option turns on detection and alerting of FTP bounce attacks. An FTP bounce attack occurs when the FTP PORT command is issued and the specified host does not match the host of the client.

3. bounce to < CIDR,[port|portlow,porthi] > When the bounce option is turned on, this allows the PORT command to use the IP address (in CIDR format) and port (or inclusive port range) without generating an alert. It can be used to deal with proxied FTP connections where the FTP data channel is different from the client.

A few examples:

• Allow bounces to 192.168.1.1 port 20020 – ie, the use of PORT 192,168,1,1,78,52.

  bounce_to { 192.168.1.1,20020 }

• Allow bounces to 192.168.1.1 ports 20020 through 20040 – ie, the use of PORT 192,168,1,1,78,xx, where xx is 52 through 72 inclusive.

  bounce_to { 192.168.1.1,20020,20040 }

• Allow bounces to 192.168.1.1 port 20020 and 192.168.1.2 port 20030.

  bounce_to { 192.168.1.1,20020 192.168.1.2,20030 }

4. telnet cmds <yes|no> This option turns on detection and alerting when telnet escape sequences are seen on the FTP command channel. Injection of telnet escape sequences could be used as an evasion attempt on an FTP command channel.

5. ignore telnet erase cmds <yes|no> This option allows Snort to ignore telnet escape sequences for erase character (TNC EAC) and erase line (TNC EAL) when normalizing FTP command channel. Some FTP clients do not process those telnet escape sequences.

Examples/Default Configuration from snort.conf

preprocessor ftp_telnet: \
    global \
    encrypted_traffic yes \
    inspection_type stateful

preprocessor ftp_telnet_protocol: \
    telnet \
    normalize \
    ayt_attack_thresh 200

# This is consistent with the FTP rules as of 18 Sept 2004.
# Set CWD to allow parameter length of 200
# MODE has an additional mode of Z (compressed)
# Check for string formats in USER & PASS commands
# Check MDTM commands that set modification time on the file.
preprocessor ftp_telnet_protocol: \
    ftp server default \
    def_max_param_len 100 \
    alt_max_param_len 200 { CWD } \
    cmd_validity MODE < char ASBCZ > \
    cmd_validity MDTM < [ date nnnnnnnnnnnnnn[.n[n[n]]] ] string >

2.2.9 SSH

The SSH preprocessor detects the following exploits: Challenge-Response Buffer Overflow, CRC 32, Secure CRT, and the Protocol Mismatch exploit.

Both Challenge-Response Overflow and CRC 32 attacks occur after the key exchange, and are therefore encrypted. Both attacks involve sending a large payload (20kb+) to the server immediately after the authentication challenge. To detect the attacks, the SSH preprocessor counts the number of bytes transmitted to the server. If those bytes exceed a predefined limit within a predefined number of packets, an alert is generated. Since the Challenge-Response Overflow only affects SSHv2 and CRC 32 only affects SSHv1, the SSH version string exchange is used to distinguish the attacks. The Secure CRT and protocol mismatch exploits are observable before the key exchange.

Configuration

By default, all alerts are disabled and the preprocessor checks traffic on port 22. The available configuration options are described below.

1. server ports {<port> [<port>< ... >]} This option specifies which ports the SSH preprocessor should inspect traffic to.

2. max encrypted packets < number > The number of encrypted packets that Snort will inspect before ignoring a given SSH session. The SSH vulnerabilities that Snort can detect all happen at the very beginning of an SSH session. Once max encrypted packets packets have been seen, Snort ignores the session to increase performance.

3. max client bytes < number > The number of unanswered bytes allowed to be transferred before alerting on Challenge-Response Overflow or CRC 32. This number must be hit before max encrypted packets packets are sent, or else Snort will ignore the traffic.

4. max server version len < number >
the preprocessor will stop processing traffic for a given session. 6. the rules are given reassembled DCE/RPC data to examine. 8. and only decodes SMB to get to the potential DCE/RPC requests carried by SMB. if the presumed server generates client traffic. After max encrypted packets is reached. Currently. If Challenge-Respone Overflow or CRC 32 false positive. the preprocessor only handles desegmentation (at SMB and TCP layers) and defragmentation of DCE/RPC. enable protomismatch Enables checking for the Protocol Mismatch exploit. autodetect Attempt to automatically detect SSH.2. Other methods will be handled in future versions of the preprocessor. 9. enable respoverflow Enables checking for the Challenge-Response Overflow exploit. At the SMB layer. enable paysize Enables alerts for invalid payload sizes. This option is not configured by default. They are described below: • autodetect In addition to configured ports. in kilobytes.Autodetection of SMB is done by looking for ”\xFFSMB” at the start of the SMB data. in bytes. Autodetection of DCE/RPC is not as reliable. it ends processing. this option should not be configured as SMB segmentation provides for an easy evasion opportunity. one byte is checked for DCE/RPC version 5 and another for a DCE/RPC PDU type of Request... Since the preprocessor looks at TCP reassembled packets (to avoid 64 . two bytes are checked in the packet. • max frag size <number> Maximum DCE/RPC fragment size to put in defragmentation buffer. before the final desegmentation or defragmentation of the DCE/RPC request takes place. • ports dcerpc { <port> [<port> <.. Configuration The proprocessor has several optional configuration options. Default is port 135. • alert memcap Alert if memcap is exceeded. This will potentially catch an attack earlier and is useful if in inline mode.. This option is not configured by default. • disable dcerpc frag Do not do DCE/RPC defragmentation. This option is not configured by default. 
as well as checking the NetBIOS header (which is always present for SMB) for the type ”Session Message”. • ports smb { <port> [<port> <. If subsequent checks are nonsensical. Currently.>] } Ports that the preprocessor monitors for DCE/RPC over TCP traffic. • disable smb frag Do not do SMB desegmentation. the preprocessor proceeds with the assumption that it is looking at DCE/RPC data. try to autodetect DCE/RPC sessions. Unless you are experiencing severe performance issues.>] } Ports that the preprocessor monitors for SMB traffic. this option should not be configured as DCE/RPC fragmentation provides for an easy evasion opportunity. Note that DCE/RPC can run on practically any port in addition to the more common ports. If both match. • memcap <number> Maximum amount of memory available to the DCE/RPC preprocessor for desegmentation and defragmentation. Unless you are experiencing severe performance issues. Assuming that the data is a DCE/RPC header. Default are ports 139 and 445. • reassemble increment <number> This option specifies how often the preprocessor should create a reassembled packet to send to the detection engine with the data that’s been accrued in the segmentation and fragmentation reassembly buffers. Default is 100000 kilobytes. Default is 3000 bytes. This option is not configured by default. Set memory cap for desegmentation/defragmentation to 200. If not specified. Set memory cap for desegmentation/defragmentation to 50. Don’t do desegmentation on SMB writes. Create a reassembly packet every time through the preprocessor if there is data in the desegmentation/defragmentation buffers. that in using this option. don’t do DCE/RPC defragmentation. The alert is gid 130. Note At the current time. Configuration Examples In addition to defaults. preprocessor dcerpc: \ ports dcerpc { 135 2103 } \ memcap 200000 \ reassemble_increment 1 Default Configuration If no options are given to the preprocessor. 
preprocessor dcerpc: \ autodetect \ disable_smb_frag \ max_frag_size 4000 In addition to defaults. 65 . sid 1. autodetect SMB and DCE/RPC sessions on non-configured ports.000 kilobytes.) preprocessor dcerpc: \ disable_dcerpc_frag \ memcap 50000 In addition to the defaults. Not recommended if Snort is running in passive mode as it’s not really needed. which is triggered when the preprocessor has reached the memcap limit for memory allocation. A value of 0 will in effect disable this option as well. the last packet of an attack using DCE/RPC segmented/fragmented evasion techniques may have already gone through before the preprocessor looks at it. (Since no DCE/RPC defragmentation will be done the memory cap will only apply to desegmentation. however. this option is disabled. there is not much to do with the dcerpc preprocessor other than turn it on and let it reassemble fragmented DCE/RPC packets. Truncate DCE/RPC fragment if greater than 4000 bytes.TCP overlaps and segmentation evasions). Note. the default configuration will look like: preprocessor dcerpc: \ ports smb { 139 445 } \ ports dcerpc { 135 } \ max_frag_size 3000 \ memcap 100000 \ reassemble_increment 0 Preprocessor Events There is currently only one alert.000 kilobytes. The argument to the option specifies how often the preprocessor should create a reassembled packet if there is data in the segmentation/fragmentation buffers. detect on DCE/RPC (or TCP) ports 135 and 2103 (overrides default). so looking at the data early will likely catch the attack before all of the exploit data has gone through. Snort will potentially take a performance hit. 11 DNS The DNS preprocessor decodes DNS Responses and can detect the following exploits: DNS Client RData Overflow. such as the handshake. 2. The SSL Dynamic Preprocessor (SSLPP) decodes SSL and TLS traffic and optionally determines if and when Snort should stop inspection of it. Therefore.2. 
Configuration

By default, all alerts are disabled and the preprocessor checks traffic on port 53. The DNS preprocessor looks at DNS Response traffic over UDP and TCP, and it requires the Stream preprocessor to be enabled for TCP decoding. The available configuration options are described below.

1. ports {<port> [<port>< ... >]}
   This option specifies the source ports on which the DNS preprocessor should inspect traffic.

2. enable_obsolete_types
   Alert on Obsolete (per RFC 1035) Record Types.

3. enable_rdata_overflow
   Check for the DNS Client RData overflow vulnerability.

Examples/Default Configuration from snort.conf

    preprocessor dns: \
        ports { 53 } \
        enable_rdata_overflow

Looks for traffic on DNS server port 53. Does not alert on obsolete or experimental RData record types.

2.2.12 SSL/TLS

Encrypted traffic should be ignored by Snort, both for performance reasons and to reduce false positives. The SSL Dynamic Preprocessor (SSLPP) decodes SSL and TLS traffic and optionally determines if and when Snort should stop inspection of it.

Typically, SSL is used over port 443 as HTTPS. By enabling the SSLPP to inspect port 443 and enabling the noinspect_encrypted option, only the SSL handshake of each connection will be inspected. Once the traffic is determined to be encrypted, no further inspection of the data on the connection is made.

By default, SSLPP looks for a handshake followed by encrypted traffic traveling to both sides. If one side responds with an indication that something has failed, the session is not marked as encrypted. Verifying that faultless encrypted traffic is sent from both endpoints ensures two things: the last client-side handshake packet was not crafted to evade Snort, and the traffic is legitimately encrypted. SSLPP will not operate on TCP sessions picked up midstream, and it will cease operation on a session if it loses state because of missing data (dropped packets).

In some cases, especially when packets may be missed, the only observed response from one endpoint will be TCP ACKs. Therefore, if a user knows that server-side encrypted data can be trusted to mark the session as encrypted, the 'trustservers' option, documented below, should be used.

Configuration

1. ports {<port> [<port>< ... >]}
   This option specifies which ports SSLPP will inspect traffic on.

2. noinspect_encrypted
   Disable inspection of traffic that is encrypted. Default is off.

3. trustservers
   Disables the requirement that application (encrypted) data must be observed on both sides of the session before a session is marked encrypted. Use this option for slightly better performance if you trust that your servers are not compromised. This option requires noinspect_encrypted to be useful. Default is off.

Examples/Default Configuration from snort.conf

    preprocessor ssl: noinspect_encrypted

Enables the SSL preprocessor and tells it to disable inspection on encrypted traffic.

2.2.13 ARP Spoof Preprocessor

The ARP spoof preprocessor decodes ARP packets and detects ARP attacks, unicast ARP requests, and inconsistent Ethernet to IP mapping. The preprocessor merely looks for Ethernet address inconsistencies.

When no arguments are specified to arpspoof, the preprocessor inspects Ethernet addresses and the addresses in the ARP packets. When an inconsistency occurs, an alert with GID 112 and SID 2 or 3 is generated.

When "-unicast" is specified as the argument of arpspoof, the preprocessor checks for unicast ARP requests. An alert with GID 112 and SID 1 will be generated if a unicast ARP request is detected.

Specify a pair of IP and hardware address as the argument to arpspoof_detect_host. The preprocessor will use this list when detecting ARP cache overwrite attacks; alert SID 4 is used in this case. Specify one host IP/MAC combo per line. The host with the IP address should be on the same layer 2 segment as Snort.

Format

    preprocessor arpspoof[: -unicast]
    preprocessor arpspoof_detect_host: ip mac
Option    Description
ip        IP address.
mac       The Ethernet address corresponding to the preceding IP.

Example Configuration

The first example configuration does neither unicast detection nor ARP mapping monitoring:

    preprocessor arpspoof

The next example configuration does not do unicast detection but monitors ARP mapping for hosts 192.168.40.1 and 192.168.40.2:

    preprocessor arpspoof
    preprocessor arpspoof_detect_host: 192.168.40.1 f0:0f:00:f0:0f:00
    preprocessor arpspoof_detect_host: 192.168.40.2 f0:0f:00:f0:0f:01

The third example configuration has unicast detection enabled:

    preprocessor arpspoof: -unicast
    preprocessor arpspoof_detect_host: 192.168.40.1 f0:0f:00:f0:0f:00
    preprocessor arpspoof_detect_host: 192.168.40.2 f0:0f:00:f0:0f:01

2.2.14 DCE/RPC 2 Preprocessor

The main purpose of the preprocessor is to perform SMB desegmentation and DCE/RPC defragmentation to avoid rule evasion using these techniques. New rule options have been implemented to improve performance, reduce false positives and reduce the number and complexity of DCE/RPC based rules.

Dependency Requirements

For proper functioning of the preprocessor:

• The dcerpc preprocessor (the initial iteration) must be disabled.
• Stream session tracking must be enabled, i.e. stream5. The preprocessor requires a session tracker to keep its data.
• Stream reassembly must be performed for TCP sessions. If it is decided that a session is SMB or DCE/RPC, either through configured ports, servers or autodetecting, the dcerpc2 preprocessor will enable stream reassembly for that session if necessary.
• IP defragmentation should be enabled, i.e. the frag3 preprocessor should be enabled and configured.

The following transports are supported for DCE/RPC: SMB, TCP, UDP and RPC over HTTP version 1 proxy and server. SMB desegmentation is performed for the following commands that can be used to transport DCE/RPC requests and responses: Write, Write Block Raw, Write and Close, Write AndX, Transaction, Transaction Secondary, Read, Read Block Raw and Read AndX.
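The dependency requirements above translate into an ordering of preprocessor lines in snort.conf. The following is a minimal sketch, not taken from the manual, assuming stream5 and frag3 with otherwise default settings (the port list is illustrative):

    preprocessor frag3_global
    preprocessor frag3_engine
    preprocessor stream5_global: track_tcp yes
    preprocessor stream5_tcp: ports both 135 139 445
    # no "preprocessor dcerpc" line - the initial iteration must be disabled
    preprocessor dcerpc2
    preprocessor dcerpc2_server: default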
Target Based

There are enough important differences between Windows and Samba versions that a target based approach has been implemented. Some important differences:

Named pipe instance tracking

A combination of valid login handle (or UID), share handle (or TID) and file/named pipe handle (or FID) must be used to write data to a named pipe. The binding between these is dependent on OS/software version.

Samba 3.0.22 and earlier
Any valid UID and TID, along with a valid FID, can be used to make a request. However, if the TID used in creating the FID is deleted (via a tree disconnect), the FID that was created using this TID becomes invalid, i.e. no more requests can be written to that named pipe instance.

Samba greater than 3.0.22
Any valid TID, along with a valid FID, can be used to make a request. However, if the TID used to create the FID is deleted (via a tree disconnect), the FID that was created using this TID becomes invalid, i.e. no more requests can be written to that named pipe instance. If the UID used to create the named pipe instance is deleted (via a Logoff AndX), since it is necessary in making a request to the named pipe, the FID becomes invalid.

Windows 2003, Windows XP and Windows Vista
These Windows versions require strict binding between the UID, TID and FID used to make a request to a named pipe instance. Therefore, deleting either the UID or TID invalidates the FID.

Windows 2000
Windows 2000 is interesting in that the first request to a named pipe must use the same binding as that of the other Windows versions. However, requests after that follow the same binding as Samba 3.0.22 and earlier. It also follows Samba greater than 3.0.22 in that deleting the UID or TID used to create the named pipe instance also invalidates it.

Accepted SMB commands

Samba in particular does not recognize certain commands under an IPC$ tree.

Samba (all versions)
Under an IPC$ tree, does not accept:

    Open
    Write And Close
    Read
    Read Block Raw
    Write Block Raw

Windows (all versions)
Accepts all of the above commands under an IPC$ tree.

AndX command chaining

Windows is very strict in what command combinations it allows to be chained. Samba, on the other hand, is very lax and allows some nonsensical combinations, e.g. multiple logins and tree connects (only one place to return handles for these), and login/logoff and tree connect/tree disconnect. Ultimately, we don't want to keep track of data that the server won't accept. An evasion possibility would be accepting a fragment in a request that the server won't accept that gets sandwiched between exploit data.

Transaction tracking

The differences between a Transaction request and using one of the Write* commands to write data to a named pipe are that (1) a Transaction performs the operations of a write and a read from the named pipe, whereas in using the Write* commands, the client has to explicitly send one of the Read* requests to tell the server to send the response, and (2) a Transaction request is not written to the named pipe until all of the data is received (via potential Transaction Secondary requests), whereas with the Write* commands, data is written to the named pipe as it is received by the server. Multiple Transaction requests can be made simultaneously to the same named pipe. These requests can also be segmented with Transaction Secondary commands. It is necessary to track these requests so as not to munge them together (which would be a potential evasion opportunity). What distinguishes them (when the same named pipe is being written to, i.e. having the same FID) are fields in the SMB header representing a process id (PID) and multiplex id (MID).

Windows (all versions)
Uses a combination of PID and MID to define a "thread". The PID represents the process this request is a part of; an MID represents different sub-processes within a process (or under a PID). Segments for each "thread" are stored separately and written to the named pipe when all segments are received.

Samba (all versions)
Uses just the MID to define a "thread".

Multiple Bind requests

A Bind request is the first request that must be made in a connection-oriented DCE/RPC session in order to specify the interface/interfaces that one wants to communicate with.

Windows (all versions)
For all of the Windows versions, only one Bind can ever be made on a session, whether or not it succeeds or fails. Any binding after that must use the Alter Context request.

Samba 3.0.20 and earlier
Any amount of Bind requests can be made. If another Bind is made, all previous interface bindings are invalidated.

Samba later than 3.0.20
Another Bind request can be made if the first failed and no interfaces were successfully bound to. If a Bind after a successful Bind is made, all previous interface bindings are invalidated.

DCE/RPC Fragmented requests - Context ID

Each fragment in a fragmented request carries the context id of the bound interface it wants to make the request to.

Windows (all versions)
The context id that is ultimately used for the request is contained in the first fragment. The context id field in any other fragment can contain any value.

Samba (all versions)
The context id that is ultimately used for the request is contained in the last fragment. The context id field in any other fragment can contain any value.

DCE/RPC Fragmented requests - Operation number

Windows Vista
The opnum that is ultimately used for the request is contained in the first fragment.

DCE/RPC Stub data byte order

The byte order of the stub data is determined differently for Windows and Samba.

Samba (all versions)
The byte order of the stub data is that which is used in the request carrying the stub data.
For Samba (all versions), Windows 2000, Windows 2003 and Windows XP, the opnum that is ultimately used for the request is contained in the last fragment; the opnum field in any other fragment can contain any value.

Configuration

The dcerpc2 preprocessor has a global configuration and one or more server configurations.

Global Configuration

    preprocessor dcerpc2

Option syntax:

    memcap       = 1024-4194303 (kilobytes)
    max-frag-len = 1514-65535
    events       = pseudo-event | event | '[' event-list ']'
    pseudo-event = "none" | "all"
    event-list   = event | event ',' event-list
    event        = "memcap" | "smb" | "co" | "cl"
    re-thresh    = 0-65535

Option explanations

memcap
Specifies the maximum amount of run-time memory that can be allocated. Run-time memory includes any memory allocated after configuration. Default is 100 MB.

disable_defrag
Tells the preprocessor not to do DCE/RPC defragmentation. Default is to do defragmentation.

max_frag_len
Specifies the maximum fragment size that will be added to the defragmentation module. If a fragment is greater than this size, it is truncated before being added to the defragmentation module. Default is not set.

events
Specifies the classes of events to enable. (See the Events section for an enumeration and explanation of events.)

    memcap  Only one event: alert if the memcap is reached or exceeded.
    smb     Alert on events related to SMB processing.
    co      Stands for connection-oriented DCE/RPC. Alert on events related to
            connection-oriented DCE/RPC processing.
    cl      Stands for connectionless DCE/RPC. Alert on events related to
            connectionless DCE/RPC processing.

Defaults are smb, co and cl.

reassemble_threshold
Specifies a minimum number of bytes in the DCE/RPC desegmentation and defragmentation buffers before creating a reassembly packet to send to the detection engine. This option is useful in inline mode so as to potentially catch an exploit early before full defragmentation is done. Default is disabled. A value of 0 supplied as an argument to this option will, in effect, disable this option as well.

Option examples

    memcap 30000
    max_frag_len 16840
    events none
    events all
    events smb
    events co
    events [co]
    events [smb, co]
    events [memcap, smb, co, cl]
    reassemble_threshold 500

Configuration examples

    preprocessor dcerpc2
    preprocessor dcerpc2: memcap 300000
    preprocessor dcerpc2: max_frag_len 14440, disable_defrag
    preprocessor dcerpc2: memcap 50000, events smb
    preprocessor dcerpc2: events [memcap, co], reassemble_threshold 500

Default global configuration

    preprocessor dcerpc2: memcap 102400

Server Configuration

When processing DCE/RPC traffic, the default configuration is used if no net configurations match.
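As a worked example of the global options (this particular combination is illustrative, not from the manual), the following raises the memcap to 200 MB (204800 kilobytes), enables all event classes, and, for an inline deployment, asks for a reassembly packet once 500 bytes have accrued in the buffers:

    preprocessor dcerpc2: memcap 204800, events all, reassemble_threshold 500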
The dcerpc2 server configuration is optional:

    preprocessor dcerpc2_server

A dcerpc2 server configuration must start with the default or net option. The default and net options are mutually exclusive. At most one default configuration can be specified, and zero or more net configurations can be specified. If no default configuration is specified, default values will be used for the default configuration. For any dcerpc2 server configuration, if non-required options are not specified, the defaults will be used.

When processing DCE/RPC traffic, a net configuration matches if the packet's server IP address matches an IP address or net specified in the net configuration. If a net configuration matches, it will override the default configuration.

Note that port and ip variables defined in snort.conf CANNOT be used.

Option syntax:

    Option                          Argument     Required   Default
    default                         NONE         YES        NONE
    net                             <net>        YES        NONE
    policy                          <policy>     NO         policy WinXP
    detect                          <detect>     NO         detect [smb [139,445], tcp 135,
                                                            udp 135, rpc-over-http-server 593]
    autodetect                      <detect>     NO         autodetect [tcp 1025:, udp 1025:,
                                                            rpc-over-http-server 1025:]
    no_autodetect_http_proxy_ports  NONE         NO         OFF
    smb_invalid_shares              <shares>     NO         NONE
    smb_max_chain                   <max-chain>  NO         smb_max_chain 3

    policy      = "Win2000" | "Win2003" | "WinXP" | "WinVista" |
                  "Samba" | "Samba-3.0.22" | "Samba-3.0.20"
    detect      = "none" | detect-opt | '[' detect-list ']'
    detect-list = detect-opt | detect-opt ',' detect-list
    detect-opt  = transport | transport port-item | transport '[' port-list ']'
    transport   = "smb" | "tcp" | "udp" | "rpc-over-http-proxy" | "rpc-over-http-server"
    port-list   = port-item | port-item ',' port-list
    port-item   = port | port-range
    port-range  = ':' port | port ':' | port ':' port
    port        = 0-65535
    shares      = share | '[' share-list ']'
    share-list  = share | share ',' share-list
    share       = word | '"' word '"' | '"' var-word '"'
    word        = graphical ascii characters except ',', '"', ']', '['
    var-word    = graphical ascii characters except ',', '"', ']', '[', '$'
    max-chain   = 0-255

Because the Snort main parser treats '$' as the start of a variable and tries to expand it, shares with '$' must be enclosed in quotes.

Option explanations

default
Specifies that this configuration is for the default server configuration.

net
Specifies that this configuration is an IP or net specific configuration. The configuration will only apply to the IP addresses and nets supplied as an argument. The net option supports IPv6 addresses.

policy
Specifies the target-based policy to use when processing. Default is "WinXP".

detect
Specifies the DCE/RPC transport and server ports that should be detected on for the transport. Defaults are ports 139 and 445 for SMB, 135 for TCP and UDP, 593 for RPC over HTTP server and 80 for RPC over HTTP proxy.

autodetect
Specifies the DCE/RPC transport and server ports that the preprocessor should attempt to autodetect on for the transport. The autodetect ports are only queried if no detect transport/ports match the packet. Note that most dynamic DCE/RPC ports are above 1024 and ride directly over TCP or UDP; it would be very uncommon to see SMB on anything other than ports 139 and 445. Defaults are 1025-65535 for TCP, UDP and RPC over HTTP server.

no_autodetect_http_proxy_ports
By default, the preprocessor will always attempt to autodetect for ports specified in the detect configuration for rpc-over-http-proxy. This is because the proxy is likely a web server and the preprocessor should not look at all web traffic. This option is useful if the RPC over HTTP proxy configured with the detect option is only used to proxy DCE/RPC traffic. Default is to autodetect on RPC over HTTP proxy detect ports.

smb_invalid_shares
Specifies SMB shares that the preprocessor should alert on if an attempt is made to connect to them via a Tree Connect or Tree Connect AndX. Default is empty.

smb_max_chain
Specifies the maximum number of chained SMB AndX commands allowed before alerting. Default maximum is 3 chained commands. A value of 0 disables this option.

Option examples

    net 192.168.0.10
    net 192.168.0.0/24
    net 192.168.0.0/255.255.255.0
    net [192.168.0.0/24, feab:45b3::/32]
    detect none
    detect smb
    detect [smb]
    detect smb 445
    detect [smb 445]
    detect smb [139,445]
    detect [smb [139,445]]
    detect [smb, tcp]
    detect [smb 139, tcp [135,2103]]
    detect [smb [139,445], tcp 135, rpc-over-http-server [593,6002:6004]]
    autodetect none
    autodetect tcp
    autodetect [tcp]
    autodetect tcp 2025:
    autodetect [tcp 2025:]
    autodetect tcp [2025:3001,3003:]
    autodetect [tcp [2025:3001,3003:]]
    autodetect [tcp, udp]
    autodetect [tcp 2025:, udp 2025:]
    autodetect [tcp 2025:, udp, rpc-over-http-proxy [1025:6001,6005:]]
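Combining the server options above, a hypothetical deployment with Samba hosts on one subnet and a Windows XP default policy elsewhere might be sketched as follows (the addresses and shares are illustrative):

    preprocessor dcerpc2_server: default, policy WinXP
    preprocessor dcerpc2_server: net 192.168.1.0/24, policy Samba, \
        detect [smb [139,445], tcp 135], smb_invalid_shares ["C$", "ADMIN$"]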
"ADMIN$"]. smb_max_chain 1 preprocessor dcerpc2_server: \ net [10. udp 1025:. no_autodetect_http_proxy_ports preprocessor dcerpc2_server: \ net [10. autodetect none Default server configuration preprocessor dcerpc2_server: default. autodetect [tcp.56.feab:45b3::/126].3003:]] autodetect [tcp. "D$".11. (The byte count must always be greater than or equal to the data size. The preprocessor will alert if the remaining NetBIOS packet length is less than the size of the SMB command byte count specified in the command header. the preprocessor will alert. Some SMB commands. Some commands. The SMB id does not equal \xffSMB. Note that since the preprocessor does not yet support SMB2. Positive Response (only from server). Some commands require a minimum number of bytes after the command header. An SMB message type was specified in the header. The word count of the command header is invalid. especially the commands from the SMB Core implementation require a data format field that specifies the kind of data that will be coming next. the preprocessor will alert. Some commands require a specific format for the data. such as Transaction. SMB commands have pretty specific word counts and if the preprocessor sees a command with a word count that doesn’t jive with that command. id of \xfeSMB is turned away before an eventable point is reached. The preprocessor will alert if the byte count specified in the SMB command header is less than the data size specified in the SMB command. Valid types are: Message.SID 1 Description If the memory cap is reached and the preprocessor is configured to alert.) Some of the Core Protocol commands (from the initial SMB implementation) require that the byte count be some value greater than the data size exactly. the preprocessor will alert. 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 76 .. SMB events SID 2 Description An invalid NetBIOS Session Service type was specified in the header. Request (only from client). 
If this offset puts us before data that has already been processed or after the end of payload. Either a request was made by the server or a response was given by the client.. Retarget Response (only from server) and Keep Alive. The preprocessor will alert if the remaining NetBIOS packet length is less than the size of the SMB command header to be decoded. The preprocessor will alert if the format is not that which is expected for that command. If this field is zero. The preprocessor will alert if the NetBIOS Session Service length field contains a value less than the size of an SMB header. the preprocessor will alert. The preprocessor will alert if the byte count minus a predetermined amount based on the SMB command is not equal to the data size. Many SMB commands have a field containing an offset from the beginning of the SMB header to where the data the command is carrying starts. Negative Response (only from server).) The preprocessor will alert if the total amount of data sent in a transaction is greater than the total data count specified in the SMB command header. If a command requires this and the byte count is less than the minimum required byte count for that command. Windows does not allow this behavior. so it need to be queued with the request and dequeued with the response. however Samba does. however. it issues a Read* command to the server to tell it to send a response to the data it has written. the login handle returned by the server is used for the subsequent chained commands. (The preprocessor is only interested in named pipes as this is where DCE/RPC requests are written to. The combination of a Session Setup AndX command with a chained Logoff AndX command. Windows does not allow this behavior. The Read* request contains the file id associated with a named pipe instance that the preprocessor will ultimately send the data to. The Tree Disconnect command is used to disconnect from that share. 
It looks for a Tree Connect or Tree Connect AndX to the share. There should be under normal circumstances no more than a few pending tree connects at a time and the preprocessor will alert if this number is excessive. A Tree Connect AndX command is used to connect to a share. There is. The preprocessor will alert if it sees this. With commands that are chained after a Session Setup AndX request. only one place in the SMB header to return a login handle (or Uid). With AndX command chaining it is possible to chain multiple Tree Connect AndX commands within the same request. The preprocessor will alert if the connection-oriented DCE/RPC minor version contained in the header is not equal to 0. An Open AndX or Nt Create AndX command is used to open/create a file or named pipe. When a Session Setup AndX request is sent to the server.18 19 20 21 22 23 24 25 26 For the Tree Connect command (and not the Tree Connect AndX command). essentially logins in and logs off in the same request and is anomalous behavior. the preprocessor has to queue the requests up and wait for a server response to determine whether or not an IPC share was successfully connected to (which is what the preprocessor is interested in). The combination of a Open AndX or Nt Create AndX command with a chained Close command. however Samba does. The preprocessor will alert if it sees this.) The Close command is used to close that file or named pipe. only one place in the SMB header to return a tree handle (or Tid). 77 . If multiple Read* requests are sent to the server. The preprocessor will alert if it sees any of the invalid SMB shares configured. The preprocessor will alert if the number of chained commands in a single request is greater than or equal to the configured amount (default is 3). After a client is done writing data using the Write* commands. This is anomalous behavior and the preprocessor will alert if it happens. 
there is no indication in the Tree Connect response as to whether the share is IPC or not. There should be under normal circumstances no more than a few pending Read* requests at a time and the preprocessor will alert if this number is excessive. essentially connects to a share and disconnects from the same share in the same request and is anomalous behavior. In this case the preprocessor is concerned with the server response. This is anomalous behavior and the preprocessor will alert if it happens. There is. The combination of a Tree Connect AndX command with a chained Tree Disconnect command. A Logoff AndX request is sent by the client to indicate it wants to end the session and invalidate the login handle. essentially opens and closes the named pipe in the same request and is anomalous behavior. The preprocessor will alert if it sees this. Unlike the Tree Connect AndX response. the server responds (if the client successfully authenticates) which a user id or login handle. does not contain this file id. This is used by the client in subsequent requests to indicate that it has authenticated. however. The server response. they are responded to in the order they were sent. Connection-oriented DCE/RPC events SID 27 28 Description The preprocessor will alert if the connection-oriented DCE/RPC major version contained in the header is not equal to 5. however. With AndX command chaining it is possible to chain multiple Session Setup AndX commands within the same request. this number should stay the same for all fragments. The preprocessor will alert if in a Bind or Alter Context request. If a request if fragmented. If a request is fragmented. The preprocessor will alert if a fragment is larger than the maximum negotiated fragment length. The preprocessor will alert if the remaining fragment length is less than the remaining packet size. The preprocessor will alert if it changes in a fragment mid-request. there are no context items specified. 
wrapping the sequence number space produces strange behavior from the server. The preprocessor will alert if a non-last fragment is less than the size of the negotiated maximum fragment length. The preprocessor will alert if the connectionless DCE/RPC pdu type is not a valid pdu type. Most evasion techniques try to fragment the data as much as possible and usually each fragment comes well below the negotiated transmit size. The preprocessor will alert if the opnum changes in a fragment mid-request. The preprocessor will alert if the fragment length defined in the header is less than the size of the header. In testing. so this should be considered anomalous behavior. The preprocessor will alert if in a Bind or Alter Context request. The context id is a handle to a interface that was bound to. The operation number specifies which function the request is calling on the bound interface.29 30 31 32 33 34 35 36 37 38 39 The preprocessor will alert if the connection-oriented DCE/RPC PDU type contained in the header is not a valid PDU type. this number should stay the same for all fragments. there are no transfer syntaxes to go with the requested interface. Connectionless DCE/RPC events SID 40 41 42 43 Description The preprocessor will alert if the connectionless DCE/RPC major version is not equal to 4. It is anomalous behavior to attempt to change the byte order mid-session. The call id for a set of fragments in a fragmented request should stay the same (it is incremented for each complete request). The preprocessor will alert if the sequence number uses in a request is the same or less than a previously used sequence number on the session. The byte order of the request data is determined by the Bind in connection-oriented DCE/RPC for Windows. The preprocessor will alert if the context id changes in a fragment mid-request. The preprocessor will alert if the packet data length is less than the size of the connectionless header. . Syntax <uuid> [ ’. 
Rule Options

dce_iface

For DCE/RPC based rules it has been necessary to set flowbits based on a client bind to a service to avoid false positives. It is necessary for a client to bind to a service before being able to make a call to it. When a client sends a bind request to the server, it can specify one or more service interfaces to bind to. Each interface is represented by a UUID. The client specifies a list of interface UUIDs along with a handle (or context id) for each interface UUID that will be used during the DCE/RPC session to reference the interface. The server response indicates which interfaces it will allow the client to make requests to — it either accepts or rejects the client's wish to bind to a certain interface — and the client can then make requests to those services. When a client makes a request, it will specify the context id so the server knows what service the client is making a request to.

Instead of using flow-bits, a rule can simply ask the preprocessor, using this rule option, whether the client has bound to a specific interface UUID and whether this client request is making a request to it. This option requires tracking client Bind and Alter Context requests as well as server Bind Ack and Alter Context responses for connection-oriented DCE/RPC in the preprocessor. This tracking is required so that when a request is processed, the context id used in the request can be correlated with the interface UUID it is a handle for. This can eliminate false positives where more than one service is bound to successfully, since the preprocessor can correlate the bind UUID to the context id used in the request.

This option is used to specify an interface UUID. Optional arguments are an interface version and an operator to specify that the version be less than ('<'), greater than ('>'), equal to ('=') or not equal to ('!') the version specified; some versions of an interface may not be vulnerable to a certain exploit. Also, a DCE/RPC request can be broken up into one or more fragments. Flags (and a field in the connectionless header) are set in the DCE/RPC header to indicate whether the fragment is the first, a middle or the last fragment. Many checks for data in the DCE/RPC request are only relevant if the DCE/RPC request is a first fragment (or full request), since subsequent fragments will contain data deeper into the DCE/RPC request. A rule which is looking for data, say 5 bytes into the request (maybe it's a length field), will be looking at the wrong data on a fragment other than the first, since the beginning of subsequent fragments is already offset some length from the beginning of the request. By default it is therefore reasonable to only evaluate if the request is a first fragment (or full request). However, if the any_frag option is used, the rule will also be evaluated for middle and last fragments.

A DCE/RPC request can specify whether numbers are represented as big endian or little endian, so the representation of the interface UUID is different depending on the endianness specified in the DCE/RPC request — previously requiring two rules, one for big endian and one for little endian, as with the Messenger interface UUID taken off the wire from a little endian Bind request. The preprocessor eliminates the need for two rules by normalizing the UUID: hexlong and hexshort will be specified and interpreted to be in big endian order (this is usually the default way an interface UUID will be seen and represented).

Syntax

    <uuid> [ ',' <operator> <version> ] [ ',' "any_frag" ]

Examples

    dce_iface:4b324fc8-1670-01d3-1278-5a47bf6ee188;
    dce_iface:4b324fc8-1670-01d3-1278-5a47bf6ee188,<2;
    dce_iface:4b324fc8-1670-01d3-1278-5a47bf6ee188,any_frag;
    dce_iface:4b324fc8-1670-01d3-1278-5a47bf6ee188,=1,any_frag;

This option will not match if the fragment is not a first fragment (or full request) unless the any_frag option is supplied, in which case only the interface UUID and version need match. Note that a defragmented DCE/RPC request will be considered a full request.

dce_opnum

After it has been determined that a client has bound to a specific interface and is making a request to it (see above — dce_iface), usually we want to know what function call it is making to that service. It is likely that an exploit lies in a particular DCE/RPC function call. The opnum represents a specific function call to an interface. This option is used to specify an opnum (or operation number), an opnum range, or a list containing either or both opnums and/or opnum ranges. The opnum of a DCE/RPC request will be matched against the opnums specified with this option. This option matches if any one of the opnums specified matches the opnum of the DCE/RPC request.

Syntax

    opnum-list  = opnum-item | opnum-item ',' opnum-list
    opnum-item  = opnum | opnum-range
    opnum-range = opnum '-' opnum
    opnum       = 0-65535

Examples

    dce_opnum:15;
    dce_opnum:15-18;
    dce_opnum:15,18-20;
    dce_opnum:15,17,20-22;

dce_stub_data

Since most netbios rules were doing protocol decoding only to get to the DCE/RPC stub data, i.e. the remote procedure call or function call data, this option alleviates this need and places the cursor (used to walk the packet payload in rules processing) at the beginning of the DCE/RPC stub data, regardless of preceding rule options. This reduces the number of rule option checks and the complexity of the rule. This option matches if there is DCE/RPC stub data. This option takes no arguments.

Example

    dce_stub_data;

byte_test and byte_jump

A DCE/RPC request can specify whether numbers are represented in big or little endian. These rule options take a new argument, dce, and work basically the same as the normal byte_test/byte_jump, but since the DCE/RPC preprocessor knows the endianness of the request, it is able to do the correct conversion.

byte_test

Syntax

    <convert> ',' [ '!' ] <operator> ',' <value> [ ',' <offset> [ ',' "relative" ]] ',' "dce"

    convert  = 1 | 2 | 4
    operator = '<' | '=' | '>' | '&' | '^'
    value    = 0-4294967295
    offset   = -65535 to 65535

When using the dce argument to byte_test, the following normal byte_test arguments will not be allowed: big, little, string, hex, dec and oct.

Examples

    byte_test:4,>,35000,0,relative,dce;
    byte_test:2,!=,2280,-10,relative,dce;

byte_jump

Syntax

    <convert> ',' <offset> [ ',' "relative" ] [ ',' "multiplier" <mult-value> ] \
        [ ',' "align" ] [ ',' "post_offset" <adjustment-value> ] ',' "dce"

    convert          = 1 | 2 | 4
    offset           = -65535 to 65535
    mult-value       = 0-65535
    adjustment-value = -65535 to 65535

When using the dce argument to byte_jump, the following normal byte_jump arguments will not be allowed: big, little, string, hex, dec, oct and from_beginning.

Example

    byte_jump:4,-4,relative,align,multiplier 2,post_offset -4,dce;

Example of rule complexity reduction

The following two rules using the new rule options replace 64 (set and isset flowbit) rules that are necessary if the new rule options are not used:

    alert tcp $EXTERNAL_NET any -> $HOME_NET [135,139,445,593,1024:] \
        (msg:"dns R_Dnssrv funcs2 overflow attempt"; flow:established,to_server; \
        dce_iface:50abc2a4-574d-40b3-9d66-ee4fd5fba076; dce_opnum:0-11; dce_stub_data; \
        pcre:"/^.{12}(\x00\x00\x00\x00|.{12})/sR"; byte_jump:4,-4,relative,align,dce; \
        byte_test:4,>,256,4,relative,dce; reference:bugtraq,23470; \
        reference:cve,2007-1748; classtype:attempted-admin; sid:1000068;)

    alert udp $EXTERNAL_NET any -> $HOME_NET [135,1024:] \
        (msg:"dns R_Dnssrv funcs2 overflow attempt"; flow:established,to_server; \
        dce_iface:50abc2a4-574d-40b3-9d66-ee4fd5fba076; dce_opnum:0-11; dce_stub_data; \
        pcre:"/^.{12}(\x00\x00\x00\x00|.{12})/sR"; byte_jump:4,-4,relative,align,dce; \
        byte_test:4,>,256,4,relative,dce; reference:bugtraq,23470; \
        reference:cve,2007-1748; classtype:attempted-admin; sid:1000069;)
the following normal byte jump arguments will not be allowed: big.2007-1748.This option is used to place the cursor (used to walk the packet payload in rules processing) at the beginning of the DCE/RPC stub data.align.’ "dce" convert operator value offset = = = = 1 | 2 | 4 ’<’ | ’=’ | ’>’ | ’&’ | ’ˆ’ 0-4294967295 -65535 to 65535 Examples byte_test: 4.445.4. byte_test: 2. regardless of preceding rule options.139. reference:cve. \ byte_test:4. \ metadata: rule-type decode .rules include $PREPROC_RULE_PATH/decoder. classtype:protocol-command-decode. See README. if config disable decode alerts is in snort. sid: 1.gre and the various preprocessor READMEs for descriptions of the rules in decoder.rules..2. these options will take precedence over the event type of the rule.. These files are updated as new decoder and preprocessor events are added to Snort.decode for config options that control decoder events.decode. and have the names decoder. sid: 1. Any one of the following rule types can be used: alert log pass drop sdrop reject For example one can change: alert ( msg: "DECODE_NOT_IPV4_DGRAM".conf that reference the rules files. rev: 1.conf or the decoder or preprocessor rule type is drop. the drop cases only apply if Snort is running inline. A packet will be dropped if either a decoder config drop option is in snort. They also allow one to specify the rule type or action of a decoder or preprocessor event on a rule by rule basis. config enable decode drops. decoder events will not be generated regardless of whether or not there are corresponding rules for the event.rules To disable any rule. rev: 1. See doc/README. gid: 116.rules respectively.rules and preprocessor. To enable these rules in snort. classtype:protocol-command-decode.rules and preprocessor. \ metadata: rule-type decode . Also note that if the decoder is configured to enable drops. 
just comment it with a # or remove the rule completely from the file (commenting is recommended)./configure --enable-decoder-preprocessor-rules The decoder and preprocessor rules are located in the preproc rules/ directory in the top level source tree. Of course.conf. README. To change the rule type or action of a decoder/preprocessor rule. e.g. include $PREPROC_RULE_PATH/preprocessor. gid: 116. Decoder config options will still determine whether or not to generate decoder events. define the path to where the rules are located and uncomment the include lines in snort. 2.) to drop ( msg: "DECODE_NOT_IPV4_DGRAM". just replace alert with the desired rule type.conf. 82 .) to drop (as well as alert on) packets where the Ethernet protocol is IPv4 but version field in IPv4 header has a value other than 4. var PREPROC_RULE_PATH /path/to/preproc_rules .1 Configuring The following options to configure will enable decoder and preprocessor rules: $ .3 Decoder and Preprocessor Rules Decoder and preprocessor rules allow one to enable and disable decoder and preprocessor events on a rule by rule basis.3. For example. • Event Suppression You can completely suppress the logging of unintersting events. otherwise they will be loaded. • Rate Filters You can use rate filters to change a rule action when the number or rate of events indicates a possible attack.7. This can be tuned to significantly reduce false alarms.4.1 Rate Filtering rate filter provides rate based attack prevention by allowing users to configure a new action to take for a specified time when a given rate is exceeded. you also have to remove the decoder and preprocessor rules and any reference to them from snort. apply_to <ip-list>] The options are described in the table below . \ new_action alert|drop|pass|log|sdrop|reject.10.all are required except apply to. 2.conf will make Snort revert to the old behavior: config autogenerate_preprocessor_decoder_rules Note that if you want to revert to the old behavior.2.3. 
in which case they are evaluated in the order they appear in the configuration file. Format Rate filters are used as standalone configurations (outside of a rule) and have the following format: rate_filter \ gen_id <gid>. Multiple rate filters can be defined on the same rule. 83 . sig_id <sid>. \ count <c>. seconds <s>. This is covered in section 3. \ track <by_src|by_dst|by_rule>.conf. 2. and the first applicable action is taken.2 Reverting to original behavior If you have configured snort to use decoder and preprocessor rules. the following config option in snort. This option applies to rules not specified and the default behavior is to alert. • Event Filters You can use event filters to reduce the number of logged events for noisy rules.4 Event Processing Snort provides a variety of mechanisms to tune event processing to suit your needs: • Detection Filters You can use detection filters to specifiy a threshold that must be exceeded before a rule generates an event. \ timeout <seconds> \ [. which is optional. Examples Example 1 . This can be tuned to significantly reduce false alarms. \ count 100. 0 seconds means count is a total count instead of a specific rate. timeout 10 Example 2 . For example. the maximum number of rule matches in s seconds before the rate filter limit to is exceeded. Note that events are generated during the timeout period. for each unique destination IP address. track by rule and apply to may not be used together. or by rule. or they are aggregated at rule level. reject. new action replaces rule action for t seconds. If t is 0. This means the match statistics are maintained for each unique source IP address. \ new_action drop. 84 . \ count 100. and block further connections from that IP address for 10 seconds: rate_filter \ gen_id 135. then rule action is never reverted back..allow a maximum of 100 successful simultaneous connections from any one IP address. drop. source and destination means client and server respectively. 
track by_src | by_dst | by_rule - rate is tracked either by source IP address, destination IP address, or by rule. This means the match statistics are maintained for each unique source IP address, for each unique destination IP address, or they are aggregated at rule level. For rules related to Stream5 sessions, source and destination mean client and server respectively. track by_rule and apply_to may not be used together.

count c - the maximum number of rule matches in s seconds before the rate filter limit is exceeded. c must be a nonzero value.

seconds s - the time period over which count is accrued. 0 seconds means count is a total count instead of a specific rate.

new_action alert | drop | pass | log | sdrop | reject - new_action replaces the rule action for t seconds. drop, sdrop and reject are conditionally compiled with GIDS, and sdrop can be used only when snort is used in inline mode.

timeout t - revert to the original rule action after t seconds. If t is 0, then the rule action is never reverted back. Note that events are generated during the timeout period, even if the rate falls below the configured limit.

apply_to <ip-list> - restrict the configuration to only source or destination IP addresses (indicated by the track parameter) determined by <ip-list>. track by_rule and apply_to may not be used together.

Examples

Example 1 - allow a maximum of 100 connection attempts per second from any one IP address, and block further connection attempts from that IP address for 10 seconds:

rate_filter \
    gen_id 135, sig_id 1, \
    track by_src, \
    count 100, seconds 1, \
    new_action drop, timeout 10

Example 2 - allow a maximum of 100 successful simultaneous connections from any one IP address, and block further connections from that IP address for 10 seconds:

rate_filter \
    gen_id 135, sig_id 2, \
    track by_src, \
    count 100, seconds 0, \
    new_action drop, timeout 10

2.4.2 Event Filtering

Event filtering can be used to reduce the number of logged alerts for noisy rules by limiting the number of times a particular event is logged during a specified time interval. This can be tuned to significantly reduce false alarms. An event filter may also be used to manage the number of alerts after a rule action is enabled by rate_filter. There are 3 types of event filters:

• limit - alerts on the 1st m events during the time interval, then ignores events for the rest of the time interval.
• threshold - alerts every m times we see this event during the time interval.

• both - alerts once per time interval after seeing m occurrences of the event, then ignores any additional events during the time interval.

Format

event_filter \
    gen_id <gid>, sig_id <sid>, \
    type <limit|threshold|both>, \
    track <by_src|by_dst>, \
    count <c>, seconds <s>

threshold \
    gen_id <gid>, sig_id <sid>, \
    type <limit|threshold|both>, \
    track <by_src|by_dst>, \
    count <c>, seconds <s>

threshold is an alias for event_filter. Both formats are equivalent and support the options described below - all are required. threshold is deprecated and will not be supported in future releases.

gen_id <gid> - Specify the generator ID of an associated rule. gen_id 0, sig_id 0 can be used to specify a "global" threshold that applies to all rules.

sig_id <sid> - Specify the signature ID of an associated rule. sig_id 0 specifies a "global" filter because it applies to all sig_ids for the given gen_id.

type limit|threshold|both - type limit alerts on the 1st m events during the time interval, then ignores events for the rest of the time interval. Type threshold alerts every m times we see this event during the time interval. Type both alerts once per time interval after seeing m occurrences of the event, then ignores any additional events during the time interval.

track by_src|by_dst - rate is tracked either by source IP address or destination IP address. This means count is maintained for each unique source IP address, or for each unique destination IP address. Ports or anything else are not tracked.

count c - number of rule matches in s seconds that will cause the event_filter limit to be exceeded. c must be a nonzero value.

seconds s - time period over which count is accrued. s must be a nonzero value.

! NOTE: Only one event filter may be defined for a given gen_id, sig_id pair. If more than one event filter is applied to a specific gen_id, sig_id pair, Snort will terminate with an error while reading the configuration information.

event filters with sig_id 0 are considered "global" because they apply to all rules with the given gen_id. If gen_id is also 0, then the filter applies to all rules (gen_id 0, sig_id != 0 is not allowed). Standard filtering tests are applied first; if they do not block an event from being logged, the global filtering test is applied. Thresholds in a rule (deprecated) will override a global event filter. Global event filters do not override what's in a signature or a more specific stand-alone event filter.

! NOTE: event filters can be used to suppress excessive rate_filter alerts; however, the first new_action event of the timeout period is never suppressed. Such events indicate a change of state that is significant to the user monitoring the network.

Events in Snort are generated in the usual way; event filters are handled as part of the output system. Read genmsg.map for details on gen_ids.

Examples

Limit logging to 1 event per 60 seconds:

event_filter \
    gen_id 1, sig_id 1851, \
    type limit, track by_src, \
    count 1, seconds 60

Limit logging to every 3rd event:

event_filter \
    gen_id 1, sig_id 1852, \
    type threshold, track by_src, \
    count 3, seconds 60

Limit logging to just 1 event per 60 seconds, but only if we exceed 30 events in 60 seconds:

event_filter \
    gen_id 1, sig_id 1853, \
    type both, track by_src, \
    count 30, seconds 60

Limit to logging 1 event per 60 seconds per IP triggering each rule (rule gen_id is 1):

event_filter \
    gen_id 1, sig_id 0, \
    type limit, track by_src, \
    count 1, seconds 60

Limit to logging 1 event per 60 seconds per IP, triggering each rule for each event generator:

event_filter \
    gen_id 0, sig_id 0, \
    type limit, track by_src, \
    count 1, seconds 60

Users can also configure a memcap for threshold with a "config:" option:

config event_filter: memcap <bytes>
# this is deprecated:
config threshold: memcap <bytes>
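As a quick check of one's reading of the three filter types, here is a small Python sketch of the limit/threshold/both decision logic as described above. This is an illustrative model only, not Snort code; the EventFilter class and its method names are invented for the sketch, and tracking is reduced to a single opaque key standing in for the gen_id/sig_id/IP tuple.

```python
import time
from collections import defaultdict

class EventFilter:
    """Model of event_filter semantics: limit, threshold, or both."""

    def __init__(self, ftype, count, seconds):
        self.ftype, self.count, self.seconds = ftype, count, seconds
        # track-key -> [interval start time, events seen in interval]
        self.window = defaultdict(lambda: [0.0, 0])

    def allow(self, key, now=None):
        """Return True if this event should be logged, False if filtered."""
        now = time.monotonic() if now is None else now
        start, seen = self.window[key]
        if now - start >= self.seconds:   # interval expired: start a new one
            start, seen = now, 0
        seen += 1
        self.window[key] = [start, seen]
        if self.ftype == "limit":         # log the first m events, then stop
            return seen <= self.count
        if self.ftype == "threshold":     # log every m-th event
            return seen % self.count == 0
        # "both": log exactly once, on the m-th event of the interval
        return seen == self.count
```

Feeding the same key repeatedly shows the difference: a limit filter passes the first m events of the interval, a threshold filter passes every m-th, and a both filter passes only the single event at which the count reaches m.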
2.4.3 Event Suppression

Event suppression stops specified events from firing without removing the rule from the rule base. Suppression uses an IP list to select specific networks and users for suppression. This allows a rule to be completely suppressed, or suppressed when the causative traffic is going to or coming from a specific IP or group of IP addresses. Suppression tests are performed prior to either standard or global thresholding tests.

Suppressions are standalone configurations that reference generators, SIDs, and IP addresses via an IP list. You may apply multiple suppressions to a non-zero SID. You may also combine one event filter and several suppressions to the same non-zero SID.

Format

The suppress configuration has two forms:

suppress \
    gen_id <gid>, sig_id <sid>

suppress \
    gen_id <gid>, sig_id <sid>, \
    track <by_src|by_dst>, ip <ip-list>

gen_id <gid> - Specify the generator ID of an associated rule. gen_id 0, sig_id 0 can be used to specify a "global" threshold that applies to all rules.

sig_id <sid> - Specify the signature ID of an associated rule. sig_id 0 specifies a "global" filter because it applies to all sig_ids for the given gen_id.

track by_src|by_dst - Suppress by source IP address or destination IP address. This is optional, but if present, ip must be provided as well.

ip <list> - Restrict the suppression to only source or destination IP addresses (indicated by the track parameter) determined by <list>. If track is provided, ip must be provided as well.

Examples

Suppress this event completely:

suppress gen_id 1, sig_id 1852

Suppress this event from this IP:

suppress gen_id 1, sig_id 1852, track by_src, ip 10.1.1.54

Suppress this event to this CIDR block:

suppress gen_id 1, sig_id 1852, track by_dst, ip 10.1.1.0/24
2.4.4 Event Logging

Snort supports logging multiple events per packet/stream that are prioritized with different insertion methods, such as max content length or event ordering using the event queue. The event queue has the following config options:

1. max_queue - this determines the maximum size of the event queue. For example, if the event queue has a max size of 8, only 8 events will be stored for a single packet or stream. The default value is 8.

2. log - this determines the number of events to log for a given packet or stream. You can't log more than the max_event number that was specified. The default value is 3.

3. order_events - the method used to order events. We currently have two different methods:
   • priority - the highest priority (1 being the highest) events are ordered first.
   • content_length - rules are ordered before decode or preprocessor alerts, and rules that have a longer content are ordered before rules with shorter contents.
   The method in which events are ordered does not affect rule types such as pass, alert, log, etc.

2.5 Performance Profiling

Snort can provide statistics on rule and preprocessor performance. Each requires only a simple config option in snort.conf, and Snort will print statistics on the worst (or all) performers on exit. When a file name is provided in profile_rules or profile_preprocs, the statistics will be saved in these files. If the append option is not present, previous data in these files will be overwritten.

2.5.1 Rule Profiling

Format

config profile_rules: \
    print [all | <num>], \
    sort <sort_option> \
    [, filename <filename> [append]]

Examples

• Print all rules, sort by avg_ticks (default configuration if option is turned on):
  config profile_rules
• Print all rules, sorted by number of checks:
  config profile_rules: print all, sort checks
• Print the top 10 rules, based on highest average time:
  config profile_rules: print 10, sort avg_ticks
• Print all rules, sort by avg_ticks, and append to file rules_stats.txt:
  config profile_rules: print all, sort avg_ticks, filename rules_stats.txt append
• Print top 20 rules, save results to perf.txt with timestamp in filename:
  config profile_rules: print 20, filename perf.txt
• Print top 100 rules, based on total time:
  config profile_rules: print 100, sort total_ticks
• Print with default options, save results to performance.txt each time:
  config profile_rules: filename performance.txt append

Output

Snort will print a table much like the following at exit:

[Figure 2.1: Rule Profiling Example Output - per-rule table of Checks, Matches, Alerts, Microsecs, Avg/Check, Avg/Match and Avg/Nonmatch statistics omitted]

Configuration line used to print the above table:

config profile_rules: print 4, sort total_ticks

The Microsecs (or Ticks) column is important because that is the total time spent evaluating a given rule. A high Avg/Check is a poor performing rule, one that most likely contains PCRE. High Checks and low Avg/Check is usually an any->any rule with few rule options and no content - quick to check, though the few options may or may not match. We are looking at moving some of these into code, especially those with low SIDs. But if that rule is causing alerts, it makes sense to leave it alone.

By default, this information will be printed to the console when Snort exits. You can use the "filename" option in snort.conf to specify a file where this will be written. These files will be found in the logging directory. The filenames will have timestamps appended to them. If "append" is not specified, a new file will be created each time Snort is run.

2.5.2 Preprocessor Profiling

Format

config profile_preprocs: \
    print [all | <num>], \
    sort <sort_option> \
    [, filename <filename> [append]]

• <num> is the number of preprocessors to print
• <sort_option> is one of: checks, avg_ticks, total_ticks
• <filename> is the output filename
• [append] dictates that the output will go to the same file each time (optional)

Examples

• Print all preprocessors, sort by avg_ticks (default configuration if option is turned on):
  config profile_preprocs
• Print all preprocessors, sorted by number of checks:
  config profile_preprocs: print all, sort checks
• Print the top 10 preprocessors, based on highest average time:
  config profile_preprocs: print 10, sort avg_ticks
• Print all preprocessors, sort by avg_ticks, and append to file preprocs_stats.txt:
  config profile_preprocs: print all, sort avg_ticks, filename preprocs_stats.txt append

The columns represent:

• Number (rank) - the number is indented for each layer; layer 1 preprocessors are listed under their respective caller (and sorted similarly). When printing a specific number of preprocessors, all subtask info for a particular preprocessor is printed for each layer 0 preprocessor stat.
• Preprocessor Name
• Layer
• Checks - the number of times the preprocessor decided to look at a packet (ports matched, app layer header was correct, etc.)
• Exits - the number of corresponding exits (just to verify code is instrumented correctly; this should ALWAYS match Checks, unless an exception was trapped)
• CPU Ticks
• Avg Ticks per Check
• Percent of caller - for non layer 0 preprocessors, this identifies the percent of the caller's ticks that is spent for this subtask

Because of task swapping, non-instrumented code, and other factors, the Pct of Caller field will not add up to 100% of the caller's time. It does, however, give a reasonable indication of how much relative time is spent within each subtask (i.e. subroutines within preprocessors).

By default, this information will be printed to the console when Snort exits. You can use the "filename" option in snort.conf to specify a file where this will be written. These files will be found in the logging directory. The filenames will have timestamps appended to them. If "append" is not specified, a new file will be created each time Snort is run.

Output

Snort will print a table much like the following at exit:
[Figure 2.2: Preprocessor Profiling Example Output - table of per-preprocessor statistics (rank, name, layer, checks, exits, microsecs, avg/check, pct of caller) omitted]

2.5.3 Packet Performance Monitoring (PPM)

PPM provides thresholding mechanisms that can be used to provide a basic level of latency control for snort. It does not provide a hard and fast latency guarantee but should in effect provide a good average latency control. Both rules and packets can be checked for latency, and the action taken upon detection of excessive latency is configurable. The following sections describe configuration, sample output, and some implementation details worth noting.

To use PPM, you must build with the --enable-ppm or the --enable-sourcefire option to configure. Packet and rule monitoring is independent, so one or both or neither may be enabled.

Configuration

PPM is configured as follows:

# Packet configuration:
config ppm: max-pkt-time <micro-secs>, \
    fastpath-expensive-packets, \
    pkt-log, \
    debug-pkts

# Rule configuration:
config ppm: max-rule-time <micro-secs>, \
    threshold count, \
    suspend-expensive-rules, \
    suspend-timeout <seconds>, \
    pkt-log

Packet and rule configuration can appear in separate config ppm statements, as above, or together in just one config ppm statement. If fastpath-expensive-packets or suspend-expensive-rules is not used, then no action is taken other than to increment the count of the number of packets that should be fastpath'd or the rules that should be suspended. A summary of this information is printed out when snort exits.

Examples

Example 1: The following enables packet latency monitoring with logging and debugging:

config ppm: \
    max-pkt-time 50, \
    fastpath-expensive-packets, \
    pkt-log, \
    debug-pkt

Example 2: The following suspends rules and aborts packet inspection:

config ppm: \
    max-rule-time 50, \
    threshold 5, \
    suspend-expensive-rules, \
    suspend-timeout 300, \
    pkt-log

These configurations were used to generate the sample output that follows.

Sample Snort Run-time Output

PPM: Process-BeginPkt[61] caplen=60
PPM: Pkt[61] Used= 8.15385 usecs
PPM: Process-EndPkt[61]

PPM: Process-BeginPkt[62] caplen=342
PPM: Pkt[62] Used= 65.3659 usecs
PPM: Process-EndPkt[62]

PPM: Process-BeginPkt[63] caplen=60
PPM: Pkt[63] Used= 8.394 usecs
PPM: Process-EndPkt[63]
PPM: Pkt-Event Pkt[63] used=56.0438 usecs, 1 nc-rules tested, packet fastpathed.

PPM: Process-BeginPkt[64] caplen=60
PPM: Pkt[64] Used= 8.21764 usecs
PPM: Process-EndPkt[64]

Sample Snort Exit Output

Packet Performance Summary:
    max packet time  : 50 usecs
    packet events    : 1
    avg pkt time     : 0.633125 usecs

Rule Performance Summary:
    max rule time    : 50 usecs
    rule events      : 0
    avg nc-rule time : 0.2675 usecs

Implementation Details

• Enforcement of packet and rule processing times is done after processing each rule. Latency control is not enforced after each preprocessor.
• Due to the granularity of the timing measurements, any individual packet may exceed the user specified packet or rule processing time limit. Therefore this implementation cannot provide a precise latency guarantee with strict timing guarantees. Hence the reason this is considered a best effort approach.
• Time checks are made based on the total system time, not processor usage by Snort. This was a conscious design decision because when a system is loaded, the latency for a packet is based on the total system time, not just the processor time the Snort application receives. Therefore, it is recommended that you tune your thresholding to operate optimally when your system is under load.
• This implementation is software based and does not use an interrupt driven timing mechanism, and is therefore subject to the granularity of the software based timing tests. Since this implementation depends on hardware based high performance frequency counters, latency thresholding is presently only available on Intel and PPC platforms.

2.6 Output Modules

Output modules are new as of version 1.6. They allow Snort to be much more flexible in the formatting and presentation of output to its users. The output modules are run when the alert or logging subsystems of Snort are called, after the preprocessors and detection engine. The format of the directives in the rules file is very similar to that of the preprocessors.

Multiple output plugins may be specified in the Snort configuration file. When multiple plugins of the same type (log, alert) are specified, they are stacked and called in sequence when an event occurs. As with the standard logging and alerting systems, output plugins send their data to /var/log/snort by default or to a user directed directory (using the -l command line switch).

Output modules are loaded at runtime by specifying the output keyword in the rules file:

output <name>: <options>
output alert_syslog: log_auth log_alert

2.6.1 alert_syslog

This module sends alerts to the syslog facility (much like the -s command line switch). This module also allows the user to specify the logging facility and priority within the Snort rules file, giving users greater flexibility in logging alerts.

Available Keywords

Facilities
• log_auth
• log_authpriv
• log_daemon

Format

output alert_syslog: \
    [host=<hostname[:<port>]>,] \
    <facility> <priority> <options>
A hostname and port can be passed as options; the default host is 127.0.0.1 and the default port is 514.

Example

output alert_syslog: host=10.1.1.1:514, <facility> <priority> <options>

2.6.2 alert_fast

This will print Snort alerts in a quick one-line format to a specified output file. It is a faster alerting method than full alerts because it doesn't need to print all of the packet headers to the output file.

Format

alert_fast: <output filename>

Example

output alert_fast: alert.fast

2.6.3 alert_full

This will print Snort alert messages with full packet headers. The alerts will be written in the default logging directory (/var/log/snort) or in the logging directory specified at the command line. Inside the logging directory, a directory will be created per IP; these files will be decoded packet dumps of the packets that triggered the alerts. The creation of these files slows Snort down considerably. This output method is discouraged for all but the lightest traffic situations.

Format

alert_full: <output filename>

Example

output alert_full: alert.full

2.6.4 alert_unixsock

Sets up a UNIX domain socket and sends alert reports to it. External programs/processes can listen in on this socket and receive Snort alert and packet data in real time. This is currently an experimental interface.

Format

alert_unixsock

Example

output alert_unixsock
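For readers who want to experiment with this interface, below is a minimal Python sketch of such a listener. It assumes the socket is named snort_alert in Snort's logging directory and that each datagram begins with a fixed-size, NUL-padded alert message field (ALERTMSG_LENGTH, commonly 256 bytes) as laid out by the Alertpkt structure; both assumptions should be verified against the spo_alert_unixsock header in your Snort source tree before relying on them.

```python
import os
import socket

# Assumption: the record begins with a NUL-padded message of this many bytes;
# check ALERTMSG_LENGTH in your Snort source, as it may differ between builds.
ALERTMSG_LENGTH = 256

def open_alert_socket(path):
    """Bind the datagram socket that Snort's alert_unixsock plugin writes to."""
    if os.path.exists(path):
        os.unlink(path)  # remove a stale socket left by a previous run
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    sock.bind(path)
    return sock

def read_alert_message(datagram):
    """Extract the NUL-terminated alert message from the head of a record."""
    raw = datagram[:ALERTMSG_LENGTH]
    return raw.split(b"\x00", 1)[0].decode("ascii", errors="replace")

# Typical use (with Snort running and "output alert_unixsock" configured,
# and assuming the default logging directory):
#   sock = open_alert_socket("/var/log/snort/snort_alert")
#   while True:
#       print(read_alert_message(sock.recv(65536)))
```

The listener must exist before Snort starts, since Snort connects to the socket rather than creating it.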
2.6.5 log_tcpdump

The log_tcpdump module logs packets to a tcpdump-formatted file. This is useful for performing post-process analysis on collected traffic with the vast number of tools that are available for examining tcpdump-formatted files. This module only takes a single argument: the name of the output file. Note that the file name will have the UNIX timestamp in seconds appended to it. This is so that data from separate Snort runs can be kept distinct.

Format

log_tcpdump: <output filename>

Example

output log_tcpdump: snort.log

2.6.6 database

This module from Jed Pickel sends Snort data to a variety of SQL databases. More information on installing and configuring this module can be found on the incident.org web page. The arguments to this plugin are the name of the database to be logged to and a parameter list. Parameters are specified with the format parameter = argument; see Figure 2.3 for example usage.

Format

database: <log | alert>, <database type>, <parameter list>

The following parameters are available:

host - Host to connect to. If a non-zero-length string is specified, TCP/IP communication is used. Without a host name, it will connect using a local UNIX domain socket.

port - Port number to connect to at the server host, or socket filename extension for UNIX-domain connections.

dbname - Database name

user - Database username for authentication

password - Password used if the database demands password authentication

sensor_name - Specify your own name for this Snort sensor. If you do not specify a name, one will be generated automatically.

encoding - Because the packet payload and option data is binary, there is no one simple and portable way to store it in a database. Blobs are not used because they are not portable across databases. So I leave the encoding option to you. You can choose from the following options; each has its own advantages and disadvantages:

hex (default) - Represent binary data as a hex string.
    Storage requirements - 2x the size of the binary
    Searchability - very good
    Human readability - not readable unless you are a true geek, requires post processing

base64 - Represent binary data as a base64 string.
    Storage requirements - ~1.3x the size of the binary
    Searchability - impossible without post processing
    Human readability - not readable, requires post processing

ascii - Represent binary data as an ASCII string. This is the only option where you will actually lose data; non-ASCII data is represented as a '.'. If you choose this option, then data for IP and TCP options will still be represented as hex because it does not make any sense for that data to be ASCII. You severely limit the potential of some analysis applications if you choose this option, but this is still the best choice for some applications.
    Storage requirements - slightly larger than the binary because some characters are escaped (&, <, >)
    Searchability - very good for searching for a text string, impossible if you want to search for binary
    Human readability - very good

detail - How much detailed data do you want to store? The options are:

full (default) - Log all details of a packet that caused an alert (including IP/TCP options and the payload)

fast - Log only a minimum amount of data. The following fields are logged: timestamp, signature, source ip, destination ip, source port, destination port, tcp flags, and protocol.

Furthermore, there is a logging method and database type that must be defined. There are two logging types available, log and alert. Setting the type to log attaches the database logging functionality to the log facility within the program; if you set the type to log, the plugin will be called on the log output chain. Setting the type to alert attaches the plugin to the alert output chain within the program.

There are five database types available in the current version of the plugin: mssql, mysql, postgresql, oracle, and odbc. Set the type to match the database you are using.

! NOTE: The database output plugin does not have the ability to handle alerts that are generated by using the tag keyword. See section 3.7.5 for more details.

output database: \
    log, mysql, dbname=snort user=snort host=localhost password=xyz

Figure 2.3: Database Output Plugin Configuration

2.6.7 csv

The csv output plugin allows alert data to be written in a format easily importable to a database. The plugin requires 2 arguments: a full pathname to a file and the output formatting option. If the formatting option is default, the output is in the order the formatting options are listed. The list of formatting options is below:

• timestamp
• sig_generator
• sig_id
• sig_rev
• msg
• proto
• src
• srcport
△NOTE file creation time (in Unix Epoch format) appended to each file when it is created.csv timestamp. msg 2. The name unified is a misnomer. The log file contains the detailed packet information (a packet dump with the associated event ID). The unified output plugin logs events in binary format. Files have the 101 .h. message id). an alert file. port. and a log file. The alert file contains the high-level details of an event (eg: IPs.6. protocol. as the unified output plugin creates two different files.8 unified The unified output plugin is designed to be the fastest possible method of logging Snort events. Both file types are written in a bimary format described in spo unified. allowing another programs to handle complex logging mechanisms that would otherwise diminish the performance of Snort. nostamp] [. It has the same performance characteristics. simply specify unified2. Packet logging includes a capture of the entire packet and is specified with log unified2.log.Format output alert_unified: <base file name> [.alert.6. mpls_event_types] output log_unified2: \ filename <base filename> [.10 alert prelude ! △NOTE support to use alert prelude is not built in by default. 102 . alert logging will only log events and is specified with alert unified2. <limit <file size limit in MB>] output log_unified: <base file name> [. <limit <size in MB>] [. mpls_event_types 2. <limit <size in MB>] [.6. nostamp] output unified2: \ filename <base file name> [. unified 2 files have the file creation time (in Unix Epoch format) appended to each file when it is created. limit 128 2. nostamp. or true unified logging. packet logging.8 on unified logging for more information.log. <limit <file size limit in MB>] Example output alert_unified: snort.alert. but a slightly different logging format. To use alert prelude. snort must be built with the –enable-prelude argument passed to . limit 128. MPLS labels can be included in unified2 events.6.log. 
Format output alert_unified2: \ filename <base filename> [./configure. Unified2 can work in one of three modes. To include both logging styles in a single.9 unified 2 The unified2 output plugin is a replacement for the unified output plugin. Likewise. nostamp unified2: filename merged. then MPLS labels will be not be included in unified2 events.log. When MPLS support is turned on. If option mpls event types is not used. nostamp log_unified2: filename snort. ! △NOTE By default. limit 128. nostamp unified2: filename merged. limit 128. alert logging. nostamp] [. Use option mpls event types to enable this. See section 2. unified file. mpls_event_types] Example output output output output alert_unified2: filename snort. limit 128. <limit <size in MB>] [. limit 128 output log_unified: snort. 11 log null Sometimes it is useful to be able to create rules that will alert to certain types of traffic but will not cause packet log entries. Communicates with an Aruba Networks wireless mobility controller to change the status of authenticated users.2.org/.6. Format output alert_prelude: \ profile=<name of prelude profile> \ [ info=<priority number for info priority alerts>] \ [ low=<priority number for low priority alerts>] \ [ medium=<priority number for medium priority alerts>] Example output alert_prelude: profile=snort info=4 low=3 medium=2 2.The alert prelude output plugin is used to log to a Prelude database. Format output log_null Example output log_null # like using snort -n ruletype info { type alert output alert_fast: info.arubanetworks. Format output alert_aruba_action: \ <controller address> <secrettype> <secret> <action> 103 . snort must be built with the –enable-aruba argument passed to . see. This is equivalent to using the -n command line option but it is able to work within a ruletype.prelude-ids. This allows Snort to take action against users on the Aruba controller to control their network privilege levels.alert output log_null } 2. In Snort 1./configure. 
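Because the csv plugin writes plain text, one alert per line, it is easy to post-process with standard tools. The sketch below is a hypothetical post-processing script (not part of Snort) that parses a line using the first eight field names of the default format listed above; the sample timestamp and message are invented for illustration.

```python
import csv
import io

# First eight fields of the csv plugin's default format, per the list above.
# (The full default format has additional fields not shown in this excerpt.)
FIELDS = ["timestamp", "sig_generator", "sig_id", "sig_rev",
          "msg", "proto", "src", "srcport"]

def parse_alert_line(line):
    """Split one alert_csv line into a dict keyed by field name."""
    row = next(csv.reader(io.StringIO(line)))
    return dict(zip(FIELDS, row))

# Hypothetical alert line for illustration only.
example = '01/01-00:00:01.000000,1,1000983,1,"BOB rule hit",TCP,10.1.1.5,3332'
alert = parse_alert_line(example)
print(alert["sig_id"], alert["proto"])
```

Using the csv module (rather than a plain split on commas) keeps quoted msg fields intact even if they contain commas.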
The alert_prelude output plugin is used to log to a Prelude database. For more information on Prelude, see http://www.prelude-ids.org/.

Format

output alert_prelude: \
  profile=<name of prelude profile> \
  [ info=<priority number for info priority alerts>] \
  [ low=<priority number for low priority alerts>] \
  [ medium=<priority number for medium priority alerts>]

Example

output alert_prelude: profile=snort info=4 low=3 medium=2

2.6.11 log null

Sometimes it is useful to be able to create rules that will alert to certain types of traffic but will not cause packet log entries. In Snort 1.8.2, the log_null plugin was introduced. This is equivalent to using the -n command line option but it is able to work within a ruletype.

Format

output log_null

Example

output log_null  # like using snort -n

ruletype info {
  type alert
  output alert_fast: info.alert
  output log_null
}

2.6.12 alert aruba action

! △NOTE
Support to use alert_aruba_action is not built in by default. To use alert_aruba_action, snort must be built with the --enable-aruba argument passed to ./configure.

Communicates with an Aruba Networks wireless mobility controller to change the status of authenticated users. This allows Snort to take action against users on the Aruba controller to control their network privilege levels. For more information on Aruba Networks access control, see http://www.arubanetworks.com/.

Format

output alert_aruba_action: \
  <controller address> <secrettype> <secret> <action>
The following parameters are required:

controller address - Aruba mobility controller address.
secrettype - Secret type, one of "sha1", "md5" or "cleartext".
secret - Authentication secret configured on the Aruba mobility controller with the "aaa xml-api client" configuration command, represented as a sha1 or md5 hash, or a cleartext password.
action - Action to apply to the source IP address of the traffic generating an alert:
  blacklist - Blacklist the station by disabling all radio communication.
  setrole:rolename - Change the user's role to the specified rolename.

Example

output alert_aruba_action: \
  10.3.1.6 cleartext foobar setrole:quarantine_role

2.7 Host Attribute Table

Starting with version 2.8.1, Snort has the capability to use information from an outside source to determine both the protocol for use with Snort rules, and the IP-Frag policy (see section 2.2.1) and TCP Stream reassembly policies (see section 2.2.2). This information is stored in an attribute table, which is loaded at startup. The table is re-read during run time upon receipt of signal number 30.

Snort associates a given packet with its attribute data from the table, if applicable. For rule evaluation, service information is used instead of the ports when the protocol metadata in the rule matches the service corresponding to the traffic. If the rule doesn't have protocol metadata, or the traffic doesn't have any matching service information, the rule relies on the port information.

! △NOTE
To use a host attribute table, Snort must be configured with the --enable-targetbased flag.

2.7.1 Configuration

Format

attribute_table filename <path to file>

2.7.2 Attribute Table File Format

The attribute table uses an XML format and consists of two sections, a mapping section, used to reduce the size of the file for common data elements, and the host attribute section. The mapping section is optional. An example of the file format is shown below.

<SNORT_ATTRIBUTES>
  <ATTRIBUTE_MAP>
    <ENTRY>
      <ID>1</ID>
      <VALUE>Linux</VALUE>
    </ENTRY>
    <ENTRY>
      <ID>2</ID>
      <VALUE>ssh</VALUE>
    </ENTRY>
  </ATTRIBUTE_MAP>
  <ATTRIBUTE_TABLE>
    <HOST>
      <IP>192.168.1.234</IP>
      <SERVICES>
        <SERVICE>
          <PORT>
            <ATTRIBUTE_VALUE>22</ATTRIBUTE_VALUE>
            <CONFIDENCE>100</CONFIDENCE>
          </PORT>
          <IPPROTO>
            <ATTRIBUTE_VALUE>tcp</ATTRIBUTE_VALUE>
            <CONFIDENCE>100</CONFIDENCE>
          </IPPROTO>
          <PROTOCOL>
            <ATTRIBUTE_ID>2</ATTRIBUTE_ID>
            <CONFIDENCE>100</CONFIDENCE>
          </PROTOCOL>
          <APPLICATION>
            <ATTRIBUTE_VALUE>OpenSSH</ATTRIBUTE_VALUE>
            <CONFIDENCE>100</CONFIDENCE>
            <VERSION>
              <ATTRIBUTE_VALUE>3.9p1</ATTRIBUTE_VALUE>
              <CONFIDENCE>93</CONFIDENCE>
            </VERSION>
          </APPLICATION>
        </SERVICE>
        <SERVICE>
          <PORT>
            <ATTRIBUTE_VALUE>23</ATTRIBUTE_VALUE>
            <CONFIDENCE>100</CONFIDENCE>
          </PORT>
          <IPPROTO>
            <ATTRIBUTE_VALUE>tcp</ATTRIBUTE_VALUE>
            <CONFIDENCE>100</CONFIDENCE>
          </IPPROTO>
          <PROTOCOL>
            <ATTRIBUTE_VALUE>telnet</ATTRIBUTE_VALUE>
            <CONFIDENCE>100</CONFIDENCE>
          </PROTOCOL>
        </SERVICE>
      </SERVICES>
      <CLIENTS>
        <CLIENT>
          <IPPROTO>
            <ATTRIBUTE_VALUE>tcp</ATTRIBUTE_VALUE>
            <CONFIDENCE>100</CONFIDENCE>
          </IPPROTO>
          <PROTOCOL>
            <ATTRIBUTE_VALUE>http</ATTRIBUTE_VALUE>
            <CONFIDENCE>100</CONFIDENCE>
          </PROTOCOL>
          <APPLICATION>
            <ATTRIBUTE_VALUE>IE</ATTRIBUTE_VALUE>
            <CONFIDENCE>100</CONFIDENCE>
            <VERSION>
              <ATTRIBUTE_VALUE>6.0</ATTRIBUTE_VALUE>
              <CONFIDENCE>89</CONFIDENCE>
            </VERSION>
          </APPLICATION>
        </CLIENT>
      </CLIENTS>
    </HOST>
  </ATTRIBUTE_TABLE>
</SNORT_ATTRIBUTES>

! △NOTE
With Snort 2.8.1, of the service attributes, only the IP protocol (tcp, udp, etc), port, and protocol (http, ssh, etc) are used. The application and version for a given service attribute, and any client attributes, are ignored. They will be used in a future release. For a given host entry, the stream and IP frag information are both used.

A DTD for verification of the Host Attribute Table XML file is provided with the snort packages.

2.8 Dynamic Modules

Dynamically loadable modules were introduced with Snort 2.6. They can be loaded via directives in snort.conf or via command-line options.

! △NOTE
To disable use of dynamic modules, Snort must be configured with the --disable-dynamicplugin flag.

2.8.1 Format

<directive> <parameters>

2.8.2 Directives

dynamicpreprocessor [ file <shared library path> | directory <directory of shared libraries> ]
Tells snort to load the dynamic preprocessor shared library (if file is used) or all dynamic preprocessor shared libraries (if directory is used). Specify file, followed by the full or relative path to the shared library. Or, specify directory, followed by the full or relative path to a directory of preprocessor shared libraries. (Same effect as the --dynamic-preprocessor-lib or --dynamic-preprocessor-lib-dir options.) See chapter 5 for more information on dynamic preprocessor libraries.

dynamicengine [ file <shared library path> | directory <directory of shared libraries> ]
Tells snort to load the dynamic engine shared library (if file is used) or all dynamic engine shared libraries (if directory is used). Specify file, followed by the full or relative path to the shared library. Or, specify directory, followed by the full or relative path to a directory of engine shared libraries. (Same effect as the --dynamic-engine-lib or --dynamic-engine-lib-dir options.) See chapter 5 for more information on dynamic engine libraries.

dynamicdetection [ file <shared library path> | directory <directory of shared libraries> ]
Tells snort to load the dynamic detection rules shared library (if file is used) or all dynamic detection rules shared libraries (if directory is used). Specify file, followed by the full or relative path to the shared library. Or, specify directory, followed by the full or relative path to a directory of detection rules shared libraries. (Same effect as the --dynamic-detection-lib or --dynamic-detection-lib-dir options.) See chapter 5 for more information on dynamic detection rules libraries.

2.9 Reloading a Snort Configuration

A separate thread will parse and create a swappable configuration object while the main Snort packet processing thread continues inspecting traffic under the current configuration. When a swappable configuration object is ready for use, the main Snort packet processing thread will swap in the new configuration to use and will continue processing under the new configuration. Note that for some preprocessors, existing session data will continue to use the configuration under which they were created in order to continue with proper state for that session. All newly created sessions will, however, use the new configuration.

2.9.1 Enabling support

To enable support for reloading a configuration, add --enable-reload to configure when compiling.

There is also an ancillary option that determines how Snort should behave if any non-reloadable options are changed (see section 2.9.3 below). This option is enabled by default, and the behavior is for Snort to restart if any non-reloadable options are added/modified/removed. To disable this behavior and have Snort exit instead of restart, add --disable-reload-error-restart in addition to --enable-reload to configure when compiling.

! △NOTE
This functionality is not currently supported in Windows.

2.9.2 Reloading a configuration

First modify your snort.conf (the file passed to the -c option on the command line). Then, send Snort a SIGHUP signal to initiate a reload, e.g.

$ kill -SIGHUP <snort pid>

! △NOTE
If reload support is not enabled, Snort will restart (as it always has) upon receipt of a SIGHUP.

! △NOTE
An invalid configuration will still result in Snort fatal erroring, so you should test your new configuration before issuing a reload, e.g. $ snort -c snort.conf -T

2.9.3 Non-reloadable configuration options

There are a number of option changes that are currently non-reloadable because they require changes to output, startup memory allocations, etc. Modifying any of these options will cause Snort to restart (as a SIGHUP previously did) or exit (if --disable-reload-error-restart was used to configure Snort).

Reloadable configuration options of note:

• Adding/modifying/removing text rules and variables are reloadable.
• Adding/modifying/removing preprocessor configurations are reloadable (except as noted below).

Non-reloadable configuration options of note:

• Adding/modifying/removing shared objects via dynamicdetection, dynamicengine and dynamicpreprocessor are not reloadable, i.e. any new/modified/removed shared objects will require a restart.
• Any changes to output will require a restart.

Modified options that require a restart:

dynamicdetection
dynamicengine
dynamicpreprocessor
output

In certain cases, only some of the parameters to a config option or preprocessor configuration are not reloadable. Those parameters are listed below the relevant config option or preprocessor.

2.10 Multiple Configurations

Snort now supports multiple configurations based on VLAN Id or IP subnet within a single instance of Snort. Each configuration can have different preprocessor settings and detection rules.

2.10.1 Creating Multiple Configurations

The default configuration for snort is specified using the existing -c option. A default configuration binds multiple vlans or networks to non-default configurations. VLANs/Subnets not bound to any specific configuration will use the default configuration. Each unique snort configuration file will create a new configuration instance within snort. Non-default configurations are bound using the following configuration lines:

config binding: <path_to_snort.conf> vlan <vlanIdList>
config binding: <path_to_snort.conf> net <ipList>
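The attribute table's XML layout lends itself to straightforward machine processing outside of Snort. The sketch below is an illustration (not Snort's loader) that reads a trimmed-down table in the format shown above with Python's standard library, resolving the optional mapping-section IDs; the element names follow the example file.

```python
import xml.etree.ElementTree as ET

# Minimal attribute table in the format shown above (abbreviated).
XML = """
<SNORT_ATTRIBUTES>
  <ATTRIBUTE_MAP>
    <ENTRY><ID>2</ID><VALUE>ssh</VALUE></ENTRY>
  </ATTRIBUTE_MAP>
  <ATTRIBUTE_TABLE>
    <HOST>
      <IP>192.168.1.234</IP>
      <SERVICES>
        <SERVICE>
          <PORT><ATTRIBUTE_VALUE>22</ATTRIBUTE_VALUE></PORT>
          <PROTOCOL><ATTRIBUTE_ID>2</ATTRIBUTE_ID></PROTOCOL>
        </SERVICE>
      </SERVICES>
    </HOST>
  </ATTRIBUTE_TABLE>
</SNORT_ATTRIBUTES>
"""

def load_services(xml_text):
    """Return {host_ip: [(port, protocol_name)]}, resolving map IDs."""
    root = ET.fromstring(xml_text)
    # Build the optional ID -> VALUE mapping section.
    id_map = {e.findtext("ID"): e.findtext("VALUE") for e in root.iter("ENTRY")}
    table = {}
    for host in root.iter("HOST"):
        services = []
        for svc in host.iter("SERVICE"):
            port = svc.findtext("PORT/ATTRIBUTE_VALUE")
            # A PROTOCOL carries either a literal value or a mapped ID.
            proto = svc.findtext("PROTOCOL/ATTRIBUTE_VALUE") or \
                    id_map.get(svc.findtext("PROTOCOL/ATTRIBUTE_ID"))
            services.append((port, proto))
        table[host.findtext("IP")] = services
    return table

print(load_services(XML))
```

A reader like this can double as a sanity check that a hand-edited table still resolves every ATTRIBUTE_ID before Snort re-reads it on signal 30.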
path_to_snort.conf - Refers to the absolute or relative path to the snort.conf for the specific configuration.
vlanIdList - Refers to the comma separated list of vlanIds and vlanId ranges. A valid vlanId is any number in the 0-4095 range. The format for ranges is two vlanIds separated by a "-". Spaces are allowed within ranges. Negative vlanIds and alphanumeric values are not supported.
ipList - Refers to ip subnets. Subnets can be CIDR blocks for IPv6 or IPv4.

! △NOTE
Vlans and Subnets can not be used in the same line. Configurations can be applied based on either Vlans or Subnets, not both.

! △NOTE
Even though Vlan Ids 0 and 4095 are reserved, they are included as valid in terms of configuring Snort.

The following config options are specific to each configuration:

policy_id
policy_mode
policy_version

If not defined in a configuration, the default values of the option (not the default configuration values) take effect.

A rule shares all parts of the rule options, including the general options, payload detection options, non-payload detection options, and post-detection options. Parts of the rule header can be specified differently across configurations. If a rule is not specified in a configuration, then the rule will never raise an event for that configuration.
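Putting the binding syntax together, a default snort.conf might end with lines like the following sketch (the file names and address ranges here are hypothetical, chosen only to illustrate one vlan binding and one subnet binding):

```
# Bound configurations; unbound vlans/subnets use this default snort.conf.
config binding: /etc/snort/snort_vlan.conf vlan 100-200
config binding: /etc/snort/snort_dmz.conf net 10.1.2.0/24
```

Each referenced file is a complete snort.conf for that traffic slice, with its own preprocessor settings and rules.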
The rule action tells Snort what to do when it finds a packet that matches the rule criteria. There are 5 available default actions in Snort: alert, log, pass, activate, and dynamic. In addition, if you are running Snort in inline mode, you have additional options which include drop, reject, and sdrop.

1. alert - generate an alert using the selected alert method, and then log the packet
2. log - log the packet
3. pass - ignore the packet
4. activate - alert and then turn on another dynamic rule
5. dynamic - remain idle until activated by an activate rule, then act as a log rule
6. drop - make iptables drop the packet and log the packet
7. reject - make iptables drop the packet, log it, and then send a TCP reset if the protocol is TCP or an ICMP port unreachable message if the protocol is UDP
8. sdrop - make iptables drop the packet but do not log it

You can also define your own rule types and associate one or more output plugins with them. You can then use the rule types as actions in Snort rules.

This example will create a type that will log to just tcpdump:

ruletype suspicious
{
  type log
  output log_tcpdump: suspicious.log
}

This example will create a rule type that will log to syslog and a MySQL database:

ruletype redalert
{
  type alert
  output alert_syslog: LOG_AUTH LOG_ALERT
  output database: log, mysql, user=snort dbname=snort host=localhost
}

3.2.2 Protocols

The next field in a rule is the protocol. There are four protocols that Snort currently analyzes for suspicious behavior – TCP, UDP, ICMP, and IP. In the future there may be more, such as ARP, IGRP, GRE, OSPF, RIP, IPX, etc.

3.2.3 IP Addresses

The next portion of the rule header deals with the IP address and port information for a given rule. The keyword any may be used to define any address. Snort does not have a mechanism to provide host name lookup for the IP address fields in the rules file. The addresses are formed by a straight numeric IP address and a CIDR[3] block. The CIDR block indicates the netmask that should be applied to the rule's address and any incoming packets that are tested against the rule. A CIDR block mask of /24 indicates a Class C network, /16 a Class B network, and /32 indicates a specific machine address. For example, the address/CIDR combination 192.168.1.0/24 would signify the block of addresses from 192.168.1.1 to 192.168.1.255. Any rule that used this designation for, say, the destination address would match on any address in that range. The CIDR designations give us a nice short-hand way to designate large address spaces with just a few characters.

There is an operator that can be applied to IP addresses, the negation operator. This operator tells Snort to match any IP address except the one indicated by the listed IP address. The negation operator is indicated with a !. For example, an easy modification to the initial example is to make it alert on any traffic that originates outside of the local net with the negation operator, as shown in Figure 3.2. In that rule, the source IP address was set to match for any computer talking, and the destination address was set to match on the 192.168.1.0 Class C network.

alert tcp !192.168.1.0/24 any -> 192.168.1.0/24 111 \
  (content: "|00 01 86 a5|"; msg: "external mountd access";)

Figure 3.2: Example IP Address Negation Rule

This rule's IP addresses indicate any tcp packet with a source IP address not originating from the internal network and a destination address on the internal network.

You may also specify lists of IP addresses. An IP list is specified by enclosing a comma separated list of IP addresses and CIDR blocks within square brackets. For the time being, the IP list may not include spaces between the addresses. See Figure 3.3 for an example of an IP list in action.

alert tcp ![192.168.1.0/24] any -> \
  [192.168.1.0/24] 111 (content: "|00 01 86 a5|"; msg: "external mountd access";)

Figure 3.3: IP Address Lists

3.2.4 Port Numbers

Port numbers may be specified in a number of ways, including any ports, static port definitions, ranges, and by negation. Any ports are a wildcard value, meaning literally any port. Static ports are indicated by a single port number, such as 111 for portmapper, 23 for telnet, or 80 for http, etc. Port ranges are indicated with the range operator :. The range operator may be applied in a number of ways to take on different meanings, such as in Figure 3.4.

log udp any any -> 192.168.1.0/24 1:1024    log udp traffic coming from any port and destination ports ranging from 1 to 1024
log tcp any any -> 192.168.1.0/24 :6000     log tcp traffic from any port going to ports less than or equal to 6000
log tcp any :1024 -> 192.168.1.0/24 500:    log tcp traffic from privileged ports less than or equal to 1024 going to ports greater than or equal to 500

Figure 3.4: Port Range Examples

Port negation is indicated by using the negation operator !. The negation operator may be applied against any of the other rule types (except any, which would translate to none, how Zen...). For example, if for some twisted reason you wanted to log everything except the X Windows ports, you could do something like the rule in Figure 3.5.

log tcp any any -> 192.168.1.0/24 !6000:6010

Figure 3.5: Example of Port Negation
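The CIDR containment test described in the IP addresses section can be illustrated with Python's ipaddress module. This is only an illustration of the addressing semantics (containment plus the ! negation operator), not Snort's implementation:

```python
import ipaddress

# The rule address from the examples above: a Class C block.
HOME_NET = ipaddress.ip_network("192.168.1.0/24")

def matches(addr, negated=False):
    """Mimic a rule address test: CIDR containment, optionally negated (!)."""
    hit = ipaddress.ip_address(addr) in HOME_NET
    return not hit if negated else hit

print(matches("192.168.1.17"))            # inside the /24
print(matches("10.0.0.5", negated=True))  # '!' matches anything outside
```

The same containment check, applied per element, is what an IP list in square brackets amounts to.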
3.2.5 The Direction Operator

The direction operator -> indicates the orientation, or direction, of the traffic that the rule applies to. The IP address and port numbers on the left side of the direction operator are considered to be the traffic coming from the source host, and the address and port information on the right side of the operator is the destination host. There is also a bidirectional operator, which is indicated with a <> symbol. This tells Snort to consider the address/port pairs in either the source or destination orientation. This is handy for recording/analyzing both sides of a conversation, such as telnet or POP3 sessions. An example of the bidirectional operator being used to record both sides of a telnet session is shown in Figure 3.6.

log tcp !192.168.1.0/24 any <> 192.168.1.0/24 23

Figure 3.6: Snort rules using the Bidirectional Operator

Also, note that there is no <- operator. In Snort versions before 1.8.7, the direction operator did not have proper error checking and many people used an invalid token. The reason the <- does not exist is so that rules always read consistently.

3.2.6 Activate/Dynamic Rules

! △NOTE
Activate and Dynamic rules are being phased out in favor of a combination of tagging (3.7.5) and flowbits (3.6.10).

Activate/dynamic rule pairs give Snort a powerful capability. You can now have one rule activate another when its action is performed for a set number of packets. This is very useful if you want to set Snort up to perform follow on recording when a specific rule goes off. Activate rules act just like alert rules, except they have a *required* option field: activates. Dynamic rules act just like log rules, but they have a different option field: activated_by. Dynamic rules have a second required field as well: count. Activate rules are just like alerts but also tell Snort to add a rule when a specific network event occurs. Dynamic rules are just like log rules except are dynamically enabled when the activate rule id goes off.

Put 'em together and they look like Figure 3.7.

activate tcp !$HOME_NET any -> $HOME_NET 143 (flags: PA; \
  content: "|E8C0FFFFFF|/bin"; activates: 1; \
  msg: "IMAP buffer overflow!";)
dynamic tcp !$HOME_NET any -> $HOME_NET 143 (activated_by: 1; count: 50;)

Figure 3.7: Activate/Dynamic Rule Example

These rules tell Snort to alert when it detects an IMAP buffer overflow and collect the next 50 packets headed for port 143 coming from outside $HOME_NET headed to $HOME_NET. If the buffer overflow happened and was successful, there's a very good possibility that useful data will be contained within the next 50 (or whatever) packets going to that same service port on the network, so there's value in collecting those packets for later analysis.

3.3 Rule Options

Rule options form the heart of Snort's intrusion detection engine, combining ease of use with power and flexibility. All Snort rule options are separated from each other using the semicolon (;) character. Rule option keywords are separated from their arguments with a colon (:) character. There are four major categories of rule options.

3.4 General Rule Options

3.4.1 msg

The msg rule option tells the logging and alerting engine the message to print along with a packet dump or to an alert. It is a simple text string that utilizes the \ as an escape character to indicate a discrete character that might otherwise confuse Snort's rules parser (such as the semi-colon ; character).

Format

msg: "<message text>";

3.4.2 reference

The reference keyword allows rules to include references to external attack identification systems. The plugin currently supports several specific systems as well as unique URLs. This plugin is to be used by output plugins to provide a link to additional information about the alert produced. Make sure to also take a look at http://www.snort.org/pub-bin/sigs-search.cgi/ for a system that is indexing descriptions of alerts based on the sid (See Section 3.4.4).

System     URL Prefix
bugtraq    http://www.securityfocus.com/bid/
cve        http://cve.mitre.org/cgi-bin/cvename.cgi?name=
nessus     http://cgi.nessus.org/plugins/dump.php3?id=
arachnids  (currently down) http://www.whitehats.com/info/IDS
mcafee     http://vil.nai.com/vil/dispVirus.asp?virus_k=
url        http://

Table 3.1: Supported Systems

Format

reference: <id system>,<id>; [reference: <id system>,<id>;]

Examples

alert tcp any any -> any 7070 (msg:"IDS411/dos-realaudio"; \
  flags:AP; content:"|fff4 fffd 06|"; reference:arachnids,IDS411;)

alert tcp any any -> any 21 (msg:"IDS287/ftp-wuftp260-venglin-linux"; \
  flags:AP; content:"|31c031db 31c9b046 cd80 31c031db|"; \
  reference:arachnids,IDS287; reference:bugtraq,1387; \
  reference:cve,CAN-2000-1574;)

3.4.3 gid

The gid keyword (generator id) is used to identify what part of Snort generates the event when a particular rule fires. For example, gid 1 is associated with the rules subsystem and various gids over 100 are designated for specific preprocessors and the decoder. See etc/generators in the source tree for the current generator ids in use. Note that the gid keyword is optional and if it is not specified in a rule, it will default to 1 and the rule will be part of the general rule subsystem. To avoid potential conflict with gids defined in Snort (that for some reason aren't noted in etc/generators), it is recommended that a value greater than 1,000,000 be used. For general rule writing, it is not recommended that the gid keyword be used. This option should be used with the sid keyword. (See section 3.4.4.) The file etc/gen-msg.map contains more information on preprocessor and decoder gids.

Format

gid: <generator id>;

Example

This example is a rule with a generator id of 1000001.

alert tcp any any -> any 80 (content:"BOB"; gid:1000001; sid:1; rev:1;)

3.4.4 sid

The sid keyword is used to uniquely identify Snort rules. This information allows output plugins to identify rules easily. This option should be used with the rev keyword. (See section 3.4.5.)

• <100 Reserved for future use
• 100-1,000,000 Rules included with the Snort distribution
• >1,000,000 Used for local rules

The file sid-msg.map contains a mapping of alert messages to Snort rule IDs. This information is useful when postprocessing alerts to map an ID to an alert message.

Format

sid: <snort rules id>;

Example

This example is a rule with the Snort Rule ID of 1000983.

alert tcp any any -> any 80 (content:"BOB"; sid:1000983; rev:1;)

3.4.5 rev

The rev keyword is used to uniquely identify revisions of Snort rules. Revisions, along with Snort rule id's, allow signatures and descriptions to be refined and replaced with updated information. This option should be used with the sid keyword. (See section 3.4.4.)

Format

rev: <revision integer>;

Example

This example is a rule with the Snort Rule Revision of 1.

alert tcp any any -> any 80 (content:"BOB"; sid:1000983; rev:1;)

3.4.6 classtype

The classtype keyword is used to categorize a rule as detecting an attack that is part of a more general type of attack class.
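An output plugin turns a reference option into a link by prepending the matching Table 3.1 prefix to the id. A small sketch of that expansion (the prefix list is transcribed from the table; treat it as illustrative):

```python
# URL prefixes from Table 3.1; the url system's id is itself a URL suffix.
PREFIXES = {
    "bugtraq":   "http://www.securityfocus.com/bid/",
    "cve":       "http://cve.mitre.org/cgi-bin/cvename.cgi?name=",
    "nessus":    "http://cgi.nessus.org/plugins/dump.php3?id=",
    "arachnids": "http://www.whitehats.com/info/IDS",
    "mcafee":    "http://vil.nai.com/vil/dispVirus.asp?virus_k=",
    "url":       "http://",
}

def reference_url(system, ref_id):
    """Expand a 'reference: <system>,<id>;' option into a full URL."""
    return PREFIXES[system] + ref_id

print(reference_url("bugtraq", "1387"))
print(reference_url("cve", "CAN-2000-1574"))
```

This mirrors what alert post-processing consoles do when rendering the reference keyword.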
Defining classifications for rules provides a way to better organize the event data Snort produces.

Format

classtype: <class name>;

Example

alert tcp any any -> any 25 (msg:"SMTP expn root"; flags:A+; \
  content:"expn root"; nocase; classtype:attempted-recon;)

Attack classifications defined by Snort reside in the classification.config file. The file uses the following syntax:

config classification: <class name>,<class description>,<default priority>

These attack classifications are listed in Table 3.2. They are currently ordered with 3 default priorities. A priority of 1 (high) is the most severe and 3 (low) is the least severe. Snort provides a default set of attack classes that are used by the default set of rules it provides, in classification.config, by using the config classification option.

3.4.7 priority

The priority tag assigns a severity level to rules. A classtype rule assigns a default priority (defined by the config classification option) that may be overridden with a priority rule. Examples of each case are given below.

Format

priority: <priority integer>;

Examples

alert TCP any any -> any 80 (msg: "WEB-MISC phf attempt"; flags:A+; \
  content: "/cgi-bin/phf"; priority:10;)

alert tcp any any -> any 80 (msg:"EXPLOIT ntpdx overflow"; \
  dsize: >128; classtype:attempted-admin; priority:10;)

3.4.8 metadata

The metadata tag allows a rule writer to embed additional information about the rule, typically in a key-value format. Certain metadata keys and values have meaning to Snort and are listed in Table 3.3. Keys other than those listed in the table are effectively ignored by Snort and can be free-form, with a key and a value. Multiple keys are separated by a comma, while keys and values are separated by a space.

For the service key, when the value exactly matches the service ID as specified in the Host Attribute Table, the rule is applied to that packet; otherwise, the rule is not applied (even if the ports specified in the rule match). See Section 2.7 for details on the Host Attribute Table.

Format

metadata: key1 value1;
metadata: key1 value1, key2 value2;

Examples

The examples below show a stub rule from a shared library rule. The first uses multiple metadata keywords, the second a single metadata keyword with multiple keys.

alert tcp any any -> any 80 (msg: "Shared Library Rule Example"; \
  metadata:engine shared; metadata:soid 3|12345;)

alert tcp any any -> any 80 (msg: "Shared Library Rule Example"; \
  metadata:engine shared, soid 3|12345;)

alert tcp any any -> any 80 (msg: "HTTP Service Rule Example"; \
  metadata:service http;)

3.4.9 General Rule Quick Reference

Table 3.4: General rule option keywords

Keyword    Description
msg        The msg keyword tells the logging and alerting engine the message to print with the packet dump or alert.
reference  The reference keyword allows rules to include references to external attack identification systems.
gid        The gid keyword (generator id) is used to identify what part of Snort generates the event when a particular rule fires.
sid        The sid keyword is used to uniquely identify Snort rules.
rev        The rev keyword is used to uniquely identify revisions of Snort rules.
classtype  The classtype keyword is used to categorize a rule as detecting an attack that is part of a more general type of attack class.
priority   The priority keyword assigns a severity level to rules.
metadata   The metadata keyword allows a rule writer to embed additional information about the rule, typically in a key-value format.

3.5 Payload Detection Rule Options

3.5.1 content

The content keyword is one of the more important features of Snort. It allows the user to set rules that search for specific content in the packet payload and trigger response based on that data. Whenever a content option pattern match is performed, the Boyer-Moore pattern match function is called and the (rather computationally expensive) test is performed against the packet contents. If data exactly matching the argument data string is contained anywhere within the packet's payload, the test is successful and the remainder of the rule option tests are performed. Be aware that this test is case sensitive.

The option data for the content keyword is somewhat complex; it can contain mixed text and binary data. The binary data is generally enclosed within the pipe (|) character and represented as bytecode. Bytecode represents binary data as hexadecimal numbers and is a good shorthand method for describing complex binary data. The example below shows use of mixed text and binary data in a Snort rule.

Note that multiple content rules can be specified in one rule. This allows rules to be tailored for less false positives.

If the rule is preceded by a !, the alert will be triggered on packets that do not contain this content. This is useful when writing rules that want to alert on packets that do not match a certain pattern.

! △NOTE
Also note that the following characters must be escaped inside a content rule: : ; \ "

Format

content: [!] "<content string>";

Examples

alert tcp any any -> any 139 (content:"|5c 00|P|00|I|00|P|00|E|00 5c|";)
alert tcp any any -> any 80 (content:!"GET";)

! △NOTE
A ! modifier negates the results of the entire content search, modifiers included. For example, if using content:!"A"; within:50; and there are only 5 bytes of payload and there is no "A" in those 5 bytes, the result will return a match. If there must be 50 bytes for a valid match, use isdataat as a pre-cursor to the content.

Changing content behavior

The content keyword has a number of modifier keywords. The modifier keywords change how the previously specified content works. These modifier keywords are:

Table 3.5: Content Modifiers

Modifier           Section
nocase             3.5.2
rawbytes           3.5.3
depth              3.5.4
offset             3.5.5
distance           3.5.6
within             3.5.7
http_client_body   3.5.8
http_cookie        3.5.9
http_header        3.5.10
http_method        3.5.11
http_uri           3.5.12
fast_pattern       3.5.13

3.5.2 nocase

The nocase keyword allows the rule writer to specify that Snort should look for the specific pattern, ignoring case. nocase modifies the previous content keyword in the rule.

Format

nocase;

Example

alert tcp any any -> any 21 (msg:"FTP ROOT"; content:"USER root"; nocase;)

3.5.3 rawbytes

The rawbytes keyword allows rules to look at the raw packet data, ignoring any decoding that was done by preprocessors. This acts as a modifier to the previous content option.

Format

rawbytes;

Example

This example tells the content pattern matcher to look at the raw traffic, instead of the decoded traffic provided by the Telnet decoder.

alert tcp any any -> any 21 (msg: "Telnet NOP"; content: "|FF F1|"; rawbytes;)

3.5.4 depth

The depth keyword allows the rule writer to specify how far into a packet Snort should search for the specified pattern. A depth of 5 would tell Snort to only look for the specified pattern within the first 5 bytes of the payload. As the depth keyword is a modifier to the previous content keyword, there must be a content in the rule before depth is specified. depth modifies the previous content keyword in the rule.

Format

depth: <number>;

3.5.5 offset

The offset keyword allows the rule writer to specify where to start searching for a pattern within a packet. An offset of 5 would tell Snort to start looking for the specified pattern after the first 5 bytes of the payload. As this keyword is a modifier to the previous content keyword, there must be a content in the rule before offset is specified. offset modifies the previous content keyword in the rule.

Format

offset: <number>;

Example

The following example shows use of a combined content, offset, and depth search rule.

alert tcp any any -> any 80 (content: "cgi-bin/phf"; offset:4; depth:20;)

3.5.6 distance

The distance keyword allows the rule writer to specify how far into a packet Snort should ignore before starting to search for the specified pattern relative to the end of the previous pattern match. This can be thought of as exactly the same thing as offset (See Section 3.5.5), except it is relative to the end of the last pattern match instead of the beginning of the packet.

Format

distance: <byte count>;

Example

The rule below maps to a regular expression of /ABC.{1}DEF/.

alert tcp any any -> any any (content:"ABC"; content: "DEF"; distance:1;)

3.5.7 within

The within keyword is a content modifier that makes sure that at most N bytes are between pattern matches using the content keyword (See Section 3.5.1). It's designed to be used in conjunction with the distance (Section 3.5.6) rule option.

Format

within: <byte count>;

Examples

This rule constrains the search of EFG to not go past 10 bytes past the ABC match.

alert tcp any any -> any any (content:"ABC"; content: "EFG"; within:10;)

3.5.8 http_client_body

The http_client_body keyword is a content modifier that restricts the search to the NORMALIZED body of an HTTP client request. As this keyword is a modifier to the previous content keyword, there must be a content in the rule before http_client_body is specified.

Format

http_client_body;

Examples

This rule constrains the search for the pattern "EFG" to the NORMALIZED body of an HTTP client request.

alert tcp any any -> any 80 (content:"ABC"; content: "EFG"; http_client_body;)

! △NOTE
The http_client_body modifier is not allowed to be used with the rawbytes modifier for the same content.

3.5.9 http_cookie

The http_cookie keyword is a content modifier that restricts the search to the extracted Cookie Header field of an HTTP client request. As this keyword is a modifier to the previous content keyword, there must be a content in the rule before http_cookie is specified. The extracted Cookie Header field may be NORMALIZED, per the configuration of HttpInspect (see 2.2.6).

Format

http_cookie;

Examples

This rule constrains the search for the pattern "EFG" to the extracted Cookie Header field of an HTTP client request.

alert tcp any any -> any 80 (content:"ABC"; content: "EFG"; http_cookie;)

! △NOTE
The http_cookie modifier is not allowed to be used with the rawbytes or fast_pattern modifiers for the same content.

3.5.10 http_header

The http_header keyword is a content modifier that restricts the search to the extracted Header fields of an HTTP client request. As this keyword is a modifier to the previous content keyword, there must be a content in the rule before http_header is specified. The extracted Header fields may be NORMALIZED, per the configuration of HttpInspect (see 2.2.6).

Format

http_header;

Examples

This rule constrains the search for the pattern "EFG" to the extracted Header fields of an HTTP client request.

alert tcp any any -> any 80 (content:"ABC"; content: "EFG"; http_header;)

! △NOTE
The http_header modifier is not allowed to be used with the rawbytes modifier for the same content.

3.5.11 http_method

The http_method keyword is a content modifier that restricts the search to the extracted Method from an HTTP client request. As this keyword is a modifier to the previous content keyword, there must be a content in the rule before http_method is specified.

Format

http_method;
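Returning to the content keyword's mixed text/bytecode notation described at the start of this section, the sketch below decodes a content string into raw bytes and applies offset/depth limits. This is an illustration only, not Snort's Boyer-Moore matcher, and the interaction of offset and depth is simplified (the window here is measured from the offset point).

```python
def content_to_bytes(pattern):
    """Decode a Snort content string: |..| spans are hex bytecode,
    everything else is literal ASCII text."""
    out = bytearray()
    for i, part in enumerate(pattern.split("|")):
        if i % 2:  # inside pipes: whitespace-separated hex bytes
            out += bytes.fromhex(part)
        else:
            out += part.encode("ascii")
    return bytes(out)

def content_match(payload, pattern, offset=0, depth=None):
    """Search for the decoded pattern within payload[offset:offset+depth]."""
    needle = content_to_bytes(pattern)
    window = payload[offset:] if depth is None else payload[offset:offset + depth]
    return needle in window

payload = b"\x00\x01\x86\xa5GET /cgi-bin/phf"
print(content_match(payload, "|00 01 86 a5|"))               # bytecode pattern
print(content_match(payload, "cgi-bin/phf", offset=4, depth=20))
```

The same decoding applies to mixed patterns such as "|5c 00|P|00|I|00|P|00|E|00 5c|", where text and bytecode alternate around the pipe characters.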
3.5.4 depth

The depth keyword allows the rule writer to specify how far into a packet Snort should search for the specified pattern. A depth of 5 would tell Snort to only look for the specified pattern within the first 5 bytes of the payload. As the depth keyword is a modifier to the previous content keyword, there must be a content in the rule before depth is specified.

Format

depth: <number>;

3.5.5 offset

The offset keyword allows the rule writer to specify where to start searching for a pattern within a packet. An offset of 5 would tell Snort to start looking for the specified pattern after the first 5 bytes of the payload. As this keyword is a modifier to the previous content keyword, there must be a content in the rule before offset is specified.

Format

offset: <number>;

Example

The following example shows use of a combined content, offset, and depth search rule:

alert tcp any any -> any 80 (content:"cgi-bin/phf"; offset:4; depth:20;)

3.5.6 distance

The distance keyword allows the rule writer to specify how far into a packet Snort should ignore before starting to search for the specified pattern relative to the end of the previous pattern match. This can be thought of as exactly the same thing as offset (see Section 3.5.5), except it is relative to the end of the last pattern match instead of the beginning of the packet.

Format

distance: <byte count>;

Example

The rule below maps to a regular expression of /ABC.{1}DEF/:

alert tcp any any -> any any (content:"ABC"; content:"DEF"; distance:1;)
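offset and depth together define a fixed search window from the start of the payload. As an additional illustration (this rule is a sketch, not taken from the manual), a pattern can be anchored to the very beginning of the payload by combining offset:0 with a depth equal to the pattern length:

```
alert tcp any any -> any 80 (msg:"GET at start of payload"; content:"GET"; offset:0; depth:3;)
```

Since "GET" is 3 bytes long, offset:0 with depth:3 means the match can only succeed at payload position 0.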
3.5.7 within

The within keyword is a content modifier that makes sure that at most N bytes are between pattern matches using the content keyword. It's designed to be used in conjunction with the distance (Section 3.5.6) rule option.

Format

within: <byte count>;

Examples

This rule constrains the search of EFG to not go past 10 bytes past the ABC match:

alert tcp any any -> any any (content:"ABC"; content:"EFG"; within:10;)

3.5.8 http_client_body

The http_client_body keyword is a content modifier that restricts the search to the NORMALIZED body of an HTTP client request. As this keyword is a modifier to the previous content keyword, there must be a content in the rule before http_client_body is specified.

Format

http_client_body;

Examples

This rule constrains the search for the pattern "EFG" to the NORMALIZED body of an HTTP client request:

alert tcp any any -> any 80 (content:"ABC"; content:"EFG"; http_client_body;)

! △NOTE The http_client_body modifier is not allowed to be used with the rawbytes modifier for the same content.

3.5.9 http_cookie

The http_cookie keyword is a content modifier that restricts the search to the extracted Cookie Header field of an HTTP client request. The extracted Cookie Header field may be NORMALIZED, per the configuration of HttpInspect (see Section 2.2.6). As this keyword is a modifier to the previous content keyword, there must be a content in the rule before http_cookie is specified.

Format

http_cookie;

Examples

This rule constrains the search for the pattern "EFG" to the extracted Cookie Header field of an HTTP client request:

alert tcp any any -> any 80 (content:"ABC"; content:"EFG"; http_cookie;)

! △NOTE The http_cookie modifier is not allowed to be used with the rawbytes or fast_pattern modifiers for the same content.

3.5.10 http_header

The http_header keyword is a content modifier that restricts the search to the extracted Header fields of an HTTP client request. The extracted Header fields may be NORMALIZED, per the configuration of HttpInspect (see Section 2.2.6). As this keyword is a modifier to the previous content keyword, there must be a content in the rule before http_header is specified.

Format

http_header;

Examples

This rule constrains the search for the pattern "EFG" to the extracted Header fields of an HTTP client request:

alert tcp any any -> any 80 (content:"ABC"; content:"EFG"; http_header;)

! △NOTE The http_header modifier is not allowed to be used with the rawbytes modifier for the same content.
3.5.11 http_method

The http_method keyword is a content modifier that restricts the search to the extracted Method from an HTTP client request. As this keyword is a modifier to the previous content keyword, there must be a content in the rule before http_method is specified.

Format

http_method;

Examples

This rule constrains the search for the pattern "GET" to the extracted Method from an HTTP client request:

alert tcp any any -> any 80 (content:"ABC"; content:"GET"; http_method;)

! △NOTE The http_method modifier is not allowed to be used with the rawbytes modifier for the same content.

3.5.12 http_uri

The http_uri keyword is a content modifier that restricts the search to the NORMALIZED request URI field. Using a content rule option followed by a http_uri modifier is the same as using a uricontent by itself (see Section 3.5.14). As this keyword is a modifier to the previous content keyword, there must be a content in the rule before http_uri is specified.

Format

http_uri;

Examples

This rule constrains the search for the pattern "EFG" to the NORMALIZED URI:

alert tcp any any -> any 80 (content:"ABC"; content:"EFG"; http_uri;)

! △NOTE The http_uri modifier is not allowed to be used with the rawbytes modifier for the same content.

3.5.13 fast_pattern

The fast_pattern keyword is a content modifier that sets the content within a rule to be used with the Fast Pattern Matcher. It overrides the default of using the longest content within the rule. fast_pattern may be specified at most once for each of the buffer modifiers (excluding the http_cookie modifier). As this keyword is a modifier to the previous content keyword, there must be a content in the rule before fast_pattern is specified.

Format

fast_pattern;

Examples

This rule causes the pattern "EFG" to be used with the Fast Pattern Matcher, even though it is shorter than the earlier pattern "ABCD":

alert tcp any any -> any 80 (content:"ABCD"; content:"EFG"; fast_pattern;)

! △NOTE The fast_pattern modifier is not allowed to be used with the http_cookie modifier for the same content, nor with a content that is negated with a !.

3.5.14 uricontent

The uricontent keyword in the Snort rule language searches the NORMALIZED request URI field. This means that if you are writing rules that include things that are normalized, such as %2f or directory traversals, these rules will not alert. The reason is that the things you are looking for are normalized out of the URI buffer. For example, the URI:

/scripts/..%c0%af../winnt/system32/cmd.exe?/c+ver

will get normalized into:

/winnt/system32/cmd.exe?/c+ver

Another example, the URI:

/cgi-bin/aaaaaaaaaaaaaaaaaaaaaaaaaa/..%252fp%68f?
will get normalized into:

/cgi-bin/phf?

When writing a uricontent rule, write the content that you want to find in the context that the URI will be normalized. For example, if Snort normalizes directory traversals, do not include directory traversals. You can write rules that look for the non-normalized content by using the content option. This option works in conjunction with the HTTP Inspect preprocessor specified in Section 2.2.6. For a description of the parameters to this function, see the content rule options in Section 3.5.1.

Format

uricontent:[!]<content string>;

! △NOTE uricontent cannot be modified by a rawbytes modifier.

3.5.15 urilen

The urilen keyword in the Snort rule language specifies the exact length, the minimum length, the maximum length, or range of URI lengths to match.

Format

urilen: int<>int;
urilen: [<,>] <int>;

3.5.16 isdataat

Verify that the payload has data at a specified location, optionally looking for data relative to the end of the previous content match.

Format

isdataat:<int>[,relative];

Example

alert tcp any any -> any 111 (content:"PASS"; isdataat:50,relative; \
    content:!"|0a|"; within:50;)

This rule looks for the string PASS in the packet, then verifies there is at least 50 bytes after the end of the string PASS, then verifies that there is not a newline character within 50 bytes of the end of the PASS string.

3.5.17 pcre

The pcre keyword allows rules to be written using perl compatible regular expressions. For more detail on what can be done via a pcre regular expression, check out the PCRE web site http://www.pcre.org.

Format

pcre:[!]"(/<regex>/|m<delim><regex><delim>)[ismxAEGRUBPHMCO]";

The post-re modifiers set compile time flags for the regular expression. See tables 3.6, 3.7 and 3.8 for descriptions of each modifier.

Table 3.6: Perl compatible modifiers for pcre

    i    case insensitive
    s    include newlines in the dot metacharacter
    m    By default, the string is treated as one big line of characters; ˆ and $ match at the beginning and ending of the string. When m is set, ˆ and $ match immediately following or immediately before any newline in the buffer, as well as the very start and very end of the buffer.
    x    whitespace data characters in the pattern are ignored except when escaped or inside a character class

Table 3.7: PCRE compatible modifiers for pcre

    A    the pattern must match only at the start of the buffer (same as ˆ)
    E    Set $ to match only at the end of the subject string. Without E, $ also matches immediately before the final character if it is a newline (but not before any other newlines).
    G    Inverts the "greediness" of the quantifiers so that they are not greedy by default, but become greedy if followed by "?".

! △NOTE The modifiers R and B should not be used together.

Example

This example performs a case-insensitive search for the string BLAH in the payload:

alert ip any any -> any any (pcre:"/BLAH/i";)

! △NOTE Snort's handling of multiple URIs with PCRE does not work as expected. PCRE when used without a uricontent only evaluates the first URI. In order to use pcre to inspect all URIs, you must use either a content or a uricontent.
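As a further illustration of the Snort-specific modifiers summarized in Table 3.8 below (this rule is illustrative and not taken from the manual), the R modifier makes the expression match relative to the end of the previous content match, much like distance:0 does for content:

```
alert tcp any any -> any 21 (msg:"Long FTP username"; content:"USER "; pcre:"/[^\n]{50}/R";)
```

Here the regular expression is only evaluated against the payload that follows the "USER " match, so the rule fires when at least 50 non-newline bytes immediately follow it.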
Table 3.8: Snort specific modifiers for pcre

    R    Match relative to the end of the last pattern match. (Similar to distance:0;)
    U    Match the decoded URI buffers (similar to uricontent)
    B    Do not use the decoded buffers (similar to rawbytes)
    P    Match normalized HTTP request body (similar to http_client_body)
    H    Match normalized HTTP request header (similar to http_header)
    M    Match normalized HTTP request method (similar to http_method)
    C    Match normalized HTTP request cookie (similar to http_cookie)
    O    Override the configured pcre match limit for this expression

3.5.18 byte_test

Test a byte field against a specific value (with operator). Capable of testing binary values or converting representative byte strings to their binary equivalent and testing them. For a more detailed explanation, please read Section 3.9.5.

Format

byte_test: <bytes to convert>, [!]<operator>, <value>, <offset> \
    [, relative] [, <endian>] [, string, <number type>] [, dce];

    Option             Description
    bytes to convert   Number of bytes to pick up from the packet
    operator           Operation to perform to test the value:
                         <  less than
                         >  greater than
                         =  equal
                         !  not
                         &  bitwise AND
                         ˆ  bitwise XOR
                       Any of the operators can also include ! to check if the operator is not true. If ! is specified without an operator, then the operator is set to =.
    value              Value to test the converted value against
    offset             Number of bytes into the payload to start processing
    relative           Use an offset relative to the last pattern match
    endian             Endian type of the number being read:
                         big    Process data as big endian (default)
                         little Process data as little endian
    string             Data is stored in string format in packet
    number type        Type of number being read:
                         hex  Converted string data is represented in hexadecimal
                         dec  Converted string data is represented in decimal
                         oct  Converted string data is represented in octal
    dce                Let the DCE/RPC 2 preprocessor determine the byte order of the value to be converted. See section 2.2.14 for a description and examples (2.2.14 for quick reference).

! △NOTE Snort uses the C operators for each of these operators. If the & operator is used, then it would be the same as using if (data & value) { do_something(); }

Examples

alert udp $EXTERNAL_NET any -> $HOME_NET any \
    (msg:"AMD procedure 7 plog overflow "; \
    content:"|00 04 93 F3|"; \
    content:"|00 00 00 07|"; distance:4; within:4; \
    byte_test:4, >, 1000, 20, relative;)

alert tcp $EXTERNAL_NET any -> $HOME_NET any \
    (msg:"AMD procedure 7 plog overflow "; \
    content:"|00 04 93 F3|"; \
    content:"|00 00 00 07|"; distance:4; within:4; \
    byte_test:4, >, 1000, 20, relative;)

alert udp any any -> any 1234 \
    (byte_test:4, =, 1234, 0, string, dec; \
    msg:"got 1234!";)

alert udp any any -> any 1235 \
    (byte_test:3, =, 123, 0, string, dec; \
    msg:"got 123!";)

alert udp any any -> any 1236 \
    (byte_test:2, =, 12, 0, string, dec; \
    msg:"got 12!";)

alert udp any any -> any 1237 \
    (byte_test:10, =, 1234567890, 0, string, dec; \
    msg:"got 1234567890!";)

alert udp any any -> any 1238 \
    (byte_test:8, =, 0xdeadbeef, 0, string, hex; \
    msg:"got DEADBEEF!";)

3.5.19 byte_jump

The byte_jump keyword allows rules to be written for length encoded protocols trivially. By having an option that reads the length of a portion of data, then skips that far forward in the packet, rules can be written that skip over specific portions of length-encoded protocols and perform detection in very specific locations. The byte_jump option does this by reading some number of bytes, converting them to their numeric representation, and moving that many bytes forward, setting a pointer for later detection. This pointer is known as the detect offset end pointer, or doe_ptr.

Format

byte_jump: <bytes_to_convert>, <offset> \
    [, relative] [, multiplier <multiplier value>] [, big] [, little] [, string] \
    [, hex] [, dec] [, oct] [, align] [, from_beginning] [, post_offset <adjustment value>];
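To make the byte_jump format concrete before the manual's own example, here is a minimal sketch (the port and patterns are illustrative stand-ins, not from the manual). It reads a 2-byte length field found immediately after a header tag, jumps over that many bytes of variable data, and then checks for a one-byte trailer:

```
alert tcp any any -> any 7000 (msg:"Length-prefixed record with trailer"; \
    content:"|AA BB|"; byte_jump:2, 0, relative; \
    content:"|FF|"; within:1;)
```

After the jump, the doe_ptr sits just past the variable-length data, so the relative content/within test inspects exactly the byte that follows it.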
Multiple options can be used in an ’asn1’ option and the implied logic is boolean OR. The ASN.) 3. 20.14 for a description and examples . So if any of the arguments evaluate as true. See section 2. e. absolute_offset 0. option[ argument]] . asn1: bitstring_overflow. . This keyword must have one argument which specifies the length to compare against. Option invalid-entry Description Looks for an invalid Entry string. ! △NOTEcannot do detection over encrypted sessions. So if you wanted to start decoding and ASN.1 sequence right after the content “foo”. \ asn1: bitstring_overflow. This is the relative offset from the last content match or byte test/jump. Detects a double ASCII encoding that is larger than a standard buffer. CVE-2004-0396: ”Malformed Entry Modified and Unchanged flag insertion”. The syntax looks like. absolute offset has one argument. SSH (usually port 22). . For example.11. \ flow:to_server. if you wanted to decode snmp packets. which is a way of causing a heap overflow (see CVE-2004-0396) and bad pointer derefenece in versions of CVS 1. cvs:invalid-entry. relative offset has one argument.22 cvs The CVS detection plugin aids in the detection of: Bugtraq-10384. Compares ASN.Format asn1: option[ argument][. This plugin Format cvs:<option>. content:"foo".) alert tcp any any -> any 80 (msg:"ASN1 Relative Foo". then this keyword is evaluated as true. but it is unknown at this time which services may be exploitable. Offset may be positive or negative.g. you would specify ’content:"foo". This means that if an ASN. \ asn1: oversize_length 10000. relative_offset 0’. you would say “absolute offset 0”. This is known to be an exploitable function in Microsoft.1 type lengths with the supplied argument.15 and before. Default CVS server ports are 2401 and 514 and are included in the default ports for stream reassembly.established. the offset value.) 3.1 type is greater than 500. relative_offset 0. 
oversize length <value> absolute offset <value> relative offset <value> Examples alert udp any any -> any 161 (msg:"Oversize SNMP Length". Examples alert tcp any any -> any 2401 (msg:"CVS Invalid-entry". Option bitstring overflow double overflow Description Detects invalid bitstring encodings that are known to be remotely exploitable. the offset number.5. “oversize length 500”. Offset values may be positive or negative.) 134 . This is the absolute offset from the beginning of the packet. 14. and looks for various malicious encodings. The uricontent keyword in the Snort rule language searches the normalized request URI field. The offset keyword allows the rule writer to specify where to start searching for a pattern within a packet.24 dce opnum See the DCE/RPC 2 Preprocessor section 2.2. The byte test keyword tests a byte field against a specific value (with operator).26 Payload Detection Quick Reference Table 3.5. The distance keyword allows the rule writer to specify how far into a packet Snort should ignore before starting to search for the specified pattern relative to the end of the previous pattern match.2.14 for a description and examples of using this rule option. 3. The within keyword is a content modifier that makes sure that at most N bytes are between pattern matches using the content keyword. The isdataat keyword verifies that the payload has data at a specified location. The rawbytes keyword allows rules to look at the raw packet data. The pcre keyword allows rules to be written using perl compatible regular expressions.2. ignoring any decoding that was done by preprocessors.14 for a description and examples of using this rule option. The depth keyword allows the rule writer to specify how far into a packet Snort should search for the specified pattern. See the DCE/RPC 2 Preprocessor section 2.14 for a description and examples of using this rule option.5. 
3.5.23 dce_iface

See the DCE/RPC 2 Preprocessor section 2.2.14 for a description and examples of using this rule option.

3.5.24 dce_opnum

See the DCE/RPC 2 Preprocessor section 2.2.14 for a description and examples of using this rule option.

3.5.25 dce_stub_data

See the DCE/RPC 2 Preprocessor section 2.2.14 for a description and examples of using this rule option.

3.5.26 Payload Detection Quick Reference

Table 3.9: Payload detection rule option keywords

    Keyword         Description
    content         The content keyword allows the user to set rules that search for specific content in the packet payload and trigger response based on that data.
    rawbytes        The rawbytes keyword allows rules to look at the raw packet data, ignoring any decoding that was done by preprocessors.
    depth           The depth keyword allows the rule writer to specify how far into a packet Snort should search for the specified pattern.
    offset          The offset keyword allows the rule writer to specify where to start searching for a pattern within a packet.
    distance        The distance keyword allows the rule writer to specify how far into a packet Snort should ignore before starting to search for the specified pattern relative to the end of the previous pattern match.
    within          The within keyword is a content modifier that makes sure that at most N bytes are between pattern matches using the content keyword.
    uricontent      The uricontent keyword in the Snort rule language searches the normalized request URI field.
    isdataat        The isdataat keyword verifies that the payload has data at a specified location.
    pcre            The pcre keyword allows rules to be written using perl compatible regular expressions.
    byte_test       The byte test keyword tests a byte field against a specific value (with operator).
    byte_jump       The byte jump keyword allows rules to read the length of a portion of data, then skip that far forward in the packet.
    ftpbounce       The ftpbounce keyword detects FTP bounce attacks.
    asn1            The asn1 detection plugin decodes a packet or a portion of a packet, and looks for various malicious encodings.
    cvs             The cvs keyword detects invalid entry strings.
    dce_iface       See the DCE/RPC 2 Preprocessor section 2.2.14.
    dce_opnum       See the DCE/RPC 2 Preprocessor section 2.2.14.
    dce_stub_data   See the DCE/RPC 2 Preprocessor section 2.2.14.

3.6 Non-Payload Detection Rule Options

3.6.1 fragoffset

The fragoffset keyword allows one to compare the IP fragment offset field against a decimal value. To catch all the first fragments of an IP session, you could use the fragbits keyword and look for the More fragments option in conjunction with a fragoffset of 0.

Format

fragoffset:[<|>]<number>;

Example

alert ip any any -> any any \
    (msg:"First Fragment"; fragbits:M; fragoffset:0;)

3.6.2 ttl

The ttl keyword is used to check the IP time-to-live value. This option keyword was intended for use in the detection of traceroute attempts.

Format

ttl:[[<number>-]><=]<number>;

Example

This example checks for a time-to-live value that is less than 3:

ttl:<3;

This example checks for a time-to-live value that is between 3 and 5:

ttl:3-5;

3.6.3 tos

The tos keyword is used to check the IP TOS field for a specific value.

Format

tos:[!]<number>;

Example

This example looks for a tos value that is not 4:

tos:!4;
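Since the ttl and tos examples above are bare option fragments, a complete rule using ttl for the traceroute detection suggested by Section 3.6.2 might look like this sketch (the message text is illustrative, not from the manual):

```
alert ip any any -> any any (msg:"Possible traceroute probe - low TTL"; ttl:1;)
```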
3.6.4 id

The id keyword is used to check the IP ID field for a specific value. Some tools (exploits, scanners and other odd programs) set this field specifically for various purposes; for example, the value 31337 is very popular with some hackers.

Format

id:<number>;

Example

This example looks for the IP ID of 31337:

id:31337;

3.6.5 ipopts

The ipopts keyword is used to check if a specific IP option is present. The following options may be checked:

    rr      Record Route
    eol     End of list
    nop     No Op
    ts      Time Stamp
    sec     IP Security
    esec    IP Extended Security
    lsrr    Loose Source Routing
    ssrr    Strict Source Routing
    satid   Stream identifier
    any     any IP options are set

The most frequently watched for IP options are strict and loose source routing, which aren't used in any widespread internet applications.

Format

ipopts:<rr|eol|nop|ts|sec|esec|lsrr|ssrr|satid|any>;

Example

This example looks for the IP Option of Loose Source Routing:

ipopts:lsrr;

Warning

Only a single ipopts keyword may be specified per rule.

3.6.6 fragbits

The fragbits keyword is used to check if fragmentation and reserved bits are set in the IP header. The following bits may be checked:

    M   More Fragments
    D   Don't Fragment
    R   Reserved Bit

The following modifiers can be set to change the match criteria:

    +   match on the specified bits, plus any others
    *   match if any of the specified bits are set
    !   match if the specified bits are not set

Format

fragbits:[+*!]<[MDR]>;

Example

This example checks if the More Fragments bit and the Do not Fragment bit are set:

fragbits:MD+;

3.6.7 dsize

The dsize keyword is used to test the packet payload size. This may be used to check for abnormally sized packets. In many cases, it is useful for detecting buffer overflows.

Format

dsize: [<>]<number>[<><number>];

Example

This example looks for a dsize that is between 300 and 400 bytes:

dsize:300<>400;

Warning

dsize will fail on stream rebuilt packets, regardless of the size of the payload.
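As a complete-rule sketch of the dsize option above (the threshold and message here are illustrative; the manual's own example is only the option fragment), abnormally large ICMP payloads can be flagged like this:

```
alert icmp any any -> any any (msg:"Oversized ICMP payload"; dsize:>800;)
```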
3.6.8 flags

The flags keyword is used to check if specific TCP flag bits are present. The following bits may be checked:

    F   FIN (LSB in TCP Flags byte)
    S   SYN
    R   RST
    P   PSH
    A   ACK
    U   URG
    1   Reserved bit 1 (MSB in TCP Flags byte)
    2   Reserved bit 2
    0   No TCP Flags Set

The following modifiers can be set to change the match criteria:

    +   match on the specified bits, plus any others
    *   match if any of the specified bits are set
    !   match if the specified bits are not set

To handle writing rules for session initiation packets such as ECN, where a SYN packet is sent with the previously reserved bits 1 and 2 set, an option mask may be specified. A rule could check for a flags value of S,12 if one wishes to find packets with just the syn bit, regardless of the values of the reserved bits.

Format

flags:[!|*|+]<FSRPAU120>[,<FSRPAU120>];

Example

This example checks if just the SYN and the FIN bits are set, ignoring reserved bit 1 and reserved bit 2:

alert tcp any any -> any any (flags:SF,12;)

3.6.9 flow

The flow keyword is used in conjunction with TCP stream reassembly (see Section 2.2.2). It allows rules to only apply to certain directions of the traffic flow. This allows rules to only apply to clients or servers. This allows packets related to $HOME_NET clients viewing web pages to be distinguished from servers running in the $HOME_NET. The established keyword will replace the flags:A+ used in many places to show established TCP connections.

Options

    Option        Description
    to_client     Trigger on server responses from A to B
    to_server     Trigger on client requests from A to B
    from_client   Trigger on client requests from A to B
    from_server   Trigger on server responses from A to B
    established   Trigger only on established TCP connections
    stateless     Trigger regardless of the state of the stream processor
    no_stream     Do not trigger on rebuilt stream packets
    only_stream   Only trigger on rebuilt stream packets

Format

flow: [(established|stateless)] [,(to_client|to_server|from_client|from_server)] [,(no_stream|only_stream)];

Examples

alert tcp !$HOME_NET any -> $HOME_NET 21 (msg:"cd incoming detected"; \
    flow:from_client; content:"CWD incoming"; nocase;)

alert tcp !$HOME_NET 0 -> $HOME_NET 0 (msg:"Port 0 TCP traffic"; \
    flow:stateless;)

3.6.10 flowbits

The flowbits keyword is used in conjunction with conversation tracking from the Stream preprocessor (see Section 2.2.2). It allows rules to track states across transport protocol sessions. The flowbits option is most useful for TCP sessions, as it allows rules to generically track the state of an application protocol. There are seven keywords associated with flowbits. Most of the options need a user-defined name for the specific state that is being checked. This string should be limited to any alphanumeric string including periods, dashes, and underscores.

    Option     Description
    set        Sets the specified state for the current flow.
    unset      Unsets the specified state for the current flow.
    toggle     Sets the specified state if the state is unset, otherwise unsets the state if the state is set.
    isset      Checks if the specified state is set.
    isnotset   Checks if the specified state is not set.
    noalert    Cause the rule to not generate an alert, regardless of the rest of the detection options.

Format

flowbits: [set|unset|toggle|isset|reset|noalert][,<STATE_NAME>];

Examples

alert tcp any 143 -> any any (msg:"IMAP login"; content:"OK LOGIN"; \
    flowbits:set,logged_in; flowbits:noalert;)

alert tcp any any -> any 143 (msg:"IMAP LIST"; content:"LIST"; \
    flowbits:isset,logged_in;)

3.6.11 seq

The seq keyword is used to check for a specific TCP sequence number.

Format

seq:<number>;

Example

This example looks for a TCP sequence number of 0:

seq:0;

3.6.12 ack

The ack keyword is used to check for a specific TCP acknowledge number.

Format

ack: <number>;

Example

This example looks for a TCP acknowledge number of 0:

ack:0;

3.6.13 window

The window keyword is used to check for a specific TCP window size.

Format

window:[!]<number>;

Example

This example looks for a TCP window size of 55808:

window:55808;

3.6.14 itype

The itype keyword is used to check for a specific ICMP type value.

Format

itype:[<|>]<number>[<><number>];

Example

This example looks for an ICMP type greater than 30:

itype:>30;

3.6.15 icode

The icode keyword is used to check for a specific ICMP code value.

Format

icode: [<|>]<number>[<><number>];

Example

This example looks for an ICMP code greater than 30:

icode:>30;

3.6.16 icmp_id

The icmp_id keyword is used to check for a specific ICMP ID value. This is useful because some covert channel programs use static ICMP fields when they communicate. This particular plugin was developed to detect the stacheldraht DDoS agent.

Format

icmp_id:<number>;

Example

This example looks for an ICMP ID of 0:

icmp_id:0;
3.6.17 icmp_seq

The icmp_seq keyword is used to check for a specific ICMP sequence value. This is useful because some covert channel programs use static ICMP fields when they communicate. This particular plugin was developed to detect the stacheldraht DDoS agent.

Format

icmp_seq:<number>;

Example

This example looks for an ICMP Sequence of 0:

icmp_seq:0;

3.6.18 rpc

The rpc keyword is used to check for a RPC application, version, and procedure numbers in SUNRPC CALL requests. Wildcards are valid for both version and procedure numbers by using '*'.

Format

rpc: <application number>, [<version number>|*], [<procedure number>|*];

Example

The following example looks for an RPC portmap GETPORT request:

alert tcp any any -> any 111 (rpc: 100000,*,3;)

Warning

Because of the fast pattern matching engine, the RPC keyword is slower than looking for the RPC values by using normal content matching.

3.6.19 ip_proto

The ip_proto keyword allows checks against the IP protocol header. For a list of protocols that may be specified by name, see /etc/protocols.

Format

ip_proto:[!|>|<] <name or number>;

Example

This example looks for IGMP traffic:

alert ip any any -> any any (ip_proto:igmp;)

3.6.20 sameip

The sameip keyword allows rules to check if the source ip is the same as the destination IP.

Format

sameip;

Example

This example looks for any traffic where the Source IP and the Destination IP is the same:

alert ip any any -> any any (sameip;)

3.6.21 stream_size

The stream_size keyword allows a rule to match traffic according to the number of bytes observed, as determined by the TCP sequence numbers.

! △NOTE The stream_size option is only available when the Stream5 preprocessor is enabled.

Format

stream_size:<server|client|both|either>,<operator>,<number>;

Where the operator is one of the following:

    <    less than
    >    greater than
    =    equal
    !=   not equal
    <=   less than or equal
    >=   greater than or equal

Example

For example, to look for a session that is less than 6 bytes from the client side, use:

alert tcp any any -> any any (stream_size:client,<,6;)

3.6.22 Non-Payload Detection Quick Reference

Table 3.10: Non-payload detection rule option keywords

    Keyword      Description
    fragoffset   The fragoffset keyword allows one to compare the IP fragment offset field against a decimal value.
    ttl          The ttl keyword is used to check the IP time-to-live value.
    tos          The tos keyword is used to check the IP TOS field for a specific value.
    id           The id keyword is used to check the IP ID field for a specific value.
    ipopts       The ipopts keyword is used to check if a specific IP option is present.
    fragbits     The fragbits keyword is used to check if fragmentation and reserved bits are set in the IP header.
    dsize        The dsize keyword is used to test the packet payload size.
    flags        The flags keyword is used to check if specific TCP flag bits are present.
    flow         The flow keyword allows rules to only apply to certain directions of the traffic flow.
    flowbits     The flowbits keyword allows rules to track states across transport protocol sessions.
    seq          The seq keyword is used to check for a specific TCP sequence number.
    ack          The ack keyword is used to check for a specific TCP acknowledge number.
    window       The window keyword is used to check for a specific TCP window size.
    itype        The itype keyword is used to check for a specific ICMP type value.
    icode        The icode keyword is used to check for a specific ICMP code value.
    icmp_id      The icmp id keyword is used to check for a specific ICMP ID value.
    icmp_seq     The icmp seq keyword is used to check for a specific ICMP sequence value.
    rpc          The rpc keyword is used to check for a RPC application, version, and procedure numbers in SUNRPC CALL requests.
    ip_proto     The ip proto keyword allows checks against the IP protocol header.
    sameip       The sameip keyword allows rules to check if the source ip is the same as the destination IP.

3.7 Post-Detection Rule Options

3.7.1 logto

The logto keyword tells Snort to log all packets that trigger this rule to a special output log file. This is especially handy for combining data from things like NMAP activity, HTTP CGI scans, etc. It should be noted that this option does not work when Snort is in binary logging mode.

Format

logto:"filename";
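The logto section above has no example of its own; a sketch of its use (the filename and rule content are illustrative) is:

```
alert tcp any any -> any 80 (msg:"CGI probe"; content:"/cgi-bin/"; logto:"cgi_probes.log";)
```

Packets triggering this rule would be written to the named file instead of only the standard log, which is convenient when collecting all traffic for one class of activity in one place.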
or something really important .include the msg option text into the blocking visible notice • proxy <port nr> .) Warnings React functionality is not built in by default. <metric>. [direction]. type • session . The following arguments (basic modifiers) are valid for this option: • block .htm". but it is the responsibility of the output plugin to properly handle these special alerts. Example alert tcp any any <> 192. Tagged traffic is logged to allow analysis of response codes and post-attack traffic. The basic reaction is blocking interesting sites users want to access: New York Times. additional traffic involving the source and/or destination host is tagged. described in Section 2. <react_additional_modifier>]. (Note that react may now be enabled independently of flexresp and flexresp2. Format react: block[.1.5 tag The tag keyword allow rules to log more than just the single packet that triggered the rule. The notice may include your own comment. Format tag: <type>.7. tagged alerts will be sent to the same output plugins as the original alert.Log packets in the session that set off the rule • host .7. does not properly handle tagged alerts.3. The React code allows Snort to actively close offending connections and send a visible notice to the browser.use the proxy port to send the visible notice Multiple additional arguments are separated by a comma.168. Once a rule is triggered.) Be very careful when using react.close connection and send the visible notice The basic argument may be combined with the following arguments (additional modifiers): • msg . msg. 3. <count>. The react keyword should be placed as the last one in the option list.Log packets from the host that caused the tag to activate (uses [direction] modifier) count 147 .6.0/24 80 (content: "bad. you must configure with –enable-react to build it. the database output plugin.4 react This keyword implements an ability for users to react to traffic that matches a Snort rule. proxy 8000. react: block. 
• <integer> - Count is specified as a number of units. Units are specified in the <metric> field.

metric:
• packets - Tag the host/session for <count> packets
• seconds - Tag the host/session for <count> seconds
• bytes - Tag the host/session for <count> bytes

direction - only relevant if the host type is used:
• src - Tag packets containing the source IP address of the packet that generated the initial event.
• dst - Tag packets containing the destination IP address of the packet that generated the initial event.

Note that any packets that generate an alert will not be tagged. For example, it may seem that the following rule will tag the first 600 seconds of any packet involving 10.1.1.1:

alert tcp any any <> 10.1.1.1 any (tag:host,600,seconds,src;)

However, since the rule will fire on every packet involving 10.1.1.1, no packets will get tagged. The flowbits option would be useful here:

alert tcp 10.1.1.4 any -> 10.1.1.1 any \
(flowbits:isnotset,tagged; content:"TAGMYPACKETS"; \
tag:host,600,seconds,src; flowbits:set,tagged;)

Also note that if you have a tag option in a rule that uses a metric other than packets, a tagged packet limit will be used to limit the number of tagged packets regardless of whether the seconds or bytes count has been reached. The default tagged packet limit value is 256 and can be modified by using a config option in your snort.conf file (see Section 2.1.3 on how to use the tagged packet limit config option). (Note that the tagged packet limit was introduced to avoid DoS situations on high bandwidth sensors for tag rules with high seconds or bytes counts.)

Example - this example logs the first 10 seconds or the tagged packet limit (whichever comes first) of any telnet session:

alert tcp any any -> any 23 (flags:S; tag:session,10,seconds;)
You can disable this packet limit for a particular rule by adding a packets metric to your tag option and setting its count to 0 (this can be done on a global scale by setting the tagged_packet_limit option in snort.conf to 0). Doing this will ensure that packets are tagged for the full amount of seconds or bytes and will not be cut off by the tagged packet limit.

3.7.6 activates

The activates keyword allows the rule writer to specify a rule to add when a specific network event occurs. See Section 3.2.6 for more information.

Format:
activates: 1;

3.7.7 activated_by

The activated_by keyword allows the rule writer to dynamically enable a rule when a specific activate rule is triggered. See Section 3.2.6 for more information.

Format:
activated_by: 1;

3.7.8 count

The count keyword must be used in combination with the activated_by keyword. It allows the rule writer to specify how many packets to leave the rule enabled for after it is activated. See Section 3.2.6 for more information.

Format:
activated_by: 1, count: 50;

3.7.9 replace

The replace keyword is a feature available in inline mode which will cause Snort to replace the prior matching content with the given string. Both the new string and the content it is to replace must have the same length. You can have multiple replacements within a rule, one per content. See Section 1.5 for more on operating in inline mode.

Format:
replace: <string>;

3.7.10 detection_filter

detection_filter defines a rate which must be exceeded by a source or destination host before a rule can generate an event. detection_filter has the following format:

detection_filter: \
track <by_src|by_dst>, \
count <c>, seconds <s>;

• track by_src|by_dst - Rate is tracked either by source IP address or destination IP address. This means count is maintained for each unique source IP address or each unique destination IP address.
• count c - The maximum number of rule matches in s seconds allowed before the detection filter limit is exceeded. c must be nonzero.
• seconds s - Time period over which count is accrued. The value must be nonzero.

Snort evaluates a detection_filter as the last step of the detection phase, after evaluating all other rule options (regardless of the position of the filter within the rule source). At most one detection_filter is permitted per rule.
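The interaction of track, count, and seconds can be sketched in a few lines of Python (an illustration of the semantics described above, not Snort's implementation): a window of match timestamps is kept per source IP, and the rule may generate an event only once the window holds more than count matches.

```python
# Sketch of "detection_filter: track by_src, count 30, seconds 60;"
# (illustration only -- not Snort's code)
from collections import defaultdict, deque

class DetectionFilter:
    def __init__(self, count, seconds):
        self.count = count
        self.seconds = seconds
        self.events = defaultdict(deque)  # source IP -> match timestamps

    def match(self, src_ip, now):
        """Record a rule match; return True if an event should fire."""
        q = self.events[src_ip]
        q.append(now)
        # drop matches that fell outside the sampling period
        while q and now - q[0] > self.seconds:
            q.popleft()
        return len(q) > self.count

df = DetectionFilter(count=30, seconds=60)
fired = [df.match("10.1.2.100", t) for t in range(35)]  # 35 matches, 1s apart
print(fired.count(True))  # 5 -- only matches after the first 30 fire
```

This mirrors the example below: during one 60-second sampling period, nothing fires until the 30-match limit is exceeded, after which every further match generates an event.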
Example - this rule will fire on every failed login attempt from 10.1.2.100 during one sampling period of 60 seconds, after the first 30 failed login attempts:

drop tcp 10.1.2.100 any -> 10.1.1.100 22 ( \
msg:"SSH Brute Force Attempt"; \
flow:established,to_server; \
content:"SSH"; nocase; offset:0; depth:4; \
detection_filter: track by_src, count 30, seconds 60; \
sid:1000001; rev:1;)

Since potentially many events will be generated, a detection_filter would normally be used in conjunction with an event filter to reduce the number of logged events.

3.7.11 Post-Detection Quick Reference

Table 3.11: Post-detection rule option keywords

• logto - The logto keyword tells Snort to log all packets that trigger this rule to a special output log file.
• session - The session keyword is built to extract user data from TCP sessions.
• activates - This keyword allows the rule writer to specify a rule to add when a specific network event occurs.
• activated_by - This keyword allows the rule writer to dynamically enable a rule when a specific activate rule is triggered.
• count - This keyword must be used in combination with the activated_by keyword. It allows the rule writer to specify how many packets to leave the rule enabled for after it is activated.
• detection_filter - Track by source or destination IP address; if the rule otherwise matches more than the configured rate, it will fire.
• resp - The resp keyword is used to attempt to close sessions when an alert is triggered.
• react - This keyword implements an ability for users to react to traffic that matches a Snort rule by closing connection and sending a notice.
• tag - The tag keyword allows rules to log more than just the single packet that triggered the rule.
• replace - Replace the prior matching content with the given string of the same length. Available in inline mode only.

3.8 Rule Thresholds

! △NOTE Rule thresholds are deprecated and will not be supported in a future release. Use detection filters (3.7.10) within rules, or event filters (2.4.2) as standalone configurations instead.

threshold can be included as part of a rule, or you can use standalone thresholds that reference the generator and SID they are applied to. There is no functional difference between adding a threshold to a rule and using a standalone threshold applied to the same rule; there is a logical difference, however. Some rules may only make sense with a threshold, and these should incorporate the threshold into the rule. For instance, a rule for detecting too many login password attempts may require more than 5 attempts. This can be done using the 'limit' type of threshold. It makes sense that the threshold feature is an integral part of this rule.

Format:
threshold: \
type <limit|threshold|both>, \
track <by_src|by_dst>, \
count <c>, seconds <s>;

• type limit|threshold|both - Type limit alerts on the 1st m events during the time interval, then ignores events for the rest of the time interval. Type threshold alerts every m times we see this event during the time interval. Type both alerts once per time interval after seeing m occurrences of the event, then ignores any additional events during the time interval.
• track by_src|by_dst - Rate is tracked either by source IP address or destination IP address. This means count is maintained for each unique source IP address, or for each unique destination IP address. Ports or anything else are not tracked.
• count c - Number of rule matches in s seconds that will cause the event filter limit to be exceeded. c must be a nonzero value.
• seconds s - Time period over which count is accrued. s must be a nonzero value.

Examples

This rule logs the first event of this SID every 60 seconds:

alert tcp $external_net any -> $http_servers $http_ports \
(msg:"web-misc robots.txt access"; flow:to_server, established; \
uricontent:"/robots.txt"; nocase; reference:nessus,10302; \
classtype:web-application-activity; threshold: type limit, track \
by_src, count 1, seconds 60; sid:1000852; rev:1;)

This rule logs every 10th event on this SID during a 60 second interval, so if less than 10 events occur in 60 seconds, nothing gets logged. Once an event is logged, a new time period starts for type=threshold:

alert tcp $external_net any -> $http_servers $http_ports \
(msg:"web-misc robots.txt access"; flow:to_server, established; \
uricontent:"/robots.txt"; nocase; reference:nessus,10302; \
classtype:web-application-activity; threshold: type threshold, \
track by_dst, count 10, seconds 60; sid:1000852; rev:1;)

This rule logs at most one event every 60 seconds if at least 10 events on this SID are fired:

alert tcp $external_net any -> $http_servers $http_ports \
(msg:"web-misc robots.txt access"; flow:to_server, established; \
uricontent:"/robots.txt"; nocase; reference:nessus,10302; \
classtype:web-application-activity; threshold: type both, track \
by_dst, count 10, seconds 60; sid:1000852; rev:1;)
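The behavioral difference between the three threshold types can be sketched in a few lines of Python (an illustration of the semantics described above for a single tracked address, not Snort's implementation):

```python
# Sketch of threshold type limit / threshold / both (illustration only).
class Threshold:
    def __init__(self, ttype, m, seconds):
        self.ttype, self.m, self.seconds = ttype, m, seconds
        self.start = None   # start of the current time interval
        self.n = 0          # events seen in the current interval

    def event(self, now):
        """Return True if this event should be logged."""
        if self.start is None or now - self.start >= self.seconds:
            self.start, self.n = now, 0      # new time interval
        self.n += 1
        if self.ttype == "limit":            # log the 1st m events per interval
            return self.n <= self.m
        if self.ttype == "threshold":        # log every m-th event
            if self.n == self.m:
                self.start = None            # new period starts once logged
                return True
            return False
        if self.ttype == "both":             # log once per interval at the m-th event
            return self.n == self.m

th = Threshold("threshold", 10, 60)
print(sum(th.event(t) for t in range(30)))  # 3: one log per 10 matches
```

With the same 30 matches one second apart, type limit (m=1) logs only the first match, and type both (m=10) logs exactly once for the interval.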
3.9 Writing Good Rules

There are some general concepts to keep in mind when developing Snort rules to maximize efficiency and speed.

3.9.1 Content Matching

The 2.0 detection engine changes the way Snort works slightly by having the first phase be a setwise pattern match. Rules without content (or uricontent) slow the entire system down. While some detection options, such as pcre and byte_test, perform detection in the payload section of the packet, they do not use the setwise pattern matching engine. If at all possible, try to have at least one content option in a rule; the longer a content option is, the more exact the match.

3.9.2 Catch the Vulnerability, Not the Exploit

Try to write rules that target the vulnerability, instead of a specific exploit. For example, look for the vulnerable command with an argument that is too large, instead of shellcode that binds a shell. By writing rules for the vulnerability, the rule is less vulnerable to evasion when an attacker changes the exploit slightly.

3.9.3 Catch the Oddities of the Protocol in the Rule

Many services typically send the commands in upper case letters. FTP is a good example. In FTP, to send the username, the client sends:

user username_here

A simple rule to look for FTP root login attempts could be:

alert tcp any any -> any 21 (content:"user root";)

While it may seem trivial to write a rule that looks for the username root, the rule needs more smarts than a simple string match. For example, each of the following is accepted by most FTP servers (varying case, extra spaces, or a tab between the command and the username):

user root
user  root
USER root
user<tab>root

To handle all of the cases that the FTP server might handle, a good rule will handle all of the odd things that the protocol might handle when accepting the user command. A good rule that looks for root login on FTP would be:

alert tcp any any -> any 21 (flow:to_server,established; \
content:"root"; pcre:"/user\s+root/i";)

There are a few important things to note in this rule:

• The rule has a flow option, verifying this is traffic going to the server on an established session.
• The rule has a content option, looking for root, which is the longest, most unique string in the attack. This option is added to allow Snort's setwise pattern match detection engine to give Snort a boost in speed.
• The rule has a pcre option, looking for user, followed by at least one space character (which includes tab), followed by root, ignoring case.
3.9.4 Optimizing Rules

The content matching portion of the detection engine has recursion to handle a few evasion cases. Rules that are not properly written can cause Snort to waste time duplicating checks. The way the recursion works now is: if a pattern matches, and if any of the detection options after that pattern fail, then look for the pattern again after where it was found the previous time. Repeat until the pattern is not found again or the option functions all succeed.

On first read, that may not sound like a smart idea, but it is needed. For example, take the following rule:

alert ip any any -> any any (content:"a"; content:"b"; within:1;)

This rule would look for "a", immediately followed by "b". Without recursion, the payload "aab" would fail, because the first "a" is not immediately followed by "b", even though it is obvious that the payload "aab" has "a" immediately followed by "b".

While recursion is important for detection, the recursion implementation is not very smart. For example, the following rule options are not optimized:

content:"|13|"; dsize:1;

By looking at this rule snippet, it is obvious the rule looks for a packet with a single byte of 0x13. However, because of recursion, a packet with 1024 bytes of 0x13 could cause 1023 too many pattern match attempts and 1023 too many dsize checks. Why? The content 0x13 would be found in the first byte, then the dsize option would fail; because of recursion, the content 0x13 would be found again starting after where the previous 0x13 was found, then the dsize would be checked again, repeating until 0x13 is not found in the payload again. Reordering the rule options so that discrete checks (such as dsize) are moved to the beginning of the rule speeds up Snort. The optimized rule snippet would be:

dsize:1; content:"|13|";

A packet of 1024 bytes of 0x13 would now fail immediately, as the dsize check is the first option checked and dsize is a discrete check without recursion.
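The recursive re-search just described can be sketched in Python (an illustration of the evaluation strategy, not the actual detection engine):

```python
# Sketch (not Snort's engine) of recursive content matching: if the
# options after a matched pattern fail, search for the pattern again
# after the previous match and retry.
def match_with_recursion(payload: bytes, pattern: bytes, following_opts) -> bool:
    pos = payload.find(pattern)
    while pos != -1:
        cursor = pos + len(pattern)
        if all(opt(payload, cursor) for opt in following_opts):
            return True
        pos = payload.find(pattern, pos + 1)  # retry after previous match
    return False

# content:"a"; content:"b"; within:1;  -- "b" must immediately follow "a"
b_within_1 = lambda payload, cursor: payload[cursor:cursor + 1] == b"b"
print(match_with_recursion(b"aab", b"a", [b_within_1]))  # True, thanks to recursion
```

Without the `payload.find(pattern, pos + 1)` retry, the loop would give up after the first "a" and wrongly report no match for "aab" — exactly the evasion case the text describes.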
Other options that are discrete checks in the same way include seq, session, tos, ttl, ack, window, resp, and sameip.

3.9.5 Testing Numerical Values

The rule options byte_test and byte_jump were written to support writing rules for protocols that have length encoded data. RPC was the protocol that spawned the requirement for these two rule options, as RPC uses simple length based encoding for passing data. In order to understand why byte_test and byte_jump are useful, let's go through an exploit attempt against the sadmind service, and figure out how to write a rule to catch this exploit.

There are a few things to note with RPC:

• Numbers are written as uint32s, taking four bytes.
• Strings are written as a uint32 specifying the length of the string, the string, and then null bytes to pad the length of the string to end on a 4 byte boundary. The string "bob" would show up as 0x00000003626f6200.

The hex dump of the exploit packet (trimmed here; its payload also contains strings such as "metasploit", "system", and "/bin/sh") begins with the RPC call header:

89 09 9c e2 - the rpc request id, a random uint32, unique to each request
00 00 00 00 - the rpc type (call = 0, response = 1)
00 00 00 02 - the rpc version (2)
00 01 87 88 - the rpc program (0x00018788 = 100232 = sadmind)

The header is followed by the program version, the procedure number (1), and the credentials.
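The string encoding just described can be verified with a few lines of Python (an illustration, not part of Snort): a big-endian uint32 length, the string bytes, then null padding out to a 4-byte boundary.

```python
# Illustration of RPC's length-based string encoding described above.
import struct

def rpc_string(s: bytes) -> bytes:
    pad = (4 - len(s) % 4) % 4          # null bytes to reach a 4-byte boundary
    return struct.pack(">I", len(s)) + s + b"\x00" * pad

print(rpc_string(b"bob").hex())  # 00000003626f6200 -- matches the example above
```

This padding is exactly why byte_jump needs its align modifier when skipping over an RPC string: the jump distance is the string length, but the next field starts at the padded boundary.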
Continuing the decode, the auth_unix credentials portion of the request is:

40 28 3a 10 - the unix timestamp (0x40283a10 = 1076378128 = feb 10 01:55:28 2004 gmt)
00 00 00 0a - the length of the client machine name (0x0a = 10)
4d 45 54 41 53 50 4c 4f 49 54 00 00 - the machine name "METASPLOIT", null-padded to a 4 byte boundary
00 00 00 00 - the uid of the requesting user (0)
00 00 00 00 - the gid of the requesting user (0)
00 00 00 00 - the extra group ids (0)
00 00 00 00 - the length of the verifier (0, aka none)
00 00 00 00 - the verifier flavor (0 = auth_null, aka none)

The rest of the packet is the request that gets passed to procedure 1 of sadmind. However, we know the vulnerability is that sadmind trusts the uid coming from the client: sadmind runs any request where the client's uid is 0 as root. As such, we have decoded enough of the request to write our rule.

First, we need to make sure that our packet is an RPC call:

content:"|00 00 00 00|"; offset:4; depth:4;

Then, we need to make sure that our packet is a call to sadmind:

content:"|00 01 87 88|"; offset:12; depth:4;

Then, we need to make sure that our packet is a call to the procedure 1, the vulnerable procedure:

content:"|00 00 00 01|"; offset:16; depth:4;

Then, we need to make sure that our packet has auth_unix credentials:

content:"|00 00 00 01|"; offset:20; depth:4;

We don't care about the hostname, but we want to skip over it and check a number value after the hostname. This is where byte_jump is useful. Starting at the length of the hostname, which is 36 bytes from the beginning of the packet, we want to read 4 bytes, turn those 4 bytes into an integer, and jump that many bytes forward, making sure to account for the padding that RPC requires on strings (aligning on the 4 byte boundary). In Snort, we do:

byte_jump:4,36,align;

If we do that, we are now at:

00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

which happens to be the exact location of the uid, the value we want to check. Then we want to look for the uid of 0:

content:"|00 00 00 00|"; within:4;
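The cursor arithmetic behind byte_jump:4,36,align; and the byte_test:4,>,200,36; bounds check can be sketched in Python (an illustration of the semantics, not Snort's implementation):

```python
# Sketch of byte_jump / byte_test cursor arithmetic (illustration only).
import struct

def byte_jump(payload: bytes, num_bytes: int, offset: int, align: bool = True) -> int:
    value = int.from_bytes(payload[offset:offset + num_bytes], "big")
    cursor = offset + num_bytes + value       # skip the length field and the string
    if align:
        cursor += (4 - cursor % 4) % 4        # round up to the 4-byte boundary
    return cursor  # detection continues here (e.g. content:"|00 00 00 00|"; within:4;)

def byte_test(payload: bytes, num_bytes: int, bound: int, offset: int) -> bool:
    return int.from_bytes(payload[offset:offset + num_bytes], "big") > bound

# 36 bytes of RPC header, then a 10-byte hostname length, "METASPLOIT" + padding
packet = b"\x00" * 36 + struct.pack(">I", 10) + b"METASPLOIT\x00\x00" + b"\x00" * 16
print(byte_jump(packet, 4, 36))       # 52: the start of the uid field
print(byte_test(packet, 4, 200, 36))  # False: hostname length is not > 200
```

Jumping from offset 36 over the 4-byte length field plus a 10-byte name lands at 50; aligning to the 4-byte boundary yields 52, the uid field — which is exactly what the rule's within:4 check then inspects.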
We end up with:

content:"|00 00 00 00|"; offset:4; depth:4;
content:"|00 01 87 88|"; offset:12; depth:4;
content:"|00 00 00 01|"; offset:16; depth:4;
content:"|00 00 00 01|"; offset:20; depth:4;
byte_jump:4,36,align;
content:"|00 00 00 00|"; within:4;

The third and fourth string matches are right next to each other, so we should combine those patterns:

content:"|00 00 00 01 00 00 00 01|"; offset:16; depth:8;

If the sadmind service was vulnerable to a buffer overflow when reading the client's hostname, instead of reading the length of the hostname and jumping that many bytes forward, we would check the length of the hostname to make sure it is not too large. To do that, we would read 4 bytes, starting 36 bytes into the packet, turn it into a number, and then make sure it is not too large (let's say bigger than 200 bytes). In Snort, we do:

byte_test:4,>,200,36;

Now that we have all the detection capabilities for our rule, let's put them all together. Our full rule would be:

content:"|00 00 00 00|"; offset:4; depth:4;
content:"|00 01 87 88|"; offset:12; depth:4;
content:"|00 00 00 01 00 00 00 01|"; offset:16; depth:8;
byte_jump:4,36,align;
content:"|00 00 00 00|"; within:4;

Chapter 4: Making Snort Faster

4.1 MMAPed pcap

On Linux, a modified version of libpcap is available that implements a shared memory ring buffer. Phil Woods (cpw@lanl.gov) is the current maintainer of this libpcap implementation, and the shared memory ring buffer libpcap can be downloaded from his website.

Instead of the normal mechanism of copying the packets from kernel memory into userland memory, libpcap is able, by using a shared memory ring buffer, to queue packets into a shared buffer that Snort reads directly. This change speeds up Snort by limiting the number of times the packet is copied before Snort gets to perform its detection upon it.

Once Snort is linked against the shared memory libpcap, the ring buffer is enabled by setting the environment variable PCAP_FRAMES. PCAP_FRAMES is the size of the ring buffer. According to Phil, the maximum size is 32768, as this appears to be the maximum number of iovecs the kernel can handle. By using PCAP_FRAMES=max, libpcap will automatically use the most frames possible. On Ethernet, this ends up being 1530 bytes per frame, for a total of around 52 Mbytes of memory for the ring buffer alone.
Chapter 5: Dynamic Modules

Preprocessors, detection engines, and rules can now be developed as dynamically loadable modules to Snort. When enabled via the --enable-dynamicplugin configure option, the dynamic API presents a means for loading dynamic libraries and allowing the module to utilize certain functions within the main Snort code. The remainder of this chapter will highlight the data structures and API functions used in developing preprocessors, detection engines, and rules as a dynamic plugin to Snort. Beware: the definitions herein may be out of date; check the appropriate header files for the current definitions.

5.1 Data Structures

A number of data structures are central to the API. The definition of each is given in the following sections.

5.1.1 DynamicPluginMeta

The DynamicPluginMeta structure defines the type of dynamic module (preprocessor, rules, or detection engine), the version information, and the path to the shared library. A shared library can implement all three types, but typically is limited to a single functionality such as a preprocessor. It is defined in sf_dynamic_meta.h as:

#define MAX_NAME_LEN 1024
#define TYPE_ENGINE 0x01
#define TYPE_DETECTION 0x02
#define TYPE_PREPROCESSOR 0x04

typedef struct _DynamicPluginMeta
{
    int type;
    int major;
    int minor;
    int build;
    char uniqueName[MAX_NAME_LEN];
    char *libraryPath;
} DynamicPluginMeta;

5.1.2 DynamicPreprocessorData

The DynamicPreprocessorData structure defines the interface the preprocessor uses to interact with Snort itself. This includes functions to register the preprocessor's configuration parsing, restart, exit, and processing functions. It includes additional callbacks as well,
and it provides access to the normalized http and alternate data buffers.4 SFSnortPacket The SFSnortPacket structure mirrors the snort Packet structure and provides access to all of the data contained in a given packet. It is defined in sf dynamic preprocessor. PCREExecFunc pcreExec. GetPreprocRuleOptFuncs getPreprocOptFuncs.1. RegisterRule ruleRegister. UriInfo *uriBuffers[MAX_URIINFOS]. char *dataDumpDirectory. CheckFlowbit flowbitCheck. RegisterBit flowbitRegister. It is defined in sf dynamic engine.3 DynamicEngineData The DynamicEngineData structure defines the interface a detection engine uses to interact with snort itself. SetRuleData setRuleData. Check the header file for the current definitions. PCRECompileFunc pcreCompile. } DynamicEngineData. access to the StreamAPI. DebugMsgFunc debugMsg.h.h. LogMsgFunc logMsg. It and the data structures it incorporates are defined in sf snort packet. GetRuleData getRuleData.1. u_int8_t *altBuffer. and it provides access to the normalized http and alternate data buffers. 5. errors.function to log messages. LogMsgFunc errMsg. handling Inline drops. Additional data structures may be defined to reference other protocol fields. LogMsgFunc fatalMsg. and debugging info. errors. Check the header file for the current definition. #ifdef HAVE_WCHAR_H DebugWideMsgFunc debugWideMsg. 159 . fatal errors. and debugging info as well as a means to register and check flowbits. fatal errors. This data structure should be initialized when the preprocessor shared library is loaded. #endif char **debugMsgFile. void *ruleData. /* NULL terminated array of references */ RuleMetaData **meta. char noAlert. message text. char initialized.5 Dynamic Rules A dynamic rule should use any of the following data structures. typedef struct _RuleInformation { u_int32_t genID.1. /* Rule Initialized. and a list of references). 160 . classification. char *message. That includes protocol.5. used internally */ /* Flag with no alert. RuleReference **references. 
generator and signature IDs. /* String format of classification name */ u_int32_t priority. and a list of references. u_int32_t numOptions. u_int32_t revision. signature ID. revision. Rule The Rule structure defines the basic outline of a rule and contains the same set of information that is seen in a text rule. classification. It also includes a list of rule options and an optional evaluation function. used internally */ /* Rule option count. char *classification. address and port information and rule information (classification. #define RULE_MATCH 1 #define RULE_NOMATCH 0 typedef struct _Rule { IPInfo ip. revision. where the parameter is a pointer to the SFSnortPacket structure. The following structures are defined in sf snort plugin api. priority. used internally */ Hash table for dynamic data pointers */ The rule evaluation function is defined as typedef int (*ruleEvalFunc)(void *). RuleOption **options. RuleInformation info. /* NULL terminated array of RuleOption union */ ruleEvalFunc evalFunc. priority. RuleInformation The RuleInformation structure defines the meta data for a rule and includes generator ID. /* } Rule. /* NULL terminated array of references */ } RuleInformation. u_int32_t sigID.h. including the system name and rereference identifier. typedef struct _IPInfo { u_int8_t protocol. char * dst_port. char * src_addr. OPTION_TYPE_ASN1. Some of the standard strings and variables are predefined . src address and port. /* 0 for non TCP/UDP */ char direction. IPInfo The IPInfo structure defines the initial matching criteria for a rule and includes the protocol. Each option has a flags field that contains specific flags for that option as well as a ”Not” flag. and direction. OPTION_TYPE_LOOP. OPTION_TYPE_SET_CURSOR. } RuleReference. . OPTION_TYPE_PCRE. OPTION_TYPE_BYTE_EXTRACT. OPTION_TYPE_CONTENT. HOME NET. OPTION_TYPE_FLOWFLAGS. destination address and port. char * src_port. /* non-zero is bi-directional */ char * dst_addr. OPTION_TYPE_BYTE_TEST. 
The ”Not” flag is used to negate the results of evaluating that option. typedef enum DynamicOptionType { OPTION_TYPE_PREPROCESSOR. typedef struct _RuleReference { char *systemName. HTTP PORTS. etc. HTTP SERVERS. OPTION_TYPE_MAX 161 . /* 0 for non TCP/UDP */ } IPInfo.RuleReference The RuleReference structure defines a single rule reference. OPTION_TYPE_HDR_CHECK. OPTION_TYPE_FLOWBIT. OPTION_TYPE_BYTE_JUMP.any. OPTION_TYPE_CURSOR. char *refIdentifier. Asn1Context *asn1.}. u_int32_t flags. } RuleOption. FlowFlags *flowFlags. #define NOT_FLAG 0x10000000 Some options also contain information that is initialized at run time. Additional flags include nocase. The most unique content. The option types and related structures are listed below. . such as the compiled PCRE information. ContentInfo *content. int32_t offset. In the dynamic detection engine provided with Snort. BoyerMoore content information. /* must include a CONTENT_BUF_X */ void *boyer_ptr. } ContentInfo. the one with the longest content length will be used. etc. typedef struct _RuleOption { int optionType. typedef struct _ContentInfo { u_int8_t *pattern. } option_u. CursorInfo *cursor. depth and offset. relative. u_int8_t *patternByteForm. if no ContentInfo structure in a given rules uses that flag. It includes the pattern. • OptionType: Content & Structure: ContentInfo The ContentInfo structure defines an option for a content search. the integer ID for a flowbit. and flags (one of which must specify the buffer – raw. unicode. that which distinguishes this rule as a possible match to a packet. u_int32_t incrementLength. LoopInfo *loop. ByteExtract *byteExtract. union { void *ptr. FlowBitsInfo *flowBit. URI or normalized – to search). PreprocessorOption *preprocOpt. u_int32_t patternByteFormLength. and a designation that this content is to be used for snorts fast pattern evaluation. u_int32_t depth. ByteData *byte. HdrOptCheck *hdrData. PCREInfo *pcre. isset. u_int32_t id.. 
/* must include a CONTENT_BUF_X */ } PCREInfo. } FlowBitsInfo. u_int32_t compile_flags. u_int8_t operation. • OptionType: Flowbit & Structure: FlowBitsInfo The FlowBitsInfo structure defines a flowbits option. unset. void *compiled_extra. u_int32_t flags. • OptionType: Flow Flags & Structure: FlowFlags The FlowFlags structure defines a flow option. It includes the PCRE expression.h. as defined in PCRE. void *compiled_expr. u_int32_t flags. which specify the direction (from server. toggle. to server).#define CONTENT_BUF_RAW #define CONTENT_BUF_URI 0x200 0x400 • OptionType: PCRE & Structure: PCREInfo The PCREInfo structure defines an option for a PCRE search. #define FLOW_ESTABLISHED 0x10 #define FLOW_IGNORE_REASSEMBLED 0x1000 #define FLOW_ONLY_REASSMBLED 0x2000 163 . /* pcre. established session. It includes the flags. pcre flags such as caseless. etc. isnotset). and flags to specify the buffer. } FlowFlags. This can be used to verify there is sufficient data to continue evaluation. int offset_type. /* specify one of CONTENT_BUF_X */ • OptionType: Protocol Header & Structure: HdrOptCheck The HdrOptCheck structure defines an option to check a protocol header for a specific value.etc). • OptionType: ASN. int length. and flags. -.is option xx included */ IP Time to live */ IP Type of Service */ #define TCP_HDR_ACK /* TCP Ack Value */ 164 . It mirrors the ASN1 rule option and also includes a flags field. u_int32_t flags. int double_overflow. as related to content and PCRE searches. as well as byte tests and byte jumps. a value. typedef struct _CursorInfo { int32_t offset. unsigned int max_length.=. The cursor is the current position within the evaluation buffer. the operation (¡. It includes an offset and flags that specify the buffer.#define #define #define #define FLOW_FR_SERVER FLOW_TO_CLIENT FLOW_TO_SERVER FLOW_FR_CLIENT 0x40 0x40 /* Just for redundancy */ 0x80 0x80 /* Just for redundancy */ typedef struct _FlowFlags { u_int32_t flags. int print. } CursorInfo. 
• OptionType: Cursor Check & Structure: CursorInfo The CursorInfo structure defines an option for a cursor evaluation.1 & Structure: Asn1Context The Asn1Context structure defines the information for an ASN1 option. It incldues the header field. similar to the isdataat rule option. a mask to ignore that part of the header field. u_int32_t flags. int offset. #define ASN1_ABS_OFFSET 1 #define ASN1_REL_OFFSET 2 typedef struct _Asn1Context { int bs_overflow. } Asn1Context.¿. /* u_int32_t value. The flags must specify the buffer. int32_t offset. for checkValue. -. • OptionType: Set Cursor & Structure: CursorInfo See Cursor Check above. u_int32_t multiplier. /* u_int32_t mask_value op. and increment values as well as the comparison operation for 165 . or extracted value */ Offset from cursor */ Used for byte jump -.=. u_int32_t value.etc). /* u_int32_t flags. /* u_int32_t op. for checkValue */ Value to compare value against. It includes the number of bytes. an offset.DynamicElement The LoopInfo structure defines the information for a set of options that are to be evaluated repeatedly.32bits is MORE than enough */ must include a CONTENT_BUF_X */ • OptionType: Byte Jump & Structure: ByteData See Byte Test above. The loop option acts like a FOR loop and includes start. } ByteData. and flags. Field to check */ Type of comparison */ Value to compare value against */ bits of value to ignore */ • OptionType: Byte Test & Structure: ByteData The ByteData structure defines the information for both ByteTest and ByteJump operations.¿. /* /* /* /* /* /* Number of bytes to extract */ Type of byte comparison. ¡. multiplier.ByteExtract. end. a value. u_int32_t flags. } HdrOptCheck. • OptionType: Loop & Structures: LoopInfo. /* u_int32_t multiplier. DynamicElement *increment. the value is filled by a related ByteExtract option that is part of the loop.. /* Value of static */ int32_t *dynamicInt. /* char *refId. /* reference ID (NULL if static) */ union { void *voidPtr. u_int32_t flags. 
For a dynamic element. typedef struct _LoopInfo { DynamicElement *start. One of those options may be a ByteExtract. /* /* /* /* /* /* * /* /*. /* Pointer to value of dynamic */ } data. /* int32_t offset. /* u_int32_t flags. DynamicElement *end. flags specifying the buffer. multiplier. an offset. 166 .termination. /* } ByteExtract. specifies The ByteExtract structure defines the information to use when extracting bytes for a DynamicElement used a in Loop evaltion. and a reference to the DynamicElement. struct _Rule *subRule. 5. typedef struct _ByteExtract { u_int32_t bytes. } DynamicElement. #define DYNAMIC_TYPE_INT_STATIC 1 #define DYNAMIC_TYPE_INT_REF 2 typedef struct _DynamicElement { char dynamicType. u_int8_t initialized. a reference to a RuleInfo structure that defines the RuleOptions are to be evaluated through each iteration. /* type of this field . } LoopInfo. /* void *memoryLocation.static or reference */ char *refId.2 Required Functions Each dynamic module must define a set of functions and data objects to work within this framework. It includes the number of bytes. It includes a cursor adjust that happens through each iteration of the loop. u_int32_t op. It includes whether the element is static (an integer) or dynamic (extracted from a buffer in the packet) and the value. /* Holder */ int32_t staticInt. • int RegisterRules(Rule **) This is the function to iterate through each rule in the list. Cursor position is updated and returned in *cursor. PCRE evalution data. • int DumpRules(char *. FlowFlags *flowflags) This function evaluates the flow for a given packet. initialize it to setup content searches. • int InitializeEngineLib(DynamicEngineData *) This function initializes the data structure for use by the engine. Value extracted is stored in ByteExtract memoryLocation paraneter. etc). and the distance option corresponds to offset. • int LibVersion(DynamicPluginMeta *) This function returns the metadata for the shared library. 
5.2.1 Preprocessors

Each dynamic preprocessor library must define the following functions. These are defined in the file sf_dynamic_preproc_lib.c. The metadata and setup function for the preprocessor should be defined in sf_preproc_info.h.

• int LibVersion(DynamicPluginMeta *)
This function returns the metadata for the shared library.

• int InitializePreprocessor(DynamicPreprocessorData *)
This function initializes the data structure for use by the preprocessor into a library global variable, _dpd, and invokes the setup function.

5.2.2 Detection Engine

Each dynamic detection engine library must define the following functions. The sample code provided with Snort predefines those functions and defines the following APIs to be used by a dynamic rules library.

• int LibVersion(DynamicPluginMeta *)
This function returns the metadata for the shared library.

• int InitializeEngineLib(DynamicEngineData *)
This function initializes the data structure for use by the engine.

• int RegisterRules(Rule **)
This is the function to iterate through each rule in the list, initialize it to setup content searches, PCRE evaluation data, and register flowbits.

• int DumpRules(char *, Rule **)
This is the function to iterate through each rule in the list and write a rule-stub to be used by snort to control the action of the rule (alert, log, drop, etc.).

• int ruleMatch(void *p, Rule *rule)
This is the function to evaluate a rule if the rule does not have its own Rule Evaluation Function. This uses the individual functions outlined below for each of the rule options and handles repetitive content issues.

Each of the functions below returns RULE_MATCH if the option matches based on the current criteria (cursor position, etc.).

– int contentMatch(void *p, ContentInfo *content, u_int8_t **cursor)
This function evaluates a single content for a given packet, checking for the existence of that content as delimited by ContentInfo and cursor. Cursor position is updated and returned in *cursor. With a text rule, the within option corresponds to depth, and the distance option corresponds to offset.

– int checkFlow(void *p, FlowFlags *flowflags)
This function evaluates the flow for a given packet.

– int processFlowbits(void *p, FlowBitsInfo *flowbits)
This function evaluates the flowbits for a given packet, as specified by FlowBitsInfo. It will interact with flowbits used by text-based rules.

– int extractValue(void *p, ByteExtract *byteExtract, u_int8_t *cursor)
This function extracts the bytes from a given packet, as specified by ByteExtract and delimited by cursor. Value extracted is stored in the ByteExtract memoryLocation parameter.

– int setCursor(void *p, CursorInfo *cursorInfo, u_int8_t **cursor)
This function adjusts the cursor as delimited by CursorInfo. New cursor position is returned in *cursor. It handles bounds checking for the specified buffer and returns RULE_NOMATCH if the cursor is moved out of bounds. It is also used by contentMatch, byteJump, and pcreMatch to adjust the cursor position after a successful match.

– int checkCursor(void *p, CursorInfo *cursorInfo, u_int8_t *cursor)
This function validates that the cursor is within bounds of the specified buffer.

– int checkValue(void *p, ByteData *byteData, u_int32_t value, u_int8_t *cursor)
This function compares the value to the value stored in ByteData.

– int byteTest(void *p, ByteData *byteData, u_int8_t *cursor)
This is a wrapper for extractValue() followed by checkValue().

– int byteJump(void *p, ByteData *byteData, u_int8_t **cursor)
This is a wrapper for extractValue() followed by setCursor().

– int pcreMatch(void *p, PCREInfo *pcre, u_int8_t **cursor)
This function evaluates a single pcre for a given packet, checking for the existence of the expression as delimited by PCREInfo and cursor. Cursor position is updated and returned in *cursor.

– int detectAsn1(void *p, Asn1Context *asn1, u_int8_t *cursor)
This function evaluates an ASN.1 check for a given packet, as delimited by Asn1Context and cursor.

– int checkHdrOpt(void *p, HdrOptCheck *optData)
This function evaluates the given packet's protocol headers, as specified by HdrOptCheck.

– int loopEval(void *p, LoopInfo *loop, u_int8_t **cursor)
This function iterates through the SubRule of LoopInfo, as delimited by LoopInfo and cursor. Cursor position is updated and returned in *cursor.

– int preprocOptionEval(void *p, PreprocessorOption *preprocOpt, u_int8_t **cursor)
This function evaluates the preprocessor defined option, as specified by PreprocessorOption. Cursor position is updated and returned in *cursor.

– void setTempCursor(u_int8_t **temp_cursor, u_int8_t **cursor)
This function is used to handle repetitive contents, to save off a cursor position temporarily to be reset at a later point.

– void revertTempCursor(u_int8_t **temp_cursor, u_int8_t **cursor)
This function is used to revert to a previously saved temporary cursor position.

NOTE: If you decide to write your own rule evaluation function, patterns that occur more than once may result in false negatives. Take extra care to handle this situation and search for the matched pattern again if subsequent rule options fail to match. This should be done for both content and PCRE options.

5.2.3 Rules

Each dynamic rules library must define the following functions. Examples are defined in the file sfsnort_dynamic_detection_lib.c. The metadata and setup function should be defined in sfsnort_dynamic_detection_lib.h.

• int LibVersion(DynamicPluginMeta *)
This function returns the metadata for the shared library.

• int EngineVersion(DynamicPluginMeta *)
This function defines the version requirements for the corresponding detection engine library.

• int DumpSkeletonRules()
This function writes out the rule-stubs for rules that are loaded.

• int InitializeDetection()
This function registers each rule in the rules library. It should set up fast pattern-matcher content, register flowbits, etc.

The sample code provided with Snort predefines those functions and uses the following data within the dynamic rules library:

• Rule *rules[]
A NULL terminated list of Rule structures that this library defines.
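The wrapper relationship described above for byteTest() (extractValue() followed by checkValue()) can be sketched in isolation. The structures, signatures, and status codes below are simplified stand-ins for illustration, not the real API: the real functions also receive the packet, honor the buffer flags, and perform bounds checking, all of which are omitted here.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical status codes and operators standing in for the real ones. */
#define RULE_MATCH   0
#define RULE_NOMATCH 1
#define CHECK_EQ 0
#define CHECK_GT 1

/* Simplified stand-in for Snort's ByteData. */
typedef struct {
    uint32_t bytes;   /* number of bytes to extract (1..4) */
    uint32_t op;      /* comparison operator for checkValue */
    uint32_t value;   /* value to compare the extracted value against */
    int32_t  offset;  /* offset from the cursor */
} ByteData;

/* Extract `bytes` big-endian bytes starting at cursor + offset. */
static uint32_t extractValue(const uint8_t *cursor, const ByteData *bd)
{
    uint32_t v = 0;
    for (uint32_t i = 0; i < bd->bytes; i++)
        v = (v << 8) | cursor[bd->offset + i];
    return v;
}

/* Compare the extracted value as directed by ByteData.op. */
static int checkValue(uint32_t extracted, const ByteData *bd)
{
    switch (bd->op) {
    case CHECK_EQ: return extracted == bd->value ? RULE_MATCH : RULE_NOMATCH;
    case CHECK_GT: return extracted >  bd->value ? RULE_MATCH : RULE_NOMATCH;
    default:       return RULE_NOMATCH;
    }
}

/* byteTest is just extractValue() followed by checkValue(). */
int byteTest(const uint8_t *cursor, const ByteData *bd)
{
    return checkValue(extractValue(cursor, bd), bd);
}
```

byteJump has the same shape, except the extracted value feeds setCursor() instead of checkValue().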
5.3 Examples

This section provides a simple example of a dynamic preprocessor and a dynamic rule.

5.3.1 Preprocessor Example

The following is an example of a simple preprocessor. This preprocessor always alerts on a Packet if the TCP port matches the one configured. This assumes that the files sf_dynamic_preproc_lib.c and sf_dynamic_preproc_lib.h are used.

This is the metadata for this preprocessor, defined in sf_preproc_info.h.

#define MAJOR_VERSION 1
#define MINOR_VERSION 0
#define BUILD_VERSION 0
#define PREPROC_NAME "SF_Dynamic_Example_Preprocessor"
#define DYNAMIC_PREPROC_SETUP ExampleSetup
extern void ExampleSetup();

The remainder of the code is defined in spp_example.c and is compiled together with sf_dynamic_preproc_lib.c into lib_sfdynamic_preprocessor_example.so.

Define the Setup function to register the initialization function.

#define GENERATOR_EXAMPLE 256
extern DynamicPreprocessorData _dpd;
void ExampleInit(unsigned char *);
void ExampleProcess(void *, void *);

void ExampleSetup()
{
    _dpd.registerPreproc("dynamic_example", ExampleInit);
    DEBUG_WRAP(_dpd.debugMsg(DEBUG_PLUGIN, "Preprocessor: Example is setup\n"););
}

The initialization function to parse the keywords from snort.conf.

u_int16_t portToCheck;

void ExampleInit(unsigned char *args)
{
    char *arg;
    char *argEnd;
    unsigned long port;

    _dpd.logMsg("Example dynamic preprocessor configuration\n");

    arg = strtok(args, " \t\n\r");
    if (!strcasecmp("port", arg))
    {
        arg = strtok(NULL, "\t\n\r");
        if (!arg)
        {
            _dpd.fatalMsg("ExamplePreproc: Missing port\n");
        }
        port = strtoul(arg, &argEnd, 10);
        if (port < 0 || port > 65535)
        {
            _dpd.fatalMsg("ExamplePreproc: Invalid port %d\n", port);
        }
        portToCheck = port;
        _dpd.logMsg("    Port: %d\n", portToCheck);
    }
    else
    {
        _dpd.fatalMsg("ExamplePreproc: Invalid option %s\n", arg);
    }

    /* Register the preprocessor function, Transport layer, ID 10000 */
    _dpd.addPreproc(ExampleProcess, PRIORITY_TRANSPORT, 10000);
    DEBUG_WRAP(_dpd.debugMsg(DEBUG_PLUGIN, "Preprocessor: Example is initialized\n"););
}

The function to process the packet and log an alert if either port matches.

#define SRC_PORT_MATCH 1
#define SRC_PORT_MATCH_STR "example_preprocessor: src port match"
#define DST_PORT_MATCH 2
#define DST_PORT_MATCH_STR "example_preprocessor: dest port match"

void ExampleProcess(void *pkt, void *context)
{
    SFSnortPacket *p = (SFSnortPacket *)pkt;

    if (!p->ip4_header || p->ip4_header->proto != IPPROTO_TCP || !p->tcp_header)
    {
        /* Not for me, return */
        return;
    }

    if (p->src_port == portToCheck)
    {
        /* Source port matched, log alert */
        _dpd.alertAdd(GENERATOR_EXAMPLE, SRC_PORT_MATCH,
                      1, 0, 3, SRC_PORT_MATCH_STR, 0);
        return;
    }

    if (p->dst_port == portToCheck)
    {
        /* Destination port matched, log alert */
        _dpd.alertAdd(GENERATOR_EXAMPLE, DST_PORT_MATCH,
                      1, 0, 3, DST_PORT_MATCH_STR, 0);
        return;
    }
}

5.3.2 Rules

The following is an example of a simple rule, SID 109, taken from the current rule set. It is implemented to work with the detection engine provided with snort.

The snort rule in normal format:

alert tcp $HOME_NET 12345:12346 -> $EXTERNAL_NET any \
(msg:"BACKDOOR netbus active"; flow:from_server,established; \
content:"NetBus"; reference:arachnids,401; classtype:misc-activity; \
sid:109; rev:5;)

This is the metadata for this rule library, defined in detection_lib_meta.h.

Declaration of the data structures:

• Flow option
Define the FlowFlags structure and its corresponding RuleOption. Per the text version, flow is from_server, established.

static FlowFlags sid109flow =
{
    FLOW_ESTABLISHED|FLOW_TO_CLIENT
};

static RuleOption sid109option1 =
{
    OPTION_TYPE_FLOWFLAGS,
    { &sid109flow }
};

• Content Option
Define the ContentInfo structure and its corresponding RuleOption. Per the text version, content is "NetBus", case sensitive, no depth or offset, and non-relative. Search on the normalized buffer by default. NOTE: This content will be used for the fast pattern matcher since it is the longest content option for this rule and no contents have a flag of CONTENT_FAST_PATTERN.

static ContentInfo sid109content =
{
    "NetBus",               /* pattern to search for */
    0,                      /* depth */
    0,                      /* offset */
    CONTENT_BUF_NORMALIZED, /* flags */
    NULL,                   /* holder for boyer/moore info */
    NULL,                   /* holder for byte representation of "NetBus" */
    0,                      /* holder for length of byte representation */
    0                       /* holder for increment length */
};

static RuleOption sid109option2 =
{
    OPTION_TYPE_CONTENT,
    { &sid109content }
};

• Rule and Meta Data
Define the references.

static RuleReference sid109ref_arachnids =
{
    "arachnids",  /* Type */
    "401"         /* value */
};

static RuleReference *sid109refs[] =
{
    &sid109ref_arachnids,
    NULL
};

The list of rule options. Rule options are evaluated in the order specified.

RuleOption *sid109options[] =
{
    &sid109option1,
    &sid109option2,
    NULL
};

The rule itself, with the protocol header, meta data (sid, classification, message, etc.).

Rule sid109 =
{
    /* protocol header, akin to => tcp any any -> any any */
    {
        IPPROTO_TCP,     /* proto */
        HOME_NET,        /* source IP */
        "12345:12346",   /* source port(s) */
        0,               /* Direction */
        EXTERNAL_NET,    /* destination IP */
        ANY_PORT,        /* destination port */
    },
    /* metadata */
    {
        3,               /* genid -- use 3 to distinguish a C rule */
        109,             /* sigid */
        5,               /* revision */
        "misc-activity", /* classification */
        0,               /* priority */
        "BACKDOOR netbus active", /* message */
        sid109refs       /* ptr to references */
    },
    sid109options,       /* ptr to rule options */
    NULL,                /* Use internal eval func */
    0,                   /* Holder, not yet initialized, used internally */
    0,                   /* Holder, option count, used internally */
    0,                   /* Holder, no alert, used internally */
    NULL                 /* Holder, rule data, used internally */
};

• The List of rules defined by this rules library
The NULL terminated list of rules. The InitializeDetection function iterates through each Rule in the list and initializes the content, pcre, and flowbits.

extern Rule sid109;
extern Rule sid637;

Rule *rules[] =
{
    &sid109,
    &sid637,
    NULL
};

Chapter 6 Snort Development

Currently, this chapter is here as a place holder. It will someday contain references on how to create new detection plugins and preprocessors. End users don't really need to be reading this section. This is intended to help developers get a basic understanding of what's going on quickly.

6.1 Submitting Patches

Patches to Snort should be sent to the snort-devel@lists.sourceforge.net mailing list. Patches should be done with the command diff -nu snort-orig snort-new. If you are going to be helping out with Snort development, please use the HEAD branch of cvs. We've had problems in the past of people submitting patches only to the stable branch (since they are likely writing this stuff for their own IDS purposes). Bugfixes are what goes into STABLE. Features go into HEAD.

6.2 Snort Data Flow

First, traffic is acquired from the network link via libpcap. Packets are passed through a series of decoder routines that first fill out the packet structure for link level protocols, then are further decoded for things like TCP and UDP ports.

Packets are then sent through the registered set of preprocessors. Each preprocessor checks to see if this packet is something it should look at.

Packets are then sent through the detection engine. The detection engine checks each packet against the various options listed in the Snort rules files. Each of the keyword options is a plugin. This allows this to be easily extensible.

6.2.1 Preprocessors

For example, a TCP analysis preprocessor could simply return if the packet does not have a TCP header. It can do this by checking:

if (p->tcph == null)
    return;

Similarly, there are a lot of packet flags available that can be used to mark a packet as "reassembled" or logged. Check out src/decode.h for the list of pkt_* constants.

6.2.2 Detection Plugins

Basically, look at an existing output plugin and copy it to a new item and change a few things. Later, we'll document what these few things are.

6.2.3 Output Plugins

Generally, new output plugins should go into the barnyard project rather than the Snort project. We are currently cleaning house on the available output options.
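The early-return check described in 6.2.1 can be exercised on its own. The Packet and TCPHeader types below are trivial stand-ins for the real decoder structures in decode.h, so this is only an illustrative sketch of the guard, not the actual Snort types.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the decoder structures in decode.h. */
typedef struct { int dummy; } TCPHeader;
typedef struct { const TCPHeader *tcph; } Packet;

/* A TCP-only preprocessor: bail out immediately when the packet has
 * no TCP header, as described in section 6.2.1.
 * Returns 1 if the packet was processed, 0 if it was skipped. */
int ExampleTcpPreproc(const Packet *p)
{
    if (p->tcph == NULL)
        return 0;  /* not for me */

    /* ... real TCP analysis would go here ... */
    return 1;
}
```

The same pattern applies to any protocol-specific preprocessor: test the decoded header pointer first, so non-matching traffic costs almost nothing.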
https://www.scribd.com/doc/44795465/Snort-Manual-2-8-5-1
CC-MAIN-2016-36
refinedweb
49,068
60.82
Mongoose OS community chat

Hi Everyone, I have been fighting an issue with LittleFS and the Winbond W25Nxxx device driver / VFS driver for some time now. Using an ESP32-WROOM-32U with a W25N 2 Gbit device (I have not tried the 1 Gbit) I am able to start Mongoose and create the file system. But if I open a file using fopen() and start appending data, after just a few hundred bytes LFS reports the device is full. Also, if I create a number of small files, after about 18 or so, it reports that the superblock is full. Does this ring any bells?

mos.yml

#include "mgos.h"
#include "mgos_cron.h"

static void cron_1min_cb(void *user_data, mgos_cron_id_t id) {
  time_t now = time(0);
  struct tm *tm = localtime(&now);
  char t[32];
  strftime(t, sizeof(t), "%T", tm);
  LOG(LL_INFO,
      ("Uptime: %.2lf, time: %s, free_heap_size: %lu, min_free_heap_size: %lu",
       mgos_uptime(), t, (unsigned long) mgos_get_free_heap_size(),
       (unsigned long) mgos_get_min_free_heap_size()));
  (void) user_data;
  (void) id;
}

enum mgos_app_init_result mgos_app_init(void) {
  /* Add a cron job at every 5 seconds of a minute. */
  mgos_cron_add("5 */1 * * * *", cron_1min_cb, NULL);
  return MGOS_APP_INIT_SUCCESS;
}
https://gitter.im/cesanta/mongoose-os?at=606d658fabf94b631dc0715b
Back to: ASP.NET Web API Tutorials For Beginners and Professionals

How to Enable SSL in Visual Studio Development Server

In this article, I am going to discuss How to Enable SSL in Visual Studio Development Server with an example. While developing any application, if you want to test the service using the HTTPS protocol, then you need to enable SSL in Visual Studio. Let us understand how to enable SSL in Visual Studio with an example.

Creating an Empty Application:

First, create an empty Web API application with the name WebAPIEnableHTTP. Once you create the project, then add the following model class (Employee.cs) within the Models folder.

namespace WebAPIEnableHTTPS.Models
{
    public class Employee
    {
        public int EmployeeID { get; set; }
        public string EmployeeName { get; set; }
    }
}

Once you add the Model, then you need to add a Web API 2 Controller – Empty within the Controllers folder, name it EmployeesController, and then copy and paste the following code in it.

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;
using WebAPIEnableHTTPS.Models;

namespace WebAPIEnableHTTPS.Controllers
{
    public class EmployeesController : ApiController
    {
        List<Employee> employees = new List<Employee>()
        {
            new Employee() { EmployeeID = 101, EmployeeName = "Anurag"},
            new Employee() { EmployeeID = 102, EmployeeName = "Priyanka"},
            new Employee() { EmployeeID = 103, EmployeeName = "Sambit"},
            new Employee() { EmployeeID = 104, EmployeeName = "Preety"},
        };

        public IEnumerable<Employee> Get()
        {
            return employees;
        }

        public Employee Get(int id)
        {
            return employees.FirstOrDefault(s => s.EmployeeID == id);
        }
    }
}

At the moment, when we navigate to the following URL, we get the output as expected (please change the port number to the one where your application is running). Let's change the protocol to HTTPS instead of HTTP and see what happens. We get the error page "This site can't provide a secure connection". This is because we have not enabled SSL for our Web API Service.
How to Enable SSL in Visual Studio Development Server?

To enable SSL in the Visual Studio development server, you need to follow the below steps. In the Solution Explorer, click on the WebAPIEnableHTTP Web API project and press the F4 key on the keyboard, which will open the Project Properties window. From the Project Properties window, we need to set the SSL Enabled property to true. As soon as you do this, Visual Studio sets the SSL URL as shown in the image below. As shown in the above image, once you set the SSL Enabled property to True, you now have two URLs: the SSL (HTTPS) URL and the plain HTTP URL.

At this point, build the solution and then navigate to the SSL URL in the browser, and you will see the following browser security page. Make sure you click on the "Advanced" link to see the "Proceed to localhost" link. Once you click on the above Advanced tab, it opens the following section within the same page. Once you click on the Proceed to localhost (unsafe) link, it will give you the response as shown in the image below. As shown in the above image, once you click on the Not Secure link, you will see that the certificate is invalid as shown below. The reason is that the certificate that Visual Studio installed automatically is not trusted.
To solve the above problem, what we need to do is place the certificate that Visual Studio has issued in the Trusted Root Certificates folder.

Generating a Trusted Certificate:

In order to use a trusted certificate, please follow the below steps.

Open the RUN window, then type mmc.exe and click on the OK button as shown below. When you click on the OK button, one window will open; click "File" => "Add/Remove Snap-in" from that window, and then from the "Available snap-ins" list select "Certificates" and click on the "Add" button as shown in the below image. Once you click on the Add button, it will open another screen; from there, select the "Computer account" radio button and then click on the Next button as shown below. When you click on the Next button, it will open another screen, and from that screen select the "Local computer" radio button and click on the "Finish" button as shown below. Once you click on the Finish button, it will take you back to the Add or Remove Snap-ins screen, and from there click on the OK button as shown in the below image.

Expand the Console Root => Certificates (Local Computer) => Personal => Certificates folder and you will find a certificate that is Issued To localhost and Issued By localhost as shown in the image below. Right-click on the localhost certificate, then select "All Tasks" and click on the "Export" option as shown in the image below. Once you click on the Export option, it will open the Welcome to the Certificate Export Wizard screen; from there, just click on the "Next" button. On the next screen, select the "No, do not export the private key" radio button and click on the "Next" button as shown below. Once you click on the Next button, it will open the Export File Format wizard; from that wizard, select the "DER encoded binary X.509 (.CER)" radio button, and click on the Next button as shown in the below image.
From the next screen, provide a meaningful name (in my case, I have given MyLocalhostCertificate) for the certificate that you are exporting, and then click on the "Next" button. Once you click on the Next button, it will open the following window; from there, just click on the Finish button. Please remember the path where your certificate is stored. In my case, it is C:\Windows\System32\MyLocalhostCertificate. Once you click on the Finish button, if everything is ok, then you will get the message "Export Successful".

How to Import the Newly Generated Certificate into the Trusted Root Certification Folder?

Expand Console Root – Certificates (Local Computer) – Trusted Root Certification Authorities – Certificates. Then right-click on "Certificates", select "All Tasks", and then select the "Import" option as shown below. Click on the "Next" button on the subsequent screen. In the next screen, you have to enter the complete path where you have exported the certificate and then click "Next" as shown below. In my case, the certificate is at C:\Windows\System32\MyLocalhostCertificate.cer. Once you click on the Next button, it will open another screen; from that screen, select the "Place all certificates in the following store" radio button and click on the "Next" button as shown below. Finally, click on the "Finish" button, which will give you a message that the import was successful.

So that's it. We have created and imported the certificate for localhost in the trusted certificate location. Now, first close all the instances of the browser. Open a new browser instance, navigate to the HTTPS URL, and you will not get any certificate error. At the moment, we can access the Web API service using both HTTP and HTTPS. In the next article, I am going to discuss how to automatically redirect an HTTP request to HTTPS when the request is made using HTTP to our Web API service.
Here, in this article, I try to explain how to enable SSL in the Visual Studio Development Server step by step with an example. I hope this article will help you with your need.

1 thought on "Enable SSL in Visual Studio Development Server"

Is there any way to specify the SSL URL? I can never remember the port number they provide, so I would like to set my own as 50505.
https://dotnettutorials.net/lesson/enable-ssl-in-visual-studio-development-server/
Chapter 4: Lock Down Security

IT professionals have the option of using Internet Explorer (IE) security settings to disable or restrict the applicability of the MSJVM. Locking down security rather than migrating to another solution allows the MSJVM to stay on some machines. This is a viable option for internal applications that are dependent on the MSJVM but are not intended for public use. This security policy can be deployed to an organization as a Group Policy with Active Directory. However, if your organization currently does not utilize Active Directory, it will be necessary to make the appropriate adjustments on each client desktop. This option reduces, but does not completely eliminate, security concerns with respect to the MSJVM. The optimal method for eliminating security concerns is to completely remove the MSJVM and dependencies, rather than limiting the execution of the MSJVM.

Locking Down Internet Explorer

In Internet Explorer, from the Tools menu, select Internet Options and then the Security Settings tab to disable the MSJVM. To turn off Java for the Internet zone, select the Custom Settings button. In the Microsoft VM section, under Java Permissions, select Disable Java. These settings are stored in the registry for the current user; they can be stored as a .reg file and propagated throughout an organization through a login script that loads the .reg file. On a machine with multiple user accounts, these changes will have to be made separately for each account. You should note that if you have installed Sun's Java plug-in for Windows on your system, this option also impacts the Sun JRE, which "impersonates" the MSJVM in IE. A second possibility for customers who continue using the MSJVM is to reduce security issues by limiting the use of Java to specific sites.
After turning off Java in the Internet zone, they can turn on Java for all trusted sites and then re-enable support for applets by adding the specific sites with the applets to their Trusted Sites list.

Pointing to the MSJVM

An option when locking down the availability of the MSJVM to only secure and trusted sites is to install a second JRE. A second JRE allows applications that support multiple JREs, and may allow applets and applications without MSJVM-specific dependencies, to continue running. Test all JREs sufficiently during implementation to ensure the applications work properly with the newly installed JRE. There are additional considerations if you are planning to install a second JRE on some machines that are locked down. For example, some JRE vendors provide a configuration item that allows you to specify one JRE to run <OBJECT> tags and another JRE to run <APPLET> tags. Refer to the documentation associated with your JRE to determine how to configure tag handling. The IE Security Zone settings specified apply to all <APPLET> tags. You can edit the Web files to use the <OBJECT> tag instead of the <APPLET> tag. The <OBJECT> tag is a more generic tag for embedded objects, including applets. The structures of the two tags are similar:

<APPLET code=Applet1.class height=200 width=320 name=Applet1 VIEWASTEXT>
  <PARAM NAME="foreground" Value="FFFFFF">
  <PARAM NAME="background" Value="008080">
  <PARAM NAME="label" Value="This string was passed from the HTML host.">
  No applet tag handlers installed.
</APPLET>

In the <OBJECT> tag, some attributes have become parameters (to avoid namespace conflicts), and there are additional parameters because the <OBJECT> tag is more generic:

<OBJECT CLASSID="clsid:08B0E5C0-4FCB-11CF-AAA5-00401C608501" height=200 width=320 VIEWASTEXT>
  <PARAM NAME="code" Value="Applet1.class">
  <PARAM NAME="type" Value="application/x-java-applet">
  <PARAM NAME="scriptable" Value="true">
  <PARAM NAME="foreground" Value="FFFFFF">
  <PARAM NAME="background" Value="008080">
  <PARAM NAME="label" Value="This string was passed from the HTML host.">
  No Microsoft VM installed
</OBJECT>

The classid attribute is a URI that can indicate inline data. For an ActiveX control (and a JRE is considered an ActiveX control), the URI begins with "clsid" and is followed by the clsid value in the registry. The value 08B0E5C0-4FCB-11CF-AAA5-00401C608501 indicates the MSJVM; different versions of third-party JREs may use different numbers. Some Java plug-ins install on a Windows system by "impersonating" the MSJVM, using the same registry entry and class ID on installation. Be cautious when installing JREs that impersonate the MSJVM, as you may need to configure the CLSID or the individual JREs to make sure the desired one loads.

If you are determined to point to a particular JRE in some circumstances, you may run into a second problem. As of this writing, at least one JRE installs a subkey under the key that identifies the MSJVM. The following key has as its default value the CLSID of the new JRE:

HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{08B0E5C0-4FCB-11CF-AAA5-00401C608501}\TreatAs

Attempting to explicitly load the MSJVM through the <OBJECT> tag when this key exists will actually load the new JRE. Deleting the subkey will solve this problem.
https://technet.microsoft.com/en-us/library/bb463181.aspx
04 November 2009 01:59 [Source: ICIS news]

SHANGHAI (ICIS news)--A new petrochemical complex by a joint venture of Saudi Basic Industries Corp (SABIC) and state-owned Sinopec in China will be ready to begin production by the first quarter of next year, the companies said in a statement late on Tuesday.

Built at a total cost of CNY18.3bn ($2.7bn), the complex is located in Tianjin, in northern China.

"We have developed great synergy between the two companies based on our shared goal of providing high quality petrochemical products to the domestic Chinese market. This project will benefit us both greatly," said SABIC's Chairman, Prince Saud Bin Thenayan Al Saud.

The two companies formed SINOPEC SABIC Tianjin Petrochemical Co., Ltd, last year as a 50/50 joint venture to build and operate the new complex.

($1 = CNY6.83)
http://www.icis.com/Articles/2009/11/04/9260642/sinopec-sabic-tianjin-jv-petchem-complex-to-start-in-q1-10.html