Unable to return a tuple when mocking a function

I'm trying to get comfortable with mocking in Python and I'm stumbling while trying to mock the following function.

helpers.py:

    from path import Path

    def sanitize_line_ending(filename):
        """
        Converts the line endings of the file to the line endings
        of the current system.
        """
        input_path = Path(filename)
        with input_path.in_place() as (reader, writer):
            for line in reader:
                writer.write(line)

test_helpers.py:

    @mock.patch('downloader.helpers.Path')
    def test_sanitize_line_endings(self, mock_path):
        mock_path.in_place.return_value = (1, 2)
        helpers.sanitize_line_ending('varun.txt')

However, I constantly get the following error:

    ValueError: need more than 0 values to unpack

Given that I've set the return value to be a tuple, I don't understand why Python is unable to unpack it. I then changed test_sanitize_line_endings to print the return value of input_path.in_place(), and I can see that the return value is a MagicMock object. Specifically, it prints something like:

    <MagicMock name='Path().in_place()' id='13023525345'>

If I understand things correctly, what I want is for mock_path to be a MagicMock which has an in_place function that returns a tuple. What am I doing wrong, and how can I go about correctly replacing the return value of input_path.in_place() in sanitize_line_ending?

After much head scratching and attending a meetup, I finally came across a blog post that solved my issue. The crux of the issue is that I was not mocking the correct value.
Since I want to replace the result of a method call on the Path instance (not on the Path class itself), the code I needed to have written was:

    @mock.patch('downloader.helpers.Path')
    def test_sanitize_line_endings(self, mock_path):
        mock_path.return_value.in_place.return_value = (1, 2)
        helpers.sanitize_line_ending('varun.txt')

This correctly results in the function being able to unpack the tuple. It then immediately fails, since, as @didi2002 mentioned, a tuple isn't a context manager. However, I was focused on getting the unpacking to work, and once I achieved that, I replaced the tuple with a construct that has the appropriate methods.

A related pitfall: I struggled with this error (ValueError: need more than 0 values to unpack) for several hours, but the problem was not in the way I set the mock up (the correct way was described by @varun-madiath above). It was in the parameter order when stacking @mock.patch() decorators:

    @mock.patch('pika.BlockingConnection')
    @mock.patch('os.path.isfile')
    @mock.patch('subprocess.Popen')
    def test_foo(self, mock_popen, mock_isfile, mocked_connection):
        ...

The order of the parameters must be the reverse of the decorator order, because the decorator closest to the function is applied first. See the Python docs (26.4. unittest.mock - mock object library): Mock and MagicMock objects create all attributes and methods as you access them, and attributes or methods that don't exist on the spec fail with an AttributeError.
To be valid, the return value of input_path.in_place() must be an object that has an __enter__ method that returns a tuple. This is a (very verbose) example:

    def test():
        context = MagicMock()
        context.__enter__.return_value = (1, 2)
        func = MagicMock()
        func.in_place.return_value = context
        path_mock = MagicMock()
        path_mock.return_value = func
        with patch("path.Path", path_mock):
            sanitize_line_ending("test.txt")

Remember that the default return value of calling a Mock is another Mock instance, which is why printing input_path.in_place() showed a MagicMock object rather than the tuple. Another suggestion from the thread for returning a tuple from a mocked function:

    ret = (1, 2)
    type(mock_path).return_value = PropertyMock(return_value=ret)
https://thetopsites.net/article/53665653.shtml
CC-MAIN-2021-25
refinedweb
1,007
56.86
On Dec 17, 2005 at 04:24:03PM +0100, Bram Moolenaar wrote:

Hi Bram,

> > ,------------------------------------------------------
> > |: Yeah, that patch catches it.

There seems to be a bad (probably good anyway?) thing about it:

    tmpstring = _no.$(match)_title
    @try:
    @    title = eval(_no.tmpstring)
    @except:
    @    title = ""
    :print $title

That does not work anymore, because the error gets caught. I think there is another way to check if a variable exists:

    @if not hasattr(globals(), 'tmpstring'):
    @    print "variable is not defined"

That would work for checking globals, but I think I would need to check the _no namespace? On #python they told me something about dir(obj), but I can't get it to work. How could I check if a variable is 'defined' without the @try:/@except:, since that doesn't seem to work anymore with your patch?

Thanks a lot
marco

PS: I have all that because I wanted to store 'some' variables for some titles of some pages at the top of the aap recipe, and later 'try' to get that value and replace @TITLE@ with it, if it exists. If it exists... that's the problem :)

PSII: I'm not even qualified to think about whether it's OK to catch all 'internal errors'. But anyway (my thoughts): normally 'internal error' just sounds 'bad', but it also seems to disable that specific try construct. But who (other than me) would need a try/except construct? And when it's a user failure to write wrong Python code with undefined variables, it's maybe OK to get a normal Python error output, as you said it is. Called internal or not. It's not aap's fault then.
-- calmar (o_ It rocks: LINUX + Command-Line-Interface //\ V_/_

Calmar wrote:
>:

    Index: Process.py
    ===================================================================
    RCS file: /cvsroot/a-a-p/Exec/Process.py,v
    retrieving revision 1.71
    diff -u -r1.71 Process.py
    --- Process.py    29 Nov 2005 10:19:46 -0000    1.71
    +++ Process.py    17 Dec 2005 15:06:38 -0000
    @@ -556,8 +556,11 @@
             while tb.tb_next:
                 tb = tb.tb_next
             fname = tb.tb_frame.f_code.co_filename
    -        if py_line_nr == -2 or (fname and not fname == "<string>"):
    +        if py_line_nr == -2 or (fname and not fname == "<string>"
    +                and string.find(fname, "Scope.py") < 0):
                 # If there is a filename, it's not an error in the script.
    +            # Except when the filename is Scope.py, then it's probably a
    +            # variable that could not be found.
                 from Main import error_msg
                 error_msg(recdict, _("Internal Error"))
                 # traceback.print_exc()

--
ROBIN: (warily) And if you get a question wrong?
ARTHUR: You are cast into the Gorge of Eternal Peril.
ROBIN: Oh ... wacho!
        "Monty Python and the Holy Grail" PYTHON (MONTY) PICTURES LTD

/// Bram Moolenaar -- Bram@... -- \\\
/// sponsor Vim, vote for features -- \\\
\\\ download, build and distribute -- ///
\\\ help me help AIDS victims -- ///

Hi all,

When I provide a

    test_title = whateever

variable, then there is no error. Anyway, I get the following error message:

    ,----------------------------------------------------------------------------
    | _no.test_title
    | Aap: Internal Error
    | Aap: Traceback (most recent call last):
    |   File "/usr/local/lib/aap/Exec-1.080/Process.py", line 1163, in Process
    |     exec script_string in recdict, recdict
    |   File "<string>", line 10, in ?
    |   File "<string>", line 0, in ?
    |   File "/usr/local/lib/aap/Exec-1.080/Scope.py", line 103, in __getattr__
    |     return self.data[key]
    | KeyError: 'test_title'
    `----------------------------------------------------------------------------

I just wanted to let you know, just in case.
Cheers and thanks
marco
-- calmar (o_ It rocks: LINUX + Command-Line-Interface //\ V_/_
https://sourceforge.net/p/a-a-p/mailman/a-a-p-develop/?viewmonth=200512&viewday=17
CC-MAIN-2017-09
refinedweb
612
75.5
Hi guys, first post here. I decided to come here because it looks really friendly. I've looked around on the internet, but I don't really understand how to do this. Take note that I am just beginning in C++ (a few days of practice), so I decided to try to make a little test program that can help me improve. I only really know basic functions at the moment, though.

The program works out the area of a rectangle (yeah, easy... but I'm terrible), and now I'm trying to add on to that and make it able to work out the volume of a triangular prism too.

What I want: I want to add an option at the start of the program to ask the user whether they want to work out the volume or the area. Then it would run that part of the code. I really don't know how this would work, so I tried having a go, but I'm really stumped. I thought that I could somehow use a variable as a letter and wait for that to be pressed, but I really don't have a clue. I also thought that when they press the letter, I would turn the boolean selection variable (whichever one they picked) to true, which would make it run that part of the program. Again, I really have no idea how this works. :/

So, how would you make the program wait for a specific button press? ("v" to work out the volume, "a" to work out the area)

My code so far (it's most likely messy, and I haven't started on the volume yet). The program doesn't wait for the choice because I haven't put anything in between there, so it just goes straight on to the area. What I had in my head was: wait for a key press; if 'a' is pressed, then run the area code, etc. But I don't know.

    #include <iostream>
    using namespace std;

    int main()
    {
        int length, width, area;                          // variables
        char volumeselectionletter, areaselectionletter;  // variables
        bool areaselection, volumeselection;              // variables

        areaselection = false;
        volumeselection = false;
        areaselectionletter = 'a';
        volumeselectionletter = 'v';

        cout << "What would you like to do? \n \n";
        cout << "Press A to work out the area of a rectangle, \nor V to work out the volume of a triangular prism. \n";
        // choice goes here (doesn't wait, haven't got round to it yet)

        cout << "Enter the length of the rectangle: ";
        cin >> length;
        cout << "Enter the width of the rectangle: ";
        cin >> width;

        area = length * width;
        cout << "\n \n";
        cout << length << " x " << width << " = " << area;
        cout << "\nThe area of the rectangle is: " << area << "\n \n";

        cout << "Press the enter key to exit. \n";
        cin.get();
        cin.get();
        return 0;
    }

It would be REALLY helpful if someone could walk me through it, but any help is appreciated. Thanks.
https://www.daniweb.com/programming/software-development/threads/348181/how-to-wait-for-a-specific-button-press-v
CC-MAIN-2017-09
refinedweb
465
78.18
Python 3 can revive Python

I read this post by a certain Stephen A. Goss about how "Python 3 is killing Python". It has some compelling arguments, and while I don't necessarily agree that Python 3 is indeed killing Python, the whole situation doesn't do Python much good either. But maybe every crisis is an opportunity, as the cheesy motto says. Maybe Python 3 can help revive Python.

You see, it's not just the Python 2 to Python 3 migration that's troubling. We're not in 2005 anymore, and newer programmers are not that impressed with either version of Python. Sure, there are lots of Python jobs, but then again, there are even more Java jobs. And once upon a time there were many Perl jobs - I hear they're not doing that well nowadays. I'm not talking about job count or GitHub repos. I'm talking about mindshare and enthusiasm, and I know that these are a little subjective, but I feel like Python has been lacking in these two regards as of late.

For example, we see people going from Python to Go. Again, there aren't many of them, but they are quite vocal (including whole startup dev teams blogging about switching their codebase), and enough to create a certain buzz (and to surprise Rob Pike, who initially expected people to come to Go from C/C++). Python faces competition from all sides. There's competition that eats Python's lunch in certain areas (e.g. new async projects seem to prefer Node or Go instead of Twisted, and Rails still dominates the web framework landscape), competition from wannabe contenders in specific niches (like Julia for scientific computing), and general competition (Clojure, Groovy, Javascript, Dart, etc).

So here's my idea about Python 3. It's a simple idea: Make it compelling. It's already incompatible with Python 2. And it's not like people are migrating to it in droves, so adding a few more incompatible changes might not only not hurt, but even benefit the language.
Seriously - if Python 3 had enough enticing new features, more people would migrate to it (at least for new projects), and more people would be inclined to port their Python 2 libs/projects. And, methinks, it would also attract more people that aren't currently using Python.

You see, Python 3 was a ho-hum update. Sure, it made the language more coherent and fixed some long-standing issues and annoyances. But it didn't provide that much sizzle. Then again, the language landscape back when Python 3 was conceived and its roadmap was set was far more relaxed. Heck, Javascript wasn't popular back then. YouTube didn't exist. It was THAT far back. Nowadays stuff like proper closures, immutability, a good async story, etc., is considered a necessity by discerning hackers.

Without further TL;DR, here are a few things that might make Python 3 an interesting proposition for today's hacker types. I, for one, know they would rekindle MY interest:

- Remove the GIL. Or provide a compelling async story. Guido's PEP 3156 might, or might not, be that. Some primitives like Go's channels would be nice to have.

- Make it speedy. Seriously, if Javascript can be made fast, CPython could be made fast too. Or PyPy could mature enough to replace it (there should be only one). If it takes big bucks or Lars Bak to do it, start a Kickstarter - I'll contribute. Shame major companies into contributing too. Isn't Dropbox spending some money on its own LLVM-based Python anyway?

- Add types. Well, opt-in types. That you can use to speed some code up (a la Cython), or to provide assurances and help type-check (a la Dart). Add type annotations to everything in the standard library.

- Shake up the standard libraries. Get a team together to go through them and fix long-standing annoyances, improve speed and fix bugs. Improve their API, and offer nice, simpler interfaces for common stuff (think requests vs urllib). Offer the new improved library alongside the current standard library in a different namespace.
Make it easy to switch over (perhaps with some automated tool).

- Revamp the REPL. It's 2014 already. Redo the REPL in a modern way. Add some color. Take notes from IPython. Make it a client/server thing, so IDEs and editors can embed it.

So, Python 3 dev people, take your time. Well, not too much time. Perhaps 3-4 years. We've waited 10 years for ES6; we can wait half of that for you. It's not like anyone is using Python 3 anyway, so take some chances. Break things. Release early and often. Engage the community.

You see, Python 3 is not really killing Python. But it might have a chance to save it from what's actually killing it.

P.S. What do you think? Can you think of additional stuff that could make Python 3 more enticing? What new feature would tickle your fancy?
https://medium.com/@opinionbreaker/python-3-can-revive-python-2a7af4788b10
CC-MAIN-2019-18
refinedweb
841
76.01
CS::Math::Noise::Model::Plane Class Reference [Models]

Model that defines the surface of a plane.

#include <cstool/noise/model/plane.h>

Detailed Description

Model that defines the surface of a plane. This model is useful for creating:
- two-dimensional textures
- terrain height maps for local areas

This plane extends infinitely in both directions.

Definition at line 53 of file plane.h.

Constructor & Destructor Documentation

Constructor.

Constructor.
  Parameters:

Member Function Documentation

Returns the noise module that is used to generate the output values.
  Returns: A reference to the noise module.
  Precondition: A noise module was passed to the SetModule() method.
  Definition at line 73 of file plane.h.

Returns the output value from the noise module given the (x, z) coordinates of the specified input value located on the surface of the plane.
  Parameters:
  Returns: The output value from the noise module.
  Precondition: A noise module was passed to the SetModule() method.
  This output value is generated by the noise module passed to the SetModule() method.

The documentation for this class was generated from the following file:

Generated for Crystal Space 2.1 by doxygen 1.6.1
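To illustrate the pattern this class documents (mapping 2D (x, z) input onto a 3D noise module, per the SetModule()/GetValue() interface above), here is a hedged, self-contained sketch. SineModule and PlaneModel are stand-ins invented for this example; the real classes live in Crystal Space's cstool/noise headers and take a Crystal Space noise module.

```cpp
#include <cmath>

// Stand-in for a noise module (hypothetical; not the Crystal Space class).
struct SineModule {
    double GetValue(double x, double y, double z) const {
        (void)y;  // a plane model samples the module at a fixed y
        return std::sin(x) * std::cos(z);
    }
};

// Stand-in mirroring the documented Plane interface: SetModule() supplies the
// module, GetValue(x, z) samples it on the surface of the plane.
class PlaneModel {
public:
    explicit PlaneModel(const SineModule& m) : module(&m) {}
    void SetModule(const SineModule& m) { module = &m; }
    double GetValue(double x, double z) const {
        return module->GetValue(x, 0.0, z);  // sample on the plane y = 0
    }
private:
    const SineModule* module;
};
```

Looping GetValue over a grid of (x, z) values yields exactly the use cases the class description mentions: a two-dimensional texture or a terrain height map for a local area.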
http://www.crystalspace3d.org/docs/online/api/classCS_1_1Math_1_1Noise_1_1Model_1_1Plane.html
crawl-003
refinedweb
183
51.55
Finally, we run benchmarks for these operations and discuss the actual speed-up, which ranges from 1% to 70%.

A Class with Move Semantics

Let me introduce the class CTeam, which will accompany us through our experiments with move semantics. The class represents a football team with its name, points and goal difference. The vector m_statistics of 100 double numbers is only there to make CTeam objects more heavy-weight during copying.

    // cteam.h
    #ifndef CTEAM_H
    #define CTEAM_H

    #include <string>
    #include <vector>

    class CTeam
    {
    public:
        CTeam();
        CTeam(const std::string &n, int p, int gd);
        CTeam(const CTeam &t);
        CTeam &operator=(const CTeam &t);
        CTeam(CTeam &&t);             // C++11 only
        CTeam &operator=(CTeam &&t);  // C++11 only
        ~CTeam();

        const std::string &name() const;
        int points() const;
        int goalDifference() const;

        static int copies;

    private:
        std::string m_name;
        int m_points;
        int m_goalDifference;
        static const int statisticsSize = 100;
        std::vector<double> m_statistics;
    };

    #endif // CTEAM_H

I implemented all the constructors and assignment operators, because they print their name to the console and because the copy variants increment the counter copies. We must only comment out the move constructor and assignment to turn the C++11 code into C++98 code. The implementation comes with no surprises. The print statements are commented out, because they would completely distort the performance results. We should uncomment them for the detailed analysis. Here is the part that is the same for C++98 and C++11.
    // cteam.cpp (same for C++98 and C++11)
    CTeam::~CTeam()
    {
        // cout << "dtor" << endl;
    }

    CTeam::CTeam()
        : m_name("")
        , m_points(0)
        , m_goalDifference(0)
    {
        // cout << "default ctor" << endl;
    }

    CTeam::CTeam(const string &n, int p, int gd)
        : m_name(n)
        , m_points(p)
        , m_goalDifference(gd)
    {
        // cout << "name ctor" << endl;
        m_statistics.reserve(statisticsSize);
        srand(p);
        for (int i = 0; i < statisticsSize; ++i) {
            m_statistics.push_back(static_cast<double>(rand() % 10000) / 100.0);
        }
    }

    CTeam::CTeam(const CTeam &t)
        : m_name(t.m_name)
        , m_points(t.m_points)
        , m_goalDifference(t.m_goalDifference)
        , m_statistics(t.m_statistics)
    {
        // cout << "copy ctor" << endl;
        ++CTeam::copies;
    }

    CTeam &CTeam::operator=(const CTeam &t)
    {
        // cout << "copy assign" << endl;
        ++CTeam::copies;
        if (this != &t) {
            m_name = t.m_name;
            m_points = t.m_points;
            m_goalDifference = t.m_goalDifference;
            m_statistics = t.m_statistics;
        }
        return *this;
    }

The move constructor and move assignment are specific to C++11. Again, there is nothing special about them.

    // cteam.cpp (specific to C++11)
    CTeam::CTeam(CTeam &&t)
        : m_name(move(t.m_name))
        , m_points(move(t.m_points))
        , m_goalDifference(move(t.m_goalDifference))
        , m_statistics(move(t.m_statistics))
    {
        // cout << "move ctor" << endl;
    }

    CTeam &CTeam::operator=(CTeam &&t)
    {
        // cout << "move assign" << endl;
        m_name = move(t.m_name);
        m_points = move(t.m_points);
        m_goalDifference = move(t.m_goalDifference);
        m_statistics = move(t.m_statistics);
        return *this;
    }

Note that we use the move function when assigning to the member variables. This tells the compiler to move the members of the argument t into the members of this object. Moving of plain old data types is the same as copying. Hence, the integers m_points and m_goalDifference are copied. The STL data types std::string and std::vector support move semantics. Hence, the members m_name and m_statistics are eligible for moving.
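Before diving into the detailed traces, the core effect can be observed with a small self-contained sketch. MiniTeam is my own slimmed-down stand-in for CTeam (just a name plus the static copy counter), enough to see which constructor std::vector::push_back selects for an rvalue, an lvalue, and a std::move'd lvalue:

```cpp
#include <string>
#include <utility>
#include <vector>

// Slimmed-down stand-in for the article's CTeam (hypothetical, simplified):
// only the copy constructor bumps the counter, so `copies` reveals whether
// push_back copied or moved.
struct MiniTeam {
    static int copies;
    std::string name;
    explicit MiniTeam(std::string n) : name(std::move(n)) {}
    MiniTeam(const MiniTeam& t) : name(t.name) { ++copies; }
    MiniTeam(MiniTeam&& t) noexcept : name(std::move(t.name)) {}
};
int MiniTeam::copies = 0;

// Reproduces the three push_back flavours discussed in the article.
int countCopies() {
    MiniTeam::copies = 0;
    std::vector<MiniTeam> table;
    table.reserve(4);                          // avoid reallocation noise
    table.push_back(MiniTeam("Arsenal"));      // rvalue         -> move ctor
    MiniTeam manUtd("Man United");
    table.push_back(manUtd);                   // lvalue         -> copy ctor
    MiniTeam liverpool("Liverpool");
    table.push_back(std::move(liverpool));     // cast to rvalue -> move ctor
    return MiniTeam::copies;                   // only the lvalue was copied
}
```

Running countCopies() yields 1, matching the article's finding that only the plain-lvalue push_back pays for a copy in C++11.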
Detailed Analysis of Copying versus Moving

Pushing an Element to the Back of a Vector

The function testPushBack() appends a CTeam object in three slightly different ways to the vector table.

    void Thing::testPushBack()
    {
        vector<CTeam> table;
        table.reserve(4);

        cout << "@ Adding Arsenal" << endl;
        table.push_back(CTeam("Arsenal", 68, 25));   // [1]
        QCOMPARE(CTeam::copies, 0);

        cout << "@ Adding Man United" << endl;
        CTeam manUtd("Man United", 63, 24);
        table.push_back(manUtd);                     // [2]
        QCOMPARE(CTeam::copies, 1);
        QVERIFY(!manUtd.name().empty());

        cout << "@ Adding Liverpool" << endl;
        CTeam liverpool("Liverpool", 54, 32);
        table.push_back(move(liverpool));            // [3]
        QCOMPARE(CTeam::copies, 1);
        QVERIFY(liverpool.name().empty());

        cout << "@ End of test" << endl;
    }

Let us first look at what the C++98 compiler does for line [1].

    table.push_back(CTeam("Arsenal", 68, 25));  // [1]

The debug trace looks as follows.

    @ Adding Arsenal   // [1], C++98
    name ctor
    copy ctor
    dtor

The C++98 compiler creates a temporary CTeam object with the name constructor, copies this object into the table container with the copy constructor, and destroys the temporary object again.

The C++11 compiler recognises that the argument of push_back in [1] is an rvalue, that is, a temporary object that doesn't have a name and to which the address-of operator cannot be applied. When the C++11 compiler sees an rvalue, it calls the overload push_back(T &&value) taking an rvalue reference. This overload creates the CTeam object and moves it into the table container, as the following debug trace proves. The C++11 compiler chooses the move constructor instead of the copy constructor.

    @ Adding Arsenal   // [1], C++11
    name ctor
    move ctor
    dtor

We don't have any speed-up for the two data members m_points and m_goalDifference, because - as built-in data types - they are simply copied. The other two data members - m_name and m_statistics - are of types std::string and std::vector, respectively, which support move semantics.
Both hold pointers to their contents. The move constructor copies the pointers, which is cheaper than copying the contents. So, the C++11 version should be slightly faster than the C++98 version. Finally, the destructor is called on the emptied temporary object. After moving, the temporary object contains an empty string m_name and an empty vector m_statistics.

Let us move on to line [2].

    CTeam manUtd("Man United", 63, 24);
    table.push_back(manUtd);  // [2]

The debug trace is the same for C++98 and C++11: one call to the name constructor and one to the copy constructor.

    @ Adding Man United   // [2]
    name ctor
    copy ctor

This time we add the local object manUtd to the table. This object is not temporary, as it lives on after the call to push_back. Obviously, it has a name, manUtd, and the expression &manUtd is valid. Hence, manUtd is an lvalue and not an rvalue. Move semantics and its optimisation do not apply to lvalues. The behaviour and performance for adding an lvalue to a container are the same for C++98 and C++11.

Line [3]

    CTeam liverpool("Liverpool", 54, 32);
    table.push_back(move(liverpool));  // [3]

shows that there is a way to force the C++11 compiler to apply move semantics. We can cast the lvalue liverpool to an rvalue by calling std::move on it. Line [3] then behaves the same as line [1], as the debug trace shows.

    @ Adding Liverpool   // [3]
    name ctor
    move ctor
    @ End of test

It is important to understand that the local variable liverpool doesn't contain anything any more. It is empty, because it has passed its contents to the container table. Hence, the expression liverpool.name().empty() is true. The local variable liverpool has no contents any more, but it is still valid (in the sense that using it doesn't crash).

In summary, we see that the C++98 version of testPushBack performs 3 copies, whereas the C++11 version performs only 1 copy (line [2]).

Emplacing an Element to the Back of a Vector

Let us have another close look at line [1] of the push_back example.
    table.push_back(CTeam("Arsenal", 68, 25));  // [1]

This line creates a temporary CTeam object, which is half-moved and half-copied into the table container and then destroyed. The moving is still inefficient. It would be much more efficient to construct this temporary object in place at the location provided by the table container. This is exactly what return-value optimisation (RVO) would do for a return value (even for many C++98 compilers). And this is exactly what the C++11 compiler can do using std::vector::emplace_back() and move semantics. If we rewrite the line above as

    table.emplace_back("Arsenal", 68, 25);  // [1]

the debug trace

    @ Adding Arsenal   // [1], C++11
    name ctor

shows that only the name constructor is called. Neither the move constructor nor the destructor is called. This is the optimal solution!

Note that the versions

    table.emplace_back(CTeam{"Arsenal", 68, 25});
    table.emplace_back(CTeam("Arsenal", 68, 25));

are less efficient, because they call the name constructor, the move constructor and the destructor: emplace_back degenerates into push_back. (The variant table.emplace_back({"Arsenal", 68, 25}); does not even compile, because emplace_back cannot deduce a type for a plain braced initialiser list.)

Replacing push_back by emplace_back in lines [2] and [3] of the push_back example

    CTeam manUtd("Man United", 63, 24);
    table.emplace_back(manUtd);            // [2] name ctor, copy ctor

    CTeam liverpool("Liverpool", 54, 32);
    table.emplace_back(move(liverpool));   // [3] name ctor, move ctor

doesn't make any difference.

Initialising Vectors

As enthusiastic C++11 developers, we may be tempted to initialise a vector of teams using C++11's new initialiser lists.

    vector<CTeam> table = {
        {"Leicester", 80, 32},
        {"Spurs", 70, 38},
        {"Arsenal", 68, 25},
        {"Man City", 65, 30}
    };
    QCOMPARE(CTeam::copies, 4);

This solution triggers four name-constructor calls for the four teams, followed by four copy-constructor calls to place the four teams in the table, finished by four destructor calls destroying the four temporary objects.
This is equivalent to default-constructing an empty table first and calling push_back for each team. OK, we know how to improve that: we use emplace_back instead of push_back.

    vector<CTeam> table;
    table.emplace_back("Leicester", 80, 32);
    table.emplace_back("Spurs", 70, 38);
    table.emplace_back("Arsenal", 68, 25);
    table.emplace_back("Man City", 65, 30);
    QCOMPARE(CTeam::copies, 3);

Well, this saves us one copy, but we expected no copies at all. The reason is that vectors expand and shrink their storage capacity as needed. At the beginning, table has the capacity to store one element. So, team "Leicester" can be name-constructed into the container. When we add the second team, the "Spurs", the capacity is too small and is doubled. Team "Leicester" is copied into the new storage and the "Spurs" are name-constructed into the new storage. When we add the third team, "Arsenal", the capacity is doubled to 4. The first two teams are copied into the new storage and "Arsenal" is name-constructed into the new storage. Now the capacity is sufficient and we can simply name-construct "Man City" into the existing storage. This makes three copies in total.

The cure is obvious. We reserve four elements in the container beforehand and emplace the teams into the existing slots.

    vector<CTeam> table;
    table.reserve(4);
    table.emplace_back("Leicester", 80, 32);
    table.emplace_back("Spurs", 70, 38);
    table.emplace_back("Arsenal", 68, 25);
    table.emplace_back("Man City", 65, 30);
    QCOMPARE(CTeam::copies, 0);

Finally, there are zero copies. Of course, reserving enough storage capacity only works if we know the approximate number of elements beforehand. If space is at a premium, we may have to use a different data structure like std::unordered_map, std::map or even std::list.

Shuffling and Sorting

Shuffling and sorting a vector of teams should be where move semantics shines brightest.
    void Thing::testShuffleAndSort()
    {
        vector<CTeam> table;
        table.reserve(20);
        table.emplace_back("Leicester", 81, 32);
        table.emplace_back("Arsenal", 71, 29);
        // Adding 18 more teams ...

        random_shuffle(table.begin(), table.end());     // [1]
        sort(table.begin(), table.end(), greaterThan);  // [2]
        QCOMPARE(CTeam::copies, 0);
    }

We first shuffle the 20 teams in table (line [1]) and then sort them again (line [2]). When the shuffle and sort algorithms swap elements, they can move these elements instead of copying them. The function testShuffleAndSort() doesn't need a single copy. If we replace the emplace_back calls by

    table.push_back(CTeam("Leicester", 81, 32));
    table.push_back(CTeam("Arsenal", 71, 29));

we get perfectly valid C++98 code. If we run the function testShuffleAndSort() in C++98, we'll see 130 or even 160 copies. The number of copies depends on how badly the container got shuffled. Shuffling, sorting and moving around elements in containers should be significantly faster in C++11 than in C++98. This is where move semantics really shines.

The Benchmarks

The first benchmark calls the testShuffleAndSort function 5000 times. We use push_back instead of emplace_back for the C++11 version as well, because we can then use identical code for both C++98 and C++11. Note that the C++11 version still works with zero copies.

    // thing.cpp
    namespace {
    bool greaterThan(const CTeam &t1, const CTeam &t2)
    {
        return t1.points() > t2.points() ||
               (t1.points() == t2.points() &&
                t1.goalDifference() > t2.goalDifference());
    }
    }

    void Thing::benchmarkShuffleAndSort()
    {
        vector<CTeam> table;
        table.reserve(20);
        table.push_back(CTeam("Leicester", 81, 32));
        table.push_back(CTeam("Arsenal", 71, 29));
        // Adding 18 more teams ...

        QBENCHMARK {
            for (int i = 0; i < 5000; ++i) {
                random_shuffle(table.begin(), table.end());
                sort(table.begin(), table.end(), greaterThan);
            }
        }
    }

The second benchmark adds 20 teams to the table container using push_back and clears the container. It repeats these steps 5000 times.
void Thing::benchmarkPushBack() { vector<CTeam> table; table.reserve(20); QBENCHMARK { for (int i = 0; i < 5000; ++i) { table.push_back(CTeam("Leicester", 81, 32)); table.push_back(CTeam("Arsenal", 71, 29)); // Adding 18 more teams ... table.clear(); } } } The third benchmark is the same as the second, but it uses emplace_back instead of push_back. Obviously, we cannot run this test with C++98. void Thing::benchmarkEmplaceBack() { vector<CTeam> table; table.reserve(20); QBENCHMARK { for (int i = 0; i < 5000; ++i) { table.emplace_back("Leicester", 81, 32); table.emplace_back("Arsenal", 71, 29); // Adding 18 more teams ... table.clear(); } } } I ran the first and second benchmark with C++98. The results are shown in the column "C++98" in the table below. I ran all three benchmarks with C++11 in two variants. I removed the move constructor and assignment from the class CTeam for the first variant denoted "C++11/Copy". The second variant denoted "C++11/Move" comes with the full glory of move semantics. QtTest has the nifty feature that we can run benchmarks in callgrind by passing the option -callgrind to the test executable. Callgrind counts the number of read instructions the CPU executes for the benchmark. Here is the table with the results. As we want to compare the different implementations of the same problem, we are more interested in relative performance than in absolute numbers of read instructions. We define the implementation "EmplaceBack - C++11/Move" as the reference point with a value of 1.00. The values for the other four implementations of the PushBack and EmplaceBack benchmarks show how much slower the other implementation is than the reference (best) implementation. The implementation "ShuffleAndSort - C++11/Move" is the reference point for the other two ShuffleAndSort implementations. Move semantics speeds up the ShuffleAndSort benchmark by a factor of nearly 1.7. 
This is quite impressive but not surprising as the ShuffleAndSort benchmark hits the sweet spot of move semantics. Whenever the shuffle and sort algorithms swap the positions of two elements, they do a three-way move instead of a three-way copy. The results for the PushBack and EmplaceBack benchmarks are within 1% of each other. Let us compare the different implementations one by one. The difference between "C++98" and "C++11/Copy" is negligible. It is within the measurement variance. The "C++11/Copy" implementation of the PushBack benchmark uses the push_back(const T&) overload, whereas the "C++11/Move" implementation uses the push_back(T&&) overload. The former implementation calls the copy constructor of CTeam, whereas the latter implementation calls the move constructor. This explains the speed-up of nearly 1% when using move semantics. At first glance, it is slightly surprising that we see a similar speed-up of 1% when we use emplace_back instead of push_back for "C++11/Copy" - although there is no move semantics involved. If we think about it briefly, it is pretty obvious though. emplace_back constructs the CTeam object directly into the proper place in the container. It calls neither the copy constructor nor the destructor. For "C++11/Move", emplace_back is slightly faster (0.2%) than push_back. emplace_back only calls the name constructor of CTeam, whereas push_back calls the name constructor, move constructor and destructor. The difference between "C++11/Copy" and "C++11/Move" for emplace_back is negligible. It is within the measurement variance. Accidental Performance Gains I started writing the code examples in C++11. When I made the code C++98 conformant, I had to use the C++98 way for sorting containers and for generating random numbers. When I profiled the C++98 way against the C++11 way, I noticed significant performance differences. 
The following table shows columns "C++98" and "C++11/Move" from the Benchmark table and adds the column "C++11/Opt", which uses move semantics and optimised algorithms. The speed-up is massive: 13% for ShuffleAndSort and 70% for PushBack and EmplaceBack. How can we achieve this speed-up? For ShuffleAndSort, I replaced sort(table.begin(), table.end(), greaterThan); by sort(table.begin(), table.end(), [](const CTeam &t1, const CTeam &t2) { return t1.points() > t2.points() || (t1.points() == t2.points() && t1.goalDifference() > t2.goalDifference()); }); For C++98, the sort algorithm calls the function greaterThan for every comparison. For C++11, it inlines the lambda expression and runs through the inlined instructions of the body of greaterThan for every comparison. The algorithm saves the overhead of calling greaterThan for every comparison. For PushBack and EmplaceBack, the winning change was how the name constructor of CTeam generates random numbers. In C++98, we call the system function rand. srand(p); for (int i = 0; i < statisticsSize; ++i) { m_statistics[i] = static_cast<double>(rand() % 10000) / 100.0; } In C++11, we use the new classes default_random_engine and uniform_real_distribution to generate the random numbers. std::default_random_engine engine(p); std::uniform_real_distribution<double> range(0, 100); for (int i = 0; i < statisticsSize; ++i) { m_statistics[i] = range(engine); } Every time the system function rand is called, it sets up the machinery to generate random numbers, generates a number and then tears down the machinery again. In C++11, we do the setup only once, when creating the engine and range objects. Moreover, we only call a member function (range(engine)) instead of a more heavy-weight system function. Conclusion Equipping our classes with move constructors and assignment operators makes our code sometimes slightly faster and sometimes even a lot faster, but sometimes it doesn't change anything. 
We must understand when we benefit from C++11's new move semantics. The most important factor is whether a class is suited for moving. If the data members of a class are builtin types like int or double, we will not see any speed-up. If the data members are pointers to "big" objects, the move constructor and assignment will copy the pointer instead of the object pointed to. This applies recursively to data members like strings and vectors whose data members are of pointer type. This is why we see a slight speed-up (factor: 1.012) for CTeam objects, whose data members m_name and m_statistics are of type std::string and std::vector, respectively. This slight speed-up multiplies if such objects are moved around heavily as it happens during sort algorithms. The resulting speed-up can be significant as our ShuffleAndSort benchmark (factor: 1.7) shows. Even if a class doesn't support move semantics, we can speed up the insertion of elements by using emplacement (factor: 1.012). Emplacing an element into a container constructs the element directly into the right place in the container. It does not move or copy the newly constructed element into the container. In summary, we can say that using move semantics on suitable classes will speed up operations - some more than others. By using lambda expressions instead of functions in algorithms (e.g., sorting criterion) and by using C++11's new way of generating random numbers, we can speed up our code significantly (by factor 1.15 and 3.4, respectively).
https://embeddeduse.com/2016/05/25/performance-gains-through-cpp11-move-semantics/
In this post, we're going to walk through another common coding interview problem: finding the longest word in a paragraph. This is a really good question, because it's very easy to forget some important details, and the perhaps obvious solution isn't necessarily the best. Walking Through the Problem For this particular problem, we're going to imagine we've been given the following string of lorem ipsum text. For the sake of space, we'll assume that this string is being used as the value for a variable called base_string. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas luctus ipsum et facilisis dignissim. Duis mattis enim urna. Quisque orci nunc, vulputate id accumsan nec, imperdiet sit amet sem. Integer consequat nibh vel mattis elementum. Nam est elit, sodales vitae augue nec, consequat porttitor nibh. Aliquam ut risus vehicula, egestas ligula sed, egestas neque. Fusce hendrerit vel risus in molestie. Morbi molestie eleifend odio vel ullamcorper. Donec at odio libero. Quisque vulputate nisl nisi, ut convallis lorem vulputate et. Aenean pretium eu tellus a dapibus. Ut id sem vulputate, finibus erat quis, vestibulum enim. Donec commodo dui eros, non hendrerit orci faucibus eu. Integer at blandit ex. Duis posuere, leo non porta tincidunt, augue turpis posuere odio, sed faucibus erat elit vel turpis. Quisque vitae tristique leo. Lorem ipsum is just a common dummy text used in typesetting and printing to simulate actual text content. It includes words of various lengths, as well as punctuation, so it's perfect for our purposes. So how do we find the longest word? Since our paragraph is currently a single string, we're probably going to have to find a way to break it up into individual words. This will let us perform a direct comparison between different words, allowing us to find out whether a given word is longer than another. There are a few ways we can perform the actual comparison. 
We can create a variable called something like longest_word set to an empty string, and then iterate over all the words, comparing the length of the current longest_word with the word we're checking as part of this iteration. If the new word is longer, we can replace the value of longest_word with the new word. Repeating this process for every word in the paragraph will leave us with the longest word in the paragraph bound to the longest_word variable. Another option is that we use the max function, which takes in an iterable, and returns a single item from the collection. It will return the single largest member of the collection, but we'll have to provide some configuration to max so that it knows what that means given the context. Both of these solutions potentially have a small issue in that they only return a single word. What if two words are of identical length? Do we provide one of these longest words as a solution, or do we provide a collection of all the longest words? If we have to provide all the longest words, then we're going to need to use a different method. With that, let's move on to our first bit of code for this problem. There is a problem with these solutions, which I'll talk about in a moment. Our First Solution Python includes a handy method for strings called split. You can read about split in the official documentation. split will essentially allow us to divide a string into a list of items using a delimiter string. Whenever this delimiter is encountered in the string, split will create a break point. Everything between these break points becomes an item in the resulting list. We can therefore do something like this: word_list = base_string.split(" ") # ["Lorem", "ipsum", "dolor", "sit", "amet,", ... etc.] Eagle-eyed readers may have already noticed a problem. The comma after amet is included in the corresponding list item. This is the case for all words that have punctuation, and it poses a serious problem. 
Our text actually has two longest words, but one of them is at the end of a sentence, and therefore has an appended full stop. With the punctuation included, this word incorrectly becomes the sole longest word in the paragraph. The problem could be even worse if we have punctuation like quotation marks, since the inclusion of two punctuation characters might make a word longer than the real longest word, despite being potentially a character shorter in reality. This only gets worse as punctuation gets compounded together. It's clear then that we have to do something about all this extra punctuation. We can make use of another string method called strip to take care of the problem. Once again, you can find details on how strip works in the official docs. strip allows us to remove characters from either end of a string, and it will keep removing those characters until it encounters a character which does not match those it was instructed to remove. By default, it removes whitespace characters, but we can provide a collection of punctuation instead. Using a list comprehension, we can therefore iterate over our word_list and correct the punctuation issue: word_list = base_string.split(" ") processed_words = [word.strip(".,?!\"\':;(){}[]") for word in word_list] This is absolutely going to work for our example string, but it's also not a very good solution. For a start, having to use these escape characters is ugly, but more importantly, this group of punctuation is not complete, and it would be very tedious to make it complete. Right now we don't do anything to filter out tildes for example (~), or percentage signs, or hash symbols. There's also another issue we didn't consider at all thus far: numbers. Numbers could easily end up being larger than our longest words, giving us erroneous results. Instead of removing things we don't want, perhaps we can go the other way, adding only the things we do want. Enter RegEx. 
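As an aside before moving on: the standard library already ships the full ASCII punctuation set as string.punctuation, which at least removes the tedium of typing the escaped list by hand (though it still does nothing about digits or in-word punctuation):

```python
import string

# string.punctuation covers all ASCII punctuation, including the
# tildes, percent signs and hashes the hand-written list missed.
word_list = '"Lorem" ipsum, (dolor) amet,'.split(" ")
processed_words = [word.strip(string.punctuation) for word in word_list]
print(processed_words)  # ['Lorem', 'ipsum', 'dolor', 'amet']
```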
Using RegEx RegEx or regular expressions are a means through which we can define patterns to look for in strings. RegEx is a pretty expansive topic, certainly too big to cover here, but there's a tutorial on using RegEx in Python available in the documentation. In order to work with regular expressions in Python, we need to import the re module, so that's our first step: import re Next, we'll take our base_string variable, containing our lorem ipsum text, and provide it as an argument to the findall function that we find in the re module. Along with base_string, we're also going to provide a pattern using the RegEx syntax like so: import re word_list = re.findall("[A-Za-z]+", base_string) RegEx is notoriously cryptic, but our pattern here is relatively simple. It says that we're looking for any letter character in the basic Latin alphabet; we don't care about case; and we want patterns that include one or more of these characters, as indicated by the + symbol. Our resulting word_list variable will now contain a list full of letter-only strings. While traversing our string, any time a character was encountered that wasn't part of our defined pattern, findall considered that the end of a word. Pretty neat stuff if you ask me. Another Minor Problem Unfortunately, it's too early to celebrate just yet. There's once again a problem with our current implementation: punctuation inside words. Consider a situation like this: John's dog hasn't eaten its food. We might make a very good argument that "John's" and "hasn't" are single words, in which case, we have a problem. Right now if we run our code on this sentence we get the following: import re base_string = "John's dog hasn't eaten its food." word_list = re.findall("[A-Za-z]+", base_string) # ['John', 's', 'dog', 'hasn', 't', 'eaten', 'its', 'food'] This might be perfectly fine, but what happens if your interviewer asks you to treat "John's" as one word? 
In that case, "John's" is a 5 letter word with a piece of punctuation. The best answer might be to combine our methods. We can split the string based on spaces, and then check each word for punctuation. This time, however, instead of splitting the string, we'll perform a replacement using another re function called sub. sub takes a pattern, a replacement string for that pattern, and a string to search. import re base_string = "John's dog hasn't eaten its food." word_list = base_string.split(" ") processed_words = [re.sub("[^A-Za-z]+", "", word) for word in word_list] # ['Johns', 'dog', 'hasnt', 'eaten', 'its', 'food'] Here our pattern is exactly the same as before, except we've added a special character, ^, which essentially says match anything which is not in this pattern. As our pattern includes only basic Latin letters, all punctuation is going to be matched by this pattern. Our replacement string is an empty string, which means any time we find a match, the matching characters will simply be removed. Putting this in a list comprehension, we can perform the substitution for every word in the word_list, giving us a list of properly processed words. With that, we can finally start counting the length of the words and find the longest word in the paragraph. Finding the Longest Word As I mentioned before, we have a couple of options for finding the longest word in our list of words. Let's start with the string replacement method, using our original paragraph: import re word_list = base_string.split(" ") processed_words = [re.sub("[^A-Za-z]+", "", word) for word in word_list] longest_word = "" for word in processed_words: if len(longest_word) < len(word): longest_word = word print(longest_word) # consectetur In this implementation, we start with an empty string, and begin iterating over all the words in the processed_words. If the current word is longer than the current longest_word we replace the longest word. Otherwise, nothing happens. 
Since we start with an empty string, the first word will become the longest string initially, but it will be dethroned as soon as a longer word comes along. Instead of this fairly manual method, we could make use of the max function. import re word_list = base_string.split(" ") processed_words = [re.sub("[^A-Za-z]+", "", word) for word in word_list] longest_word = max(processed_words, key=len) print(longest_word) # consectetur In this method we have to pass in the processed_words as the first argument, but we also have to provide a key. By providing the len function as a key, max will use the length of the various words for determining which is of the greatest value. Both of these methods only provide a single word, so how do we get all of the longest words if we have multiple words of equal length? Finding a List of the Longest Words In order to find all of the longest words, we can do a second pass over our processed_words now that we know what the longest word actually is. We can do this using a conditional list comprehension. import re word_list = base_string.split(" ") processed_words = [re.sub("[^A-Za-z]+", "", word) for word in word_list] max_word_length = len(max(processed_words, key=len)) longest_words = [word for word in processed_words if len(word) == max_word_length] print(longest_words) # ['consectetur', 'ullamcorper'] Wrapping Up With that, we've successfully found the longest words in a paragraph. Of course this isn't the only way you could solve this problem, and the specific problem might call for minor variations as well. You may be asked to find the length of the longest word in the paragraph, but if you can do the version shown here, that question should be a breeze. We'd love to see your own solutions to this problem, so get on repl.it and share your solutions with us on Twitter. If you liked the post, we'd also really appreciate it if you could share it with your techy friends. 
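If the interviewer wants in-word apostrophes preserved rather than stripped, a single findall pattern can handle that directly. The (?:'[A-Za-z]+)* extension below is our own variation on the pattern, not something the walkthrough above uses:

```python
import re

def longest_words(text):
    # Letters, optionally continued by apostrophe-joined letter groups,
    # so "John's" and "hasn't" survive as single six-character tokens.
    words = re.findall(r"[A-Za-z]+(?:'[A-Za-z]+)*", text)
    max_len = max(len(word) for word in words)
    return [word for word in words if len(word) == max_len]

print(longest_words("John's dog hasn't eaten its food."))  # ["John's", "hasn't"]
```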
We'll see you again Monday when we'll be releasing part two of our brief look at the itertools module!
https://blog.tecladocode.com/coding-interview-problems-find-the-longest-word-in-a-paragraph/
I wrote a very simple app for Ubuntu for Bad Voltage, the finest podcast in the land. It shows you the list of shows, and lets you play them. Simple. Streaming: there’s no downloading for offline use here, no notifications of new shows; it’s a little app, only. So the first thing you should do is go search for it on your Ubuntu phone and install it. More interestingly, though, I tried to make this a generic app. That is: the actual code which defines this as a Bad Voltage app looks like this: import QtQuick 2.0 import Ubuntu.Components 0.1 import "components" MainView { objectName: "mainView" applicationName: "org.kryogenix.badvoltage" automaticOrientation: false width: units.gu(40) height: units.gu(71) id: root backgroundColor: "black" GenericPodcastApp { name: "Bad Voltage" squareLogo: "" author: "Stuart Langridge, Jono Bacon, Jeremy Garcia, and Bryan Lunduke" category: "Technology" feed: "" description: "Every two weeks Bad Voltage " + "delivers an amusing take on technology, " + "Open Source, politics, music, and anything " + "else we think is interesting." } } To make a similar app for your podcast, just fetch the GenericPodcastApp.qml file from the Bad Voltage app source, include it in your project, and then use the GenericPodcastApp component. Define a name, squareLogo, author, category, feed, and description, and that’s it; you’re done. I’d love there to be a whole bunch of generic components like this. Something where I don’t really have to mind how it works, I just grab it from somewhere, drop it into my project, and use it. A QR code scanner, a QR code generator, a circular dial widget, an online high-score table. Obviously it’s possible to make reusable components right now, but there’s no market in them; what I want is something almost like the Ubuntu app store but for developers, where I can look for components, grab one, and insert it into my project, right from the Ubuntu SDK editor. 
One button-push should update any of these components that I have in my project; that way, if someone fixes a component I can rebuild my app to include it with ease. What I really want is the Ubuntu component equivalent of npm install, I think. The ultimate destiny of any such component is to be so useful to so many people that the Ubuntu core team lift it out of the component store and into the SDK, but it’d be great if it were easier to do this before things get to that stage, and the SDK can’t contain everything. I see no reason why some of these components couldn’t be open source and some sold for money, so there’s potentially an income stream there for Ubuntu developers who make reusable things. GenericPodcastApp is hugely trivial, but an example of the sort of thing that I think could develop. Any component which doesn’t use anything very Ubuntu-specific would work on other QML platforms too, and vice-versa, so the market could even be usable by developers across many platforms.
https://www.kryogenix.org/days/2014/02/15/bad-voltage-apps-and-generic-components-for-ubuntu/
DOHOOKS(9) BSD Kernel Manual DOHOOKS(9) dohooks - run all hooks in a list #include <sys/types.h> #include <sys/systm.h> void dohooks(struct hook_desc_head *head, int flags); The dohooks() function invokes all hooks established using the hook_establish(9) function. Hooks are called in the order of the TAILQ that head points to, however hook_establish(9) can put the hooks either at the head or the tail of that queue, making it possible to call the hooks either in the order of establishment, or its reverse. The flags can specify HOOK_REMOVE to remove already processed hooks from the hook list and HOOK_FREE to also free them. In most cases either no flags should be used or HOOK_REMOVE and HOOK_FREE at the same time, since just HOOK_REMOVE will drop the only reference to allocated memory and should only be used in situations where freeing memory would be illegal and unnecessary. This function is used to implement the doshutdownhooks(9) as well as the dostartuphooks(9) macros. doshutdownhooks(9), dostartuphooks(9), hook_establish(9) MirOS BSD #10-current July.
http://www.mirbsd.org/htman/sparc/man9/dohooks.htm
Hi, I'm having a problem with this file. It's meant to shuffle 3 cups but I get this error: TypeError: Error #1010: A term is undefined and has no properties. at cupsc_fla::MainTimeline/frame1(). It was working in AS2 but I changed it around to get it to work for AS3 but now it won't work. Here's the code: import fl.transitions.Tween; import fl.transitions.easing.*; import flash.events.*; var posArray:Array = new Array(cup0.DisplayObject.x, cup1.DisplayObject.x, cup2.DisplayObject.x); move_mc.addEventListener(MouseEvent.CLICK, pressBtn); shuffle = function (targetArray) { for (i=0; i<targetArray.length; i++) { var tmp = targetArray[i]; var randomNum = Math.random(targetArray.length); targetArray[i] = targetArray[randomNum]; targetArray[randomNum] = tmp; } } function pressBtn(e:Event) :void{ shuffle(posArray); var tween0 = new Tween(cup0, "_x", Strong.easeIn, cup0.DisplayObject.x, posArray[0], 1.5, true); var tween1 = new Tween(cup1, "x", Strong.easeIn, cup1.DisplayObject.x, posArray[1], 1.5, true); var tween2 = new Tween(cup2, "x", Strong.easeIn, cup2.DisplayObject.x, posArray[2], 1.5, true); } I'm stumped! There are a number of potential issues with the code as you show it. cup0.DisplayObject.x, ??? what are these shuffle = function... change that to be written just like your pressBtn function Math.random(nothing goes in here)... should be Math.random()*targetArray.length, although you want to get an integer value out of it, not a decimal value As far as the 1010 error goes, go into your Flash Publish Settings and select the option to Permit Debugging. That can help by adding the line number of the offending code to the error message.
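Putting the respondent's suggestions together, a corrected sketch might look like this (untested here; "cups" are assumed to be plain movie clips with an x property, and the Tween instances should be kept in member variables in real code so they aren't garbage-collected mid-animation):

```actionscript
import fl.transitions.Tween;
import fl.transitions.easing.*;
import flash.events.MouseEvent;

// Plain x properties (there is no DisplayObject member on a clip),
// a named function instead of "shuffle = function", and an integer
// random index from Math.random() * length.
var posArray:Array = [cup0.x, cup1.x, cup2.x];

function shuffle(targetArray:Array):void {
    for (var i:int = 0; i < targetArray.length; i++) {
        var randomNum:int = int(Math.random() * targetArray.length);
        var tmp:* = targetArray[i];
        targetArray[i] = targetArray[randomNum];
        targetArray[randomNum] = tmp;
    }
}

move_mc.addEventListener(MouseEvent.CLICK, pressBtn);

function pressBtn(e:MouseEvent):void {
    shuffle(posArray);
    // AS3 tweens animate "x"; "_x" was the AS2 property name
    new Tween(cup0, "x", Strong.easeIn, cup0.x, posArray[0], 1.5, true);
    new Tween(cup1, "x", Strong.easeIn, cup1.x, posArray[1], 1.5, true);
    new Tween(cup2, "x", Strong.easeIn, cup2.x, posArray[2], 1.5, true);
}
```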
https://forums.adobe.com/message/4324985
Passcert offers you the latest EMC exam E20-020 dumps to help you best prepare for your test and pass your test EMC exam E20-020 dumps Cloud Infrastructure Specialist Exam for Cloud Architects EMC E20-020 exam is very popular in EMC field, many EMC candidates choose this exam to add their credentials, There are many resource online to offering Latest EMC exam E20-020 dumps, Through many good feedbacks, we conclude that Passcert can help you pass your test easily with Latest EMC exam E20-020 dumps, Choose Passcert to get your EMC E20-020 certification. E20-020 exam service: : Free update for ONE YEAR PDF and Software file Free demo download before purchasing 100% refund guarantee if failed Latest EMC exam E20-020 dumps are available in pdf and Software format. This makes it very convenient for you to follow the course and study the exam whenever and wherever you want. The Latest EMC exam E20-020 dumps follows the exact paper pattern and question type of the actual E20-020 certification exam. it lets you recreate the exact exam scenario, so you are armed with the correct information for the E20-020 certification exam. The safer , easier way to help you pass any IT exams. 1: *In the ITaas model, corporate IT offers a menu of SaaS, PaaS and IaaS options for business users via acentralized service catalog. Business users are free to pick and choose cloud services that corporate IT has vetted, or provide themselves. These users can make informed decisions based on service pricing and SLAs, and can in many cases can provision services on their own. *ITaas can be implemented through new consumption models leveraging self-service catalogs offering bothinternal and external services; providing IT financial transparency for costs and pricing; offering consumerized IT – such as bring your own device (BYoD) – to meet the needs of users. All of which simplify and encourage consumption of services. 
*A service-level agreement (SLA) is a part of a standardized service contract where a service is formallydefined. Particular aspects of the service – scope, quality, responsibilities – are agreed between the service provider and the service user. References: 2.Which infrastructure does VCE Vblock represent? A. Brownfield B. Traditional C. Hyper-converged D. Converged: 3? A. Server hardware is compatible with PaaS solution B. Server hardware is on the hypervisor vendor’s compatibility list C. Server hardware is supported by the service catalog solution D. IaaS instances have the compatible drivers for the physical server hardware Answer: D 4.What is an advantage of using a brownfield infrastructure? 2 / 3 The safer , easier way to help you pass any IT exams. A. Avoids upgrades to existing infrastructures B. Promotes staff familiarity with technology C. Enables migration to new technologies D. Avoids older and less efficient processes Answer: B Explanation: Brownfield fund investments involve established assets in need of improvement. Investing at the operational stage of a project is brownfield investment. This may be preferred as completion and usage risks should be reduced because the project will have been operating for some time. References: 5.An organization has internal applications that require block, file, and object storage. They anticipate the need for multi-PB storage within the next 18 months. In addition, they would prefer to use commodity hardware as well as open source technologies. Which solution should be recommended? A. Cinder B. Hadoop C. Swift D. Ceph Answer: C Explanation: OpenStack Swift Object Storage on EMC Isilon EMC Isilon with OneFS 7.2 now supports OpenStack Swift API. Isilon is simple to manage, highly scalable (up to 30PB+ in a single namespace) and highly efficient (80%+ storage utilization) NAS platform. References:
http://www.slideserve.com/passcert/emc-exam-e20-020-dumps
Most will know that C++ mangles external names in a compiler specific way such that they encode the types of function parameters and the nesting of classes and namespaces. People are probably less aware that most C compilers also mangle some names to make them unique inside compilation units. Namely, most compilers will create “local” symbol for static variables, and since the naming of static variables in function scope is not unique, they have to mangle the names. gcc, e.g, does that by appending a dot and a unique number to the name. Since the dot is not part of any valid identifier, this makes sure that there is no name clash. Let us have a look at two simple variables: static int f = 0; static int e = 42; and the symbols in the compiled object that are visible with nm. For gcc I have 0000000000000000 d e.1504 0000000000000004 d f.1503 so this looks something like the original name (the human readable part) followed by a dot and a series number. (The long numbers are the offsets of the objects in the object file, the “d” says that it is an internal symbol.) icc does it differently 0000000000000004 d e.3.0.2 0000000000000000 d f.3.0.2 so here we have, as above, the original name and then followed by something that is an identification of the scope inside the compilation unit. Observe that here icc relies on the fact that e and f are unique in their respective scopes, we come back to that later. Things become more obscure when the compiler supports universal characters in identifiers. Depending on the environment, the compiler has to mangle such identifiers, since e.g the loader might not support them in the symbol table. icc chooses something like replacing such a character by _uXXXX where XXXX is the hex representation of the character. In that case (icc) this results in two subtle compiler bugs. 
First, this mangling uses valid identifiers that the user is allowed to use, so they may clash for global symbols with identifiers from the same compilation unit or even from other units. int o\u03ba; int o_u03ba; The symbols that gcc produces for these objects (placed in file scope) are straightforward, namely o_u03ba and oκ, which also shows you that the Unicode character with position 03ba is a Greek kappa. In contrast to that, icc just has the same external name o_u03ba for the two objects. Even if these two are placed in different compilation units, when they are linked together, there is a clash. Second, icc even mixes up its own internal naming. All goes well as long as the local variables that we declare are auto, e.g. in the local scope of some function int \u03ba; int _u03ba; Here _u03ba is a valid name inside a function, one leading underscore followed by a non-capitalized letter is allowed. Now as long as we define the variables like that, icc internally distinguishes them and all goes fine. But if we declare them as static, icc's mangling convention backfires. The internal names are both folded on the name _u03ba.83.0.1. Remember that icc needs the "real name" of the variables to distinguish its local statics. The code still behaves somewhat as expected if the compiler can optimize the access to the static location away and hold the values in temporaries. It then crashes at seemingly random points when the optimizer can’t keep track of the value anymore or when the variables are also declared volatile. 
#include <stdio.h>

int o\u03ba;
int o_u03ba;

int main(int argc, char *argv[]) {
  printf("addresses %p and %p\n", (void*)&o\u03ba, (void*)&o_u03ba);
  static int volatile \u03ba;
  static int volatile _u03ba;
  \u03ba = !!argc;
  _u03ba = !argc;
  printf("values %d and %d\n", \u03ba, _u03ba);
  return \u03ba == _u03ba;
}

This little example program shows correct output with gcc:

addresses 0x60103c and 0x601038
values 1 and 0

and goes fundamentally wrong with icc:

addresses 0x604a44 and 0x604a44
values 0 and 0

Hi Jens. Great article. I need a way to build deterministically, where the unique numbers added by gcc's C compiler are always the same for the same static variable. Do you know if this is possible? Regards, Micke.

Comment by Micke — November 25, 2011 @ 12:59

Hm, I am not sure what you want to achieve. Folding several local static variables on top of each other? I don't think you should do such a thing; it can only cause you headaches. In any case, in C, static variables should always be defined in .c files. So if you have several functions that "share" the same variable, why not just have a global static variable in the corresponding C file, with a "good" name that makes clear that its purpose is to be shared between the functions. Jens

Comment by Jens Gustedt — November 25, 2011 @ 13:14

That's true: it's not that universally known that C compilers can also mangle names. I guess they didn't some long time ago. So, thanks for reminding us about it.

Comment by Alexey Ivanov — June 8, 2012 @ 10:33

Even before we had support for complicated character sets, C compilers already did rudimentary mangling, I think: prefixing names with an underscore, shortening long names, mapping to lower or upper case.

Comment by Jens Gustedt — June 8, 2012 @ 11:13

Yes, it's true…

Comment by Alexey Ivanov — June 13, 2012 @ 13:27
http://gustedt.wordpress.com/2011/06/24/name-mangling-in-c/
Pointers and arrays are intrinsically related in C++.

Similarities between pointers and fixed arrays

In lesson 6.1 -- Arrays (part i), … int(*)[5]). It's unlikely you'll ever need to use this.

Revisiting passing fixed arrays to functions

Back in lesson 6 … when ptr is dereferenced, the actual array is dereferenced!

32
4

I do not understand at all how 32 and 4 are arrived at.

Hi Reader! On the machine used to run the code above, sizeof(int) is 4 and sizeof(int *) is 4 too. The sizeof call in line 13 returns the sum of the sizes of all elements in the array, that is 8 * 4 = 32 (8 elements, 4 bytes each). When passing an array as an argument, it will decay to a pointer. This means that the sizeof operator will no longer return the sum of the sizes of all elements, but the size of the pointer instead. Since the array parameter is an int*, the sizeof operator returns 4.

Hi Alex, if array points to the first element, why does &array give the same address as the first element? Shouldn't the "address of" operator give array's own address?

In C++, built-in arrays don't have any overhead (they just have different type information). Therefore, the start address of the array is identical to the start address of the first element!

Alex, you have quoted: "because the pointer doesn't know how large the array is, you'll need to pass in the array size as a separate parameter anyway (strings being an exception because they're null terminated)." Please can you provide an example of strings holding this exception?

Hi price GUPTA! Let's look at what an array is in memory. The '\0' at the end of "Hello" makes the difference: it's a byte with the value 0. If you look at an ASCII table you'll see that 0 isn't a printable character; it's used to mark the end of a string. This way, your program can keep reading characters from memory until it finds 0, and then it knows it has found the end of the string. For normal arrays this can't be done, because 0 could just as well be an element of the array.
The strlen function makes use of this: you pass it your char *, in this case szText, and it walks through the string until it finds 0. With every step it takes, it increases a counter by one. By the time the 0 has been found, the counter holds the string's length.

Here's code to print an integer array. We need to pass the array's length, since we can't tell where the array ends. But for strings, we can rely on the presence of the '\0' char (value 0) to denote the end of the array. So if we're just iterating through the array, we can iterate until we hit the '\0'.

Hi Alex, thanks a lot for the tutorial. At the very first example of this part, I feel like instead of

std::cout << "The array has address: " << array << '\n';

it should be

std::cout << "The array has value: " << array << '\n';

I haven't read the rest. I hope I am not bothering.

Well, since arrays decay to pointers when evaluated, in this context array decays to a pointer to the array. And since pointers hold addresses, we can say the value of a pointer is an address. So I think either is probably appropriate.

Hello dear Alex, and thank you very much for this great site. First of all I apologize for the weakness of my English. I want to declare a function prototype for printing a 2D array in a header file (and define a function for permanent use). I searched the Internet but I didn't get the appropriate answer, because they mostly printed the array in the main function, and I didn't want that. If I use the following syntax:

(int arr[][], int row, int col)

(note: this is the method that came to my mind for having row and column values in the function), the compiler gives this error:

error: multidimensional array must have bounds for all dimensions except the first

Yes, it's true, but I cannot put a number in the second [] in a header file! What should I do to fix this problem? What about using pointers? Thanks again and good luck...
Since it was said "Arrays decay into pointers when passed to a function", this is the only way I could do it without causing an error: a declaration in the header file, an implementation in the .cpp file, and a call in the main file. Unfortunately, in this code I must use *arr while calling print2DArray... any other way that I tried resulted in an error (e.g. int**). Thanks a lot. 😀

Read lesson 6.14 and see if that's helpful in figuring it out. If not, ask again there.

Thank you very much...

I am running these two statements:

std::cout << sizeof(int);
std::cout << sizeof(int*);

Both of these should print 4 bytes (if the compiler uses 4 bytes for an int), but sizeof(int) is printing 4 and sizeof(int*) is printing 8.

You are using a 64-bit address space, so your integer pointers are 8 bytes in size, because your addresses are 64 bits in size.

Now I understand why passing an array to a function didn't work! Hahaha... I tried to create a function to accept an array for the Chapter 6, lesson 3 quiz, and I didn't understand the result, so I had to dump the function and create the instructions inside main(). But I modified it now and it works fine.

Chapter 6 Lesson 3 Quiz 2:

void printArrayAndIndex(int *array, int number, int arrayLength)
{
    for (int i = 0; i < arrayLength; ++i) // prints the array using a pointer
        std::cout << *(array+i) << " ";
    std::cout << "\n";
    for (int i = 0; i < arrayLength; ++i) // find the index of the given number using a pointer
        if (*(array+i) == number)
            std::cout << "Your number is at " << i << " index. \n";
}

Here also, the array being passed doesn't decay, no? Because then we wouldn't have been able to change all of the elements of the array.

The array does decay into a pointer when passed to the function. However, we can still use the pointer to modify the original array argument. It means that the function parameter is also a pointer that is capable of holding the memory address of the array being passed!
Output:
0x60ff08 0x60ff08 0x60ff08

Output:
0x60ff0c 0x60ff0c 0x60ff08

Why are the outputs different? Is it because it (in the 2nd case) produces the memory address of the ptr variable rather than the variable it is pointing to? Plz explain!

In the last example, you're printing the address of variable ptr, not the address that ptr is holding (which is array's address). It should be consistent if you change to this:

"Is it because it (in the 2nd case) produces the memory address of the ptr var rather than the var it is pointing to?" I said that btw! But thanks 🙂! No, I just wanted to experiment with it!

The confusion is primary

The confusion is primarily

Fixed, thanks!

a) "When evaluated, a fixed array will “decay” into a pointer to the first element of the array. All elements of the array can still be accessed through the pointer, but information derived from the array’s type can not be accessed from the pointer." What do you mean by "All elements of the array can still be accessed through the pointer"? How could this happen?

b) "Taking the address of the array returns a pointer to the entire array. This pointer also points to the first element of the array, but the type information is different (in the above example, int(*)[5])." What do you mean by "int(*)[5]"?

1) I mean a pointer pointing at the first element of an array can be used just like the array itself can. For example: … This works because of pointer arithmetic.

2) Don't worry about it for now. It's not something you need to know to progress.

Thank you Alex!

Hi Alex, the last line (std::cout << ptr[1]; // will print 3 (value of element 1)) is so strange to me. In this chapter, I only see std::cout << ptr and std::cout << *ptr in the examples.

We cover that usage in the very next lesson.

I've finished 6.8B and I don't see it???? I don't want to miss it. Thanks, have a great day.
I was referring to 6.8A, particularly the subsection titled "Pointer arithmetic, arrays, and the magic behind indexing".

Based on what I've learned from 6.8A, it would make sense to me if the last line looked like the following:

int array[] = { 5, 3, 6, 7 };
int *ptr = array;
std::cout << array[1]; // will print 3 (value of element 1)
std::cout << *(ptr + 1); // will print 3 (value of element 1)

Oh ok, I think I understand it now. array[n] is the same as *(array + n); therefore, *(ptr + 1) is the same as ptr[1]. Yah! To me, ptr[1] seemed very strange at first because I did not see it in the lesson. It would be nice if it was somehow mentioned in the examples.

Why???? It still printed:

#include <iostream>

struct new1
{
    int a;
};

void printsize(new1 arr[]);

int main()
{
    new1 arr[5];
    printsize(arr);
}

void printsize(new1 arr[])
{
    std::cout << sizeof(arr) << "\n";
}

You created an array of structs rather than a struct containing an array.

// it is worth noting that arrays that are part of structs or classes do not decay when the whole struct or class is passed to a function.//

I don't understand. Why did you say arrays do not decay when a struct is passed to a function, when in the above code arr still has size 4, given that it decays into a pointer... sorry for my bad English .))

Because your code isn't doing what I'm suggesting. 🙂 An array that is inside a struct will not decay. An array of structs will still decay.

Oh thanks Alex, you are very enthusiastic.

Is there any way of passing an array into a function without decaying it, except using structs or classes or some other thing, just passing it through main()?

Yes, there is. You can pass an array by reference. But the syntax is messy. A better solution is just to use std::array. We cover std::array later in this chapter.
Alex, I love your site so much; it has helped me a lot. I have one off-topic question. I searched a little bit on the internet today and found out that Visual Studio 2012+ doesn't support Windows Forms applications. I also found out there is a way to work around it, but I don't have a Tools menu or things like that (if you understand me). So my questions are: how do I create real-life applications for Windows with C++, and where else can I use my C++ knowledge for creating some real-life programs? Thank you for everything, you are the real HERO <3

My understanding is that Windows Forms applications are part of Microsoft .NET, so you need to use managed C++ to create one. Unfortunately, I don't have any knowledge on that topic, as I've never used .NET (outside of some dabbling with C#).

Hi Alex, I have a question for you. Given this code: … I know this will print 20 (NumberOfElements * sizeof(int)). But I want to ask: how can it know the size of the array? 'Who' is holding the information about the array's length? Thanks.

When you compile your program, the compiler builds a symbol table full of variable names and types as it encounters variables. The sizeof operator is resolved at compile time, and the size itself can be derived from the variable's type. So when you compile this program, the compiler will replace "sizeof(array)" with 20 (which it knows because it knows that array is an array of 5 integers).

Thank you very much! You are amazing! Thanks Alex.

Why is my code failing?

int* test();

int main()
{
    int* ptr = test();
    for (int i = 0; i < 3; i++)
        cout << *(ptr+i) << " ";
}

int* test()
{
    int arr[3];
    arr[0] = 10;
    arr[1] = 20;
    arr[2] = 30;
    return arr;
}

I am getting some garbage output instead of the actual values in the array.

Variable arr is a local variable. That means it goes out of scope (and is destroyed) at the end of test(). However, you're passing the address of arr back to main(). By the time main() gets the address, arr has already been destroyed.
Consequently, main() is accessing a pointer that is now pointing to garbage.

I think there's an extra word this time: "and we dereference the pointer to get the value at that the memory address the pointer is holding". I think you meant to drop the "that".

Thanks!

First I'd like to thank you for this great site teaching everything about C++. But my problem is that I want to learn how to write programs using a command prompt and eventually become good enough to code my own video games. I have more than 15 hours a day and this is great. Please provide a learning path or what to do, etc., so I'll have a good perspective of it all. I believe you have my email. Thanks!!!! krisgoku2@hotmail.com

Hey Kris, I am glad to see that you are passionate about learning how to code bottom-up; that's necessary to start learning anything, generally. I am not an expert, but I might be able to share some suggestions that I received from many programmers over the years:

1 - Get your hands dirty: just start coding. This site can help you with that. Do the exercises/quizzes at the end of the chapters to begin with. Learn a concept and apply it. Man, everyone starts with a hello world. Every single programmer I came in touch with told me the same thing. Follow the best practices in every piece of code you write until it becomes second nature. So, to quote a Fender Squier Affinity series slogan, "stop dreaming, start playing" 🙂.

2 - Learn to interpret: another piece of advice I got when I asked a programmer from a compiler team (Hey Bryan! in case he ever reads this): "What one piece of advice would you give me to become a good programmer like you?" The answer was: "Learn to understand what others have written; that's one of the most useful skills for becoming good at programming." The reason you have to grow that skill is because, apart from writing code, most of the time is spent debugging and maintaining a program.
You will need to spend a considerable amount of time looking at and fixing others' code, so it's important to grow that skill early on. Apart from that, some programs are well organized and some are not at all; both will help you learn and become a better coder. Good code is efficient, readable, re-usable and easily maintainable. On this site, in the comment section, there are so many coders posting their queries; look at them, try to understand them, and see if you can answer them; if yes, answer them. Trying to explain to someone else increases your own understanding. More importantly, going through the comment section answers a lot of questions you might have, and maybe opens a new dimension altogether that you never thought about. Then there is Stack Overflow.

3 - Aim small, miss small (from The Patriot): don't expect to learn everything in one day. Start slow, have small goals, don't rush. Since you have a lot of time, make sure you complete each chapter properly. Don't jump around, because these lessons have been organized after a lot of thought about how to do it. These concepts are very important and will teach you the core of C++. Once done, you will be capable enough to absorb new concepts and more advanced topics, which will require time and lots of coding experience. Best of luck. Happy coding 🙂 Regards, Raquib

I have a query. I am checking the below program:

#include <iostream>

int main()
{
    using namespace std;
    int array[5] = {1,6,9,5,7};
    std::cout << &array << "\n" << &array[0];
    std::cout << '\n';
    std::cout << *(&array) << "\n" << *(&array[0]);
    return 0;
}

When I run this program, I get the address of the first element of the array in both cases (for the first cout). I understand the reason. As you rightly said: "The only difference is the type information that is returned. A pointer to the first element would have type int*, whereas a pointer to the whole array would have type (int[5] *).". However, for the third cout, I am trying to dereference "&array" and "&array[0]".
So while dereferencing: for "&array[0]" I got the correct result, i.e. 1. But for "&array", I got the memory address of "array" again after dereferencing, instead of the array element. I couldn't understand the reason for this. Could you please explain why *(&array) evaluates to the memory address of the array rather than the value of the array?

Remember that array decays to a pointer of some kind, and [] has an implicit dereference. So:

*(&array[0]) evaluates array[0], giving you the value of element 0, which you then take the address of (giving you a pointer), which you then dereference to get the value of element 0 again.

*(&array) evaluates &array, which takes the address of the pointer to the array, essentially giving you a pointer to a pointer. You then dereference this to get back the original pointer to the array. This is why you're getting an address here.

I think what you meant to do was this: … On the left-hand side, *array dereferences the pointer to the array, giving you the value of element 0.

What do you think of the following syntax for passing arrays to a function? … It generally won't work, unless arraySize is a macro. arraySize has to be an integer literal; strangely enough, constexpr variables don't seem to work here.

I remember there being some obscure syntax about arrays. Something like: myArray[n] is equivalent to *(myArray + n), which is equivalent to *(n + myArray), such that a statement like n[myArray], where n is an integer index, is actually valid syntax for accessing the element at index n. Not that you'd want to use such an odd thing.

This is covered in the very next lesson. 🙂
http://www.learncpp.com/cpp-tutorial/6-8-pointers-and-arrays/comment-page-2/
30 May 2012 19:43 [Source: ICIS news]

WASHINGTON (ICIS)--US pending home sales fell sharply in April from March, an unexpected downturn following several months of improvement, and the drop helped trigger a steep decline on Wall Street on Wednesday.

The National Association of Realtors (NAR) said that its pending home sales index (PHSI) fell by 5.5% in April to a reading of 95.5, compared with the March level of 101.1. The downturn came just a month after NAR cited a recent string of monthly improvements in pending sales as evidence that “the housing market has clearly turned the corner”.

The unexpected drop in pending sales was seen as particularly worrisome for the long-troubled US housing market.

In the wake of April’s pending sales index downturn, the Dow Jones Industrial (DJI) average fell more than 160 points in midday trading, wiping out gains seen on Tuesday. The Dow decline also was attributed to renewed concerns among traders about the ongoing eurozone crisis and a possible sovereign debt default.

However, NAR chief economist Lawrence Yun dismissed the April downturn in pending home sales as just a “one-month setback” that stands in contrast to “many months of gains”.

Yun said that US housing market conditions are fundamentally improving, and he noted that April’s pending sales index of 95.5 is 12 points, or 14.4%, above the April 2011 index measure of 83.5.

“Home contract activity has been above year-ago levels now for 12 consecutive months [and] the housing recovery momentum continues,” he said.

“Housing market activity has clearly broken out at notably higher levels and is on track to see the best performance since 2007,” Yun said. That year marked the beginning of the downturn in the US housing market.

“All of the major housing market indicators are expected to trend gradually up,” he added.

The housing industry, especially new home construction, is a key downstream consuming sector for a broad range of chemicals, resins and derivative products.
The April pending home sales decline is one among a mixed bag of recent housing sector indicators. On Tuesday, Standard & Poor's (S&P) reported that … And despite March declines in sales of both new and existing …
http://www.icis.com/Articles/2012/05/30/9565652/us-pending-home-sales-fall-in-april-rattling-wall-street.html
On Sun, 29 Apr 2001, Jeremy Quinn wrote:

> …

Well, you are talking about taglibs. We are talking about XSP, and that means the elements available under the XSP namespace. I still don't think that anything like <xsp:element> or <xsp:expr> or most of the other elements makes any sense in an Action, and thus XSP is of no use in Actions. It is the wrong markup. Of course you can tell us that the fp or esql logicsheet can be of some use in an Action, but then it's not the XSP markup you mean, it's the Java code in there, right?

> It would be extremely useful to be able to do any events, update and
> setting up in Actions, controlled by the SiteMap, and then use Output tags
> within the XSP Page.

I'm not sure I understand you. I already can, and do, let the sitemap control the events and let it dispatch them among Actions. I already use Actions to update my model, and I use Actions to update/create my business objects, which get output later on in the pipeline with the help of XSP pages. So what more do you want?

Giacomo
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200104.mbox/%3CPine.LNX.4.31.0104291833550.19091-100000@lap1.otego.com%3E
SOAP faults and Webservice faults from 3rd Party webservices

There are several blogs and discussions about capturing and handling SOAP faults. This blog builds on them to show how SOAP faults, in addition to web service faults, can be captured in SAP PO/PI when invoking a 3rd-party web service.

Requirement: A 3rd-party web service returns errors as web service faults and SOAP faults. Web service faults can be handled easily by adding a fault message to the service interface and adding a fault mapping. However, SOAP fault messages need to be explicitly captured in PO/PI.

Solution:

1. In the SOAP receiver communication channel, check "Do not use SOAP envelope". Doing this ensures that the entire SOAP message, including Envelope, Body and Fault, can be captured in PO/PI.

2. Create a simple XSLT mapping for adding the SOAP envelope (since this gets stripped off by the setting in step 1) and use it after the request mapping, so that the request message has the necessary SOAP envelope when the web service is invoked.

3. Create a custom XSD which contains both the fault message and the response message. A sample message in the external definition from the imported XSD is shown below.

4. Use the above external message as the response message in the service interfaces and the response mapping.

5. If there are namespaces in the response message, e.g. "WebServiceFault" has a namespace attached to it, these can be removed by using the XMLAnonymizerBean in the receiver SOAP communication channel.

6. Also, in the receiver SOAP communication channel, set the parameter XMBWS.NoSOAPIgnoreStatusCode = true so that the receiver SOAP adapter ignores the HTTP status code when "Do not use SOAP envelope" is used.

7. When the web service scenario is tested with the above settings, all the fault messages can be captured in PO/PI and you can decide on the further course of action.
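The envelope-adding XSLT from step 2 can be very small. The following is a sketch, not taken from the blog: the SOAP 1.1 namespace and the straight copy of the payload into the Body are assumptions, and the mapping must match the target service's actual SOAP version and payload structure.

```xml
<!-- Hypothetical minimal XSLT: wraps whatever payload the request
     mapping produced in a SOAP 1.1 envelope. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <xsl:template match="/">
    <soap:Envelope>
      <soap:Body>
        <!-- copy the payload's root element unchanged -->
        <xsl:copy-of select="/*"/>
      </soap:Body>
    </soap:Envelope>
  </xsl:template>
</xsl:stylesheet>
```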
Note: Since both the response and the fault are returned to the response mapping, you can further split them here, or customise the mapping based on specific requirements.

Sample error message returned by the web service with the above solution: [screenshot omitted]

Sample response message returned by the web service with the above solution: [screenshot omitted]

References: 856597 – FAQ: XI 3.0 / PI 7.0/7.1/7.3 SOAP Adapter

Thanks a lot!!! This helped me to solve an issue that I was struggling with for years... To hell with SAP PI.

Hi Chandra, I hope this blog will solve my issue, but please can you detail step 3? My WSDL has a request and response but no fault, so in this case do I need to have the response and fault as part of the SOAP response?

Hi Prabhu, apologies for such a delayed reply. Step 3 is a single custom XSD to cater for the fault message and the response message. If you do not need to capture the fault message, you do not need the custom XSD; you can directly use the WSDL response. Hope this clarifies. Regards, Chandra

Hi Chandra, do you have the complete example? I am getting the same error and cannot solve it. Could you send screenshots of the operation mapping, service interface and message mapping, to see how they were configured? The scenario I have is PROXY to SOAP, and it is the receiver channel that returns the fault. Regards,
https://blogs.sap.com/2015/07/29/soap-faults-and-webservice-faults-from-3rd-party-webservices/
The Image Packaging System publication client, pkgsend, allows the publication of new packages and new package versions to an image packaging repository. Each publication is structured as a transaction. Transactions may contain actions, as described in the section Actions in IPS. The pkgsend(1) command supports the following subcommands.

Note - The default repository server is … To specify a different repository server, use the -s repository-server option with the pkgsend subcommands.

open

Begins a transaction on the package specified by pkg_fmri. A transaction_id is output when the command executes successfully.

Syntax: pkgsend open -en pkg_fmri

-e - Use this option to prepend "export PKG_TRANS_ID=" to the transaction_id. This can be used by the shell to evaluate the output and set the environment variable PKG_TRANS_ID. This environment variable can be used when executing future pkgsend commands within the same transaction.

For information about FMRIs, see the Solaris Basic Administration Guide.

add

Adds an action to the current transaction.

Syntax: pkgsend add actions pkg_fmri

include

Adds the multiple actions present in each manifest file to the current transaction. Each line in the file should be the string representation of an action. Do not use this subcommand together with the add, open or close subcommands.

Syntax: pkgsend include [-d basedir] filename

import

Adds each given bundlefile into the current transaction. An SVR4 package is an example of a bundlefile.

Syntax: pkgsend import [-T pattern] bundlefile

Use the following shell matching rules with the pattern option to add the timestamp of a file to that file's action, if the basename of the files in the bundle match the optional pattern(s).

Note - The basename refers to the last component of a pathname. For example, in the path /usr/bin/ls, the basename is ls.

* - Matches everything.
? - Matches any single character.
[seq] - Matches any character in sequence.
[!seq] - Matches any character not in sequence.

Note - When a timestamp is added to a file's actions, the file will be installed with precisely that timestamp, regardless of the actual time of installation. This is particularly useful in cases where the software requires a particular timestamp on the files it reads. For instance, Python wants the installed executable .py files to have the same timestamp on the filesystem as is recorded in their compiled versions (.pyc).

close

Closes the current transaction.

Syntax: pkgsend close [-A]

The -A option abandons the current transaction.

Note the following limitations of using the pkgsend command:

Use the pkgsend command only if one or two packages at a time are being published to the repository.

The pkgsend command does not resolve dependencies. If more than one package is being published to the repository, the user must manually resolve all package dependencies.

Note - When you are publishing a newer version of a package to the repository, specify the FMRI of the package correctly. The new version of the package will coexist with any prior version already in the repository.
https://www.linuxtopia.org/online_books/opensolaris_2008/IMGPACKAGESYS/html/pkgsend.html
#if BYTE_ORDER == BIG_ENDIAN

/* Copy a vector of big-endian int into a vector of bytes */
#define be32enc_vect(dst, src, len) \
        memcpy((void *)dst, (const void *)src, (size_t)len)

/* Copy a vector of bytes into a vector of big-endian int */
#define be32dec_vect(dst, src, len) \
        memcpy((void *)dst, (const void *)src, (size_t)len)

#else /* BYTE_ORDER != BIG_ENDIAN */

/*
 * Encode a length len/4 vector of (int) into a length len vector of
 * (unsigned char) in big-endian form. Assumes len is a multiple of 4.
 */
static void
be32enc_vect(unsigned char *dst, const int *src, size_t len)
{
        size_t i;

        for (i = 0; i < len / 4; i++)
                be32enc(dst + i * 4, src[i]);
}

/*
 * Decode a big-endian length len vector of (unsigned char) into a length
 * len/4 vector of (int). Assumes len is a multiple of 4.
 */
static void
be32dec_vect(int *dst, const unsigned char *src, size_t len)
{
        size_t i;

        for (i = 0; i < len / 4; i++)
                dst[i] = be32dec(src + i * 4);
}

#endif /* BYTE_ORDER != BIG_ENDIAN */

(This fragment needs <string.h> for memcpy; BYTE_ORDER, be32enc and be32dec come from <sys/endian.h> or a platform equivalent.)

It creates the hash consistently under Ubuntu and seems to work well. It creates the same hash consistently under Raspbian (as expected), this being the same hash as the one under Ubuntu. Under Windows, the same code produces a hash that is consistent but different from the Linux versions, which makes the pfiles non-portable. I've checked the input char array in the debugger in both the Linux and Windows versions, and the input (e.g. the password) seems identical character for character.

Could this somehow be an encoding issue between platforms? Anybody have a thought off the top of their head about what I might be missing? As a side note, I assumed this returned a SHA-256 hash, but it doesn't match up with "echo -n thepassword | sha256sum" in either environment. The code is on GitHub; I can share links to the sections if needed.
http://mudbytes.net/forum/topic/4816/
generator-closure

Generator for Closure Library

npm install generator-closure

Closure Library Generator

Create a fully working Closure Library project in seconds.

Getting Started

The Library Version

There is a Library version of the generator:

yo closure:lib

The Library version is for closure libraries that have no web output. The location of your project's base changes to lib/ instead of the default app/js/.

What you get

- A fully working installation of the Closure Compiler.
- A battle-tested folder scaffolding for your closure application.
- A skeleton sample application.
- A third-party dependencies loader.
- A set of helper and boilerplate code to speed up your time to productive code.
- A full behavioral and unit testing suite using Mocha with Chai.js and Sinon.js.
- 52 BDD and TDD tests, both for your development and compiled code.
- Full open source boilerplate (README, LICENSE, .editorconfig, etc).
- Vanilla and a special edition Closure Compiler that strips all logger calls from your production code. (The special edition is used.)
- A sourcemap file for your compiled code.
- A set of Grunt tasks that will:
  - Manage your dependencies.
  - Compile your code.
  - Run a static webserver with livereload.
  - Test your code on the CLI and the browser.

Table Of Contents

- Grunt Tasks
- Your Closure Application
- Third-Party Dependencies
- The Test Suites
- About

Grunt Tasks

Tasks Overview

- `grunt server`: Start a static server
- `grunt` or `grunt deps`: Calculate dependencies
- `grunt build`: Compile your code
- `grunt test`: Run tests on the command line
- `grunt server:test`: Run tests on the browser

Tasks In Detail

grunt server

The grunt server task will do quite a few things for you:

- A static server will listen on port 9000 (or anything you'd like, with `--port=8080`).
- A livereload server will be launched.
- All your codebase will be watched for changes, triggering livereload events.
- Finally, your browser will open on the project page.

- The temporary folder `temp` is cleared.
- All the defined third-party dependencies are minified using uglify. The output can be found in temp/vendor.js.
- Closure Compiler will compile your closure codebase using ADVANCED_OPTIMIZATIONS. The output can be found at temp/compiled.js.
- The final file is produced by concatenating the vendor and compiled files to app/jsc/app.js.

Configure build options

var ENTRY_POINT = 'app';

Third-Party Libraries

Read about Configuring Third-Party dependencies for your build.

EXTERNS_PATH

Define the folder that will be scanned for externs files. The Closure Generator comes packed with these externs for you:

- jQuery 1.9
- Angular
- Facebook Javascript SDK
- When.js 1.8

ENTRY_POINT

The entry point is the top-most namespace level of your application. In the included sample application that namespace is app.

Advanced Build Configuration

Your Closure Application

Folder Layout

goog.provide('app.SomeClass');
goog.require('app.Module');

/**
 * @constructor
 * @extends {app.Module}
 */
app.SomeClass = function() {
  goog.base(this);
  /** .. */
};
goog.inherits(app.SomeClass, app.Module);

Your Application Folders

Create new folders as you see fit. In the generator the folder app/ is included, which contains a skeleton app.

Third-Party Dependencies

Configure Third-Party for Development

You need to edit the third-party library file, which is located at app/js/libs/vendor.loader.js:

/**
 * EDIT THIS ARRAY.
 *
 * @type {Array} define the 3rd party deps.
 */
ssd.vendor.files = [
  'when.js',
  'jQuery-1.9.1.js'
];

Add or remove strings in the array as you see fit.

Configure Third-Party Building

To define which third-party files get bundled with your production file, you need to edit the Gruntfile. At the top of the Gruntfile the vendorFiles array is defined:

// define the path to the app
var APP_PATH = 'app/js';
// the file globbing pattern for vendor file uglification.
var vendorFiles = [
  // all files JS in vendor folder
  APP

The Test Suites

The test suite uses Mocha with Chai.js and Sinon.js.
Two separate types are included: Behavioral and Unit tests.

Behavior Driven Development

Test Driven Development

Tests on the CLI

grunt test — 'nough said.

Common Browser Pitfalls

Contributing

Closure is so vast and we need to have a common place to document all the techniques and best practices used today.

The ssd Namespace

The ssd namespace that's included in some libraries stands for SuperStartup Development. It is the development namespace used by the Superstartup library.

Release History

- v0.1.15, 10 Jan 2014
- v0.1.13, 06 Jan 2014
  - Force create temp/ folder.
  - Updated grunt-closure-tools package to ~0.9.0, latest.
- v0.1.12, 05 Dec 2013
  - Updated all dependencies to latest.
- v0.1.9, 14 Oct 2013
  - Allow overriding hostname and port of grunt server.
  - Fix generator greenbug.
  - Enables defining closure-library location using the CLOSURE_PATH env var (closureplease/generator-closure#3)
- v0.1.8, 8 Jun 2013
  - Fix event handling so it works with recent closure changes.
- v0.1.7, 3 Jun 2013
  - Bug fix for library generation
- v0.1.6, 1 May 2013
- v0.1.5, 07 May 2013
  - Renamed component.json to bower.json
  - Linted Gruntfile
  - Minor color tweaks
- v0.1.4, 14 Apr 2013
  - Added Closure Linter task, thanks @pr4v33n
- v0.1.3, 14 Apr 2013
  - Minor bugs
- v0.1.2, 21 Mar 2013
  - Added Library generator
  - Added Bower support
  - Instruction changes
- v0.1.1, 20 Mar 2013
  - Added Source Map option
  - Minor typo fixes
- v0.1.0, Mid Mar 2013
  - Big Bang

License
https://www.npmjs.org/package/generator-closure
When the user places the focus on a textbox in Windows Phone 7, the default onscreen keyboard will pop up for the user to touch and type characters into the textbox. It is also important that the keyboard can be customized depending on the input the user will be typing or the data type the textbox is associated with. For example, the user may want only numeric fields and some symbols when typing phone numbers, or text with symbols like (@, .) when entering an email ID. As a developer, we are not limited to the default keyboard layout. We can specify the input scope, which determines the layout of the keyboard used in the Software Input Panel. This helps the end user type the required characters even faster. Setting the input scope is very simple and is demonstrated below.

The sample application contains a simple textbox (textBox1) with a title. When the focus comes to the textbox, the default onscreen keyboard is displayed as shown in the screenshot below. It's time to change the input scope now. In the PhoneApplicationPage_Loaded event of the form, I use the following code snippet:

InputScope Keyboard = new InputScope();
InputScopeName ScopeName = new InputScopeName();
ScopeName.NameValue = InputScopeNameValue.Digits;
Keyboard.Names.Add(ScopeName);
textBox1.InputScope = Keyboard;

When I run the application now and the textbox is focused, the keyboard layout changes to support digits. To change the onscreen keyboard to support URLs, change the NameValue to InputScopeNameValue.Url:

ScopeName.NameValue = InputScopeNameValue.Url;

To change the onscreen keyboard to support email addresses, change the NameValue to InputScopeNameValue.EmailSmtpAddress:

ScopeName.NameValue = InputScopeNameValue.EmailSmtpAddress;

The InputScope and InputScopeName classes are part of the PresentationCore.dll assembly and are found in the namespace System.Windows.Input.
https://developerpublish.com/using-input-scope-in-windows-phone-7/
Although the "Hello World!" program looks simple, it contains all the fundamental concepts of Java which are necessary to learn before proceeding further. Let's break down the "Hello World!" program to learn all these concepts.

// Hello World! Example
public class MyClass {
    public static void main(String[] args) {
        System.out.println("Hello World!.");
    }
}

Hello World!.

The first line is a single-line comment, which starts with //. Generally, comments are added to programming code with the purpose of making the code easier to understand. They are ignored by the compiler while running the code. Please visit the comment page for more information.

Every line of code in Java must be within a class. In this example, the name of the class is MyClass. It is a public class too. Every Java program file can have at most one public class, and the public class name and the Java program file name must be the same. For example, in this example the name of the Java program file is MyClass.java. To learn about classes, please visit the class page. Please note that Java is a case-sensitive language. Along with this, the body of the class must be within curly brackets.

In Java, every public class must contain a main method, which is executed when the public class is run.

There are two methods in Java which can be used to print output. Inside println() or print(), multiple variables can be used, each separated by +.

public class MyClass {
    public static void main(String[] args) {
        System.out.println("Hello " + "World!.");
        System.out.println("Print in a new line.");
        System.out.print("Hello " + "World!.");
        System.out.print("Print in the same line.");
    }
}

Hello World!.
Print in a new line.
Hello World!.Print in the same line.

To facilitate a line change inside a System.out.println() or System.out.print() statement, "\n" can be used. "\n" tells the program to start a new line.
public class MyClass {
    public static void main(String[] args) {
        System.out.println("Hello \nWorld!.");
        System.out.print("Learning Java is fun.\nAnd easy too.");
    }
}

Hello 
World!.
Learning Java is fun.
And easy too.
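To illustrate the earlier point that variables can be mixed into println() with +, here is a small self-contained example (the variable names and values below are invented for illustration):

```java
public class VariablesDemo {
    public static void main(String[] args) {
        String name = "World";   // hypothetical values, just for the demo
        int year = 2024;
        // variables and string literals can be mixed, each separated by +
        System.out.println("Hello " + name + "! The year is " + year + ".");
    }
}
```

The int value is converted to its text form automatically during concatenation.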
https://www.alphacodingskills.com/java/java-syntax.php
thread within the servlet..? (2 messages)

import javax.servlet.*;
import javax.servlet.http.*;

public class NewProcess extends HttpServlet implements Runnable {
    HttpServletRequest request;

    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        this.request = request;
        Thread up = new Thread();
        up.start();
    }

    public void run() {
        /* here I want to use the request object. Is it possible? */
    }
}

My problem is I start the thread, but it never reaches the run method. Give me your comments.

Threaded Messages (2)
- Re: thread within the servlet..? by Rajesh Patkar on March 12 2008 18:34 EDT
- Re: thread within the servlet..? by Anthony McClay on March 26 2008 17:44 EDT

Re: thread within the servlet..? [ Go to top ]

The servlet container provides a concurrency service to the servlet. There is normally no need to make a servlet Runnable; it is an antipattern to use threading in servlet code. Your intention in using run is not clear. Every invocation of this servlet will cause a thread to be created. You are not able to execute run here because you have not provided the Thread object with the Runnable handle. You can achieve your intended purpose with the following modification:

Thread up = new Thread(this);

- Posted by: Rajesh Patkar
- Posted on: March 12 2008 18:34 EDT
- in response to arulraj v

Re: thread within the servlet..? [ Go to top ]

I have seen new developers and transfer ASP developers do this in the past. This is just a question you should ask: why are you trying to create a thread? The Servlet API is concurrency compliant, and as long as you do not create instance variables, you need not worry about this. So this is an anti-pattern.

HINT: If you wish for a service to run asynchronously, I would suggest using JMS functionality, or JMS to Message Driven Beans (MDB) if you are in a full J2EE/JEE environment, which better suits this model and performs better due to pooling.
I cannot foresee why you would need code to do this, and it may conflict with the underlying code of the servlet container (depending on the source of the container and how they implemented their concurrency code). If you still have this issue and must use threads in the servlet, re-post and I will at least think of a solution.

Tony McClay

- Posted by: Anthony McClay
- Posted on: March 26 2008 17:44 EDT
- in response to arulraj v
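As an aside, the missing-Runnable-handle bug Rajesh describes can be demonstrated outside any servlet container. The sketch below is plain Java (the class name and flag are hypothetical, not from the original thread): a Thread constructed with no Runnable executes a no-op run(), while new Thread(this) actually invokes your run() method.

```java
public class ThreadHandleDemo implements Runnable {
    static volatile boolean ran = false;

    public void run() {
        ran = true;
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadHandleDemo demo = new ThreadHandleDemo();

        // No Runnable handle supplied: Thread.run() is a no-op,
        // so our run() is never called.
        Thread wrong = new Thread();
        wrong.start();
        wrong.join();
        System.out.println("after new Thread(): ran = " + ran);

        // Runnable handle supplied: run() executes on the new thread.
        Thread right = new Thread(demo);
        right.start();
        right.join();
        System.out.println("after new Thread(demo): ran = " + ran);
    }
}
```

The same distinction applies inside doPost: `new Thread()` starts a thread that does nothing, which is exactly why the poster never reaches run().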
http://www.theserverside.com/discussions/thread.tss?thread_id=48660
: Did this fall through the cracks? Markus Armbruster <armbru@...> writes: > John Levon <levon@...> writes: > >> On Fri, Dec 07, 2007 at 08:19:53AM +0100, Markus Armbruster wrote: >> >>> The new codes are only for PPC, where Xenoprof doesn't quite exist >>> yet. Therefore, we can stick to the old codes on platforms that have >>> Xenoprof already, and use the new ones everywhere else. Something >>> like this: >> >> Good point. Let's do this. It's ugly as sin but better than breaking >> people's machines. Nice big comment though please... >> >> regards >> john > > Here's a patch for that. Tested on i686 bare metal. > > Signed-off-by: Markus Armbruster <armbru@...> > > Index: daemon/opd_interface.h > =================================================================== > RCS file: /cvsroot/oprofile/oprofile/daemon/opd_interface.h,v > retrieving revision 1.6 > diff -p -u -w -r1.6 opd_interface.h > --- a/daemon/opd_interface.h 10 May 2007 23:42:33 -0000 1.6 > +++ b/daemon/opd_interface.h 12 Dec 2007 19:51:44 -0000 > @@ -27,8 +27,19 @@ > /* Code 9 used to be TRACE_END_CODE which is not used anymore */ > /* Code 9 is now considered an unknown escape code */ > #define XEN_ENTER_SWITCH_CODE 10 > +/* > + * Ugly work-around for the unfortunate collision between Xenoprof's > + * DOMAIN_SWITCH_CODE (in use on x86) and Cell's SPU_PROFILING_CODE > + * (in use with Power): > + */ > +#if defined(__i386__) || defined(__x86_64__) > +#define DOMAIN_SWITCH_CODE 11 > +#define LAST_CODE 12 > +#else > #define SPU_PROFILING_CODE 11 > #define SPU_CTX_SWITCH_CODE 12 > -#define LAST_CODE 13 > +#define DOMAIN_SWITCH_CODE 13 > +#define LAST_CODE 14 > +#endif > > #endif /* OPD_INTERFACE_H */ > Index: daemon/opd_trans.c > =================================================================== > RCS file: /cvsroot/oprofile/oprofile/daemon/opd_trans.c,v > retrieving revision 1.16 > diff -p -u -w -r1.16 opd_trans.c > --- a/daemon/opd_trans.c 10 May 2007 23:42:33 -0000 1.16 > +++ b/daemon/opd_trans.c 12 
Dec 2007 19:51:45 -0000 > @@ -271,8 +271,11 @@ handler_t handlers[LAST_CODE + 1] = { > &code_trace_begin, > &code_unknown, > &code_xen_enter, > +#if !defined(__i386__) && !defined(__x86_64__) > &code_spu_profiling, > &code_spu_ctx_switch, > +#endif > + &code_unknown, > }; > > extern void (*special_processor)(struct transient *);
https://sourceforge.net/p/oprofile/mailman/message/18310891/
Rpi Pico MicroPython ssd1306 OSError: 5

I have a .91 inch 128×32 pixel OLED display. I am following Tom's Hardware's tutorial, using Thonny on Windows 10. I've basically copied and pasted from Tom's Hardware:

from machine import Pin, I2C
from ssd1306 import SSD1306_I2C

i2c = I2C(0, sda=Pin(0), scl=Pin(1), freq=400000)
oled = SSD1306_I2C(128, 64, i2c)
oled.text("Tom's Hardware", 0, 0)
oled.show()

I named the file "main.py". I have the ssd1306 MicroPython package. When I try to run it I get the following message:

Traceback (most recent call last):
  File "<stdin>", line 17, in <module>
  File "/lib/ssd1306.py", line 110, in __init__
  File "/lib/ssd1306.py", line 36, in __init__
  File "/lib/ssd1306.py", line 71, in init_display
  File "/lib/ssd1306.py", line 115, in write_cmd
OSError: 5

I honestly don't know what this is trying to say. I googled "OSError 5" and the only solutions I could find were to add a delay between the initiation and the first write to the I2C device. That didn't work. Does anyone know how to fix this error? Thank you.

Comments:
- #Zeno, Welcome and nice to meet you. Ah, let me see. Please let everybody know which SSD1306 you are testing, and which tutorial you are following. Two links would help a lot. Cheers. – tlfong01
- I searched my old files and found a setup record, as shown in Appendix A below. Please confirm your device is exactly the same as mine, so I can tailor my answer accordingly. – tlfong01
- I have added a couple of references to my answer. I would suggest you skim them and perhaps also list some references in your answer. This shows the research work you have done, and other readers will upvote you. At least you should show that you are reading Tom's Hardware columns and give the link in your answer.
– tlfong01

1 Answer

Question: How can Rpi Pico MicroPython talk to the I2C SSD1306? / to continue, …

Answer

Short Answer

1. It is pretty likely that Pico Thonny Python's error message "OSError: 5" is similar to Rpi OS's "OSError: [Errno 121] Remote I/O error".
2. Errno 121 usually occurs when running an I2C Python program in the following situations:
   2.1 The hardware wiring connection is bad, e.g.:
       2.1.1 Forgetting to plug/connect the cable,
       2.1.2 Too long cabling, e.g. over 30 cm,
       2.1.3 I2C frequency too high, over 400 kHz,
       2.1.4 An I2C device, e.g. the I2C MCP23017, which is sensitive to noise,
       2.1.5 I2C bus overloaded, with more than 4 devices, causing bus capacitance over 400 pF.

/ to continue, …

Long Answer

/ to continue, …

References

(1) Solomon OLED Display Catalog – Solomon Tech
(2) Solomon OLED Driver IC Product Sheet – Solomon Tech
(3) Solomon SSD1306 Product Sheet – Solomon Tech
(4) SSD1306 OLED Display Module Product Spec – Densitron/Farnell
(5) SSD1306 Tutorial – Components101
(6) How to Use an OLED Display With Raspberry Pi Pico – Les Pounder, Tom's Hardware, 2021feb28
(7) Luma.OLED API Doc for SSD1306, SSD1309, SSD1322, SSD1325, SSD1327, SSD1331, SSD1351 and SH1106
(8) Rpi3B SSD1306 OLED I2C Interface Problem (with debugged Hello World program) – tlfong01, Rpi.SE, asked 2019dec21, viewed 799 times
(9) Raspberry Pi Pico + 128×64 I2C SSD1306 OLED (MicroPython) – YouTube
(10) Raspberry Pi Pico + 128×64 I2C SSD1306 OLED (MicroPython) Tutorial

/ to continue, …

Appendices

Appendix A – Old SSD1306 Setup Record

Appendix B – Installing Rpi Pico MicroPython OLED SSD1306 Library – Tom's Hardware

(6) How to Use an OLED Display With Raspberry Pi Pico – Les Pounder, Tom's Hardware, 2021feb28

1. Click on Tools > Manage Packages to open Thonny's package manager for Python libraries.
2. Type "ssd1306" in the search bar and click "Search on PyPI".
3. Click on "micropython-ssd1306" in the returned results and then click on Install.
This will copy the library to a folder, lib, on the Pico.
4. Click Close to return to the main interface.

Note – Only partially installed, with error code 1.

Update 2021apr03hkt1528: I installed ssd1306 a second time and had good luck.

Appendix C – Pico version of Rpi4B OS buster's command "i2cdetect -y 1"

Introduction

I am using a Pico Thonny Python function to detect and list the I2C devices on I2C buses 1 and 2. The following sample output shows that two I2C PCF8574 devices are detected.

Note – The Thonny sample is badly formatted in RpiSE here, so I am going to use PenZu to write a better formatted file (to be included in the next Appendix D).

MicroPython v1.14 on 2021-02-02; Raspberry Pi Pico with RP2040
Type "help()" for more information.
%Run -c $EDITOR_CONTENT

Begin scanI2cBuses(), tlfong01 2021apr03hkt1431, …
Begin testPcf8574(), … >>>>>>>>>>

*** Set up I2C bus list [i2cBus0, i2cBus1] ***

*** I2C Bus List Config ***
I2C Bus Num = 0  Bus nick name = Amy    Frequency = 100 kHz  sdaPin = 0  sclPin = 1
I2C Bus Num = 1  Bus nick name = Betty  Frequency = 400 kHz  sdaPin = 2  sclPin = 3

*** I2c Device Dict ***
i2cDevNum = 0  DeviceName = PCF8574 #1  NickName = Connie  I2cBusNum = 0  I2cDevAddr = 0x23
i2cDevNum = 1  DeviceName = PCF8574 #2  NickName = Daisy   I2cBusNum = 1  I2cDevAddr = 0x24
i2cDevNum = 2  DeviceName = LCD2004 #1  NickName = Emily   I2cBusNum = 0  I2cDevAddr = 0x27
i2cDevNum = 3  DeviceName = LCD1602 #1  NickName = Fanny   I2cBusNum = 1  I2cDevAddr = 0x22

*** Scan and print I2C devices I2C bus list [i2cBus0, i2cBus1] ***
I2C Bus Num = 0  I2cDeviceList = 0x23
I2C Bus Num = 1  I2cDeviceList = 0x24

End testPcf8574(). >>>>>>>>>>
End scanI2cBuses(), tlfong01 2021apr03hkt1431, …

/ to continue, …

Appendix D – Full program listing of the I2C Bus Scan program discussed in Appendix C above.
Pico I2C Bus Scan Program v0.1

Appendix E – MicroPython ssd1306.py Listing v0.1

MicroPython ssd1306.py Listing v0.1

Notes
- Windows 10 Thonny program and data folder locations:
  Program Folder = c: > user > AppData > Local > Programs > Thonny > Lib > site-packages > thonny
  Data Folder = c: > user > AppData > Roaming > Thonny
- This ssd1306.py is only 150 Python statements long, and contains two classes, one for I2C and another for SPI.

Appendix F – Rpi4B Python Hello World Program for SSD1306

(8) Rpi3B SSD1306 OLED I2C Interface Problem (with debugged Hello World program) – tlfong01, Rpi.SE, asked 2019dec21, viewed 799 times

Comments:
- Yes, I have the same OLED as you, 128×32 pixels I2C… – Zeno
- It appears that you have a Rasp Pi 4. I am doing this on Windows with Thonny, and this is the tutorial (one of many tutorials that I have tried to get the OLED working): tomshardware.com/how-to/oled-display-raspberry-pi-pico – Zeno
- Perhaps I should also mention that I have gotten the OLED to work on a Pro Micro, which is a copy of Adafruit's Pro Micro. – Zeno
- Ah, some misunderstanding here. As I said, the setup I showed was 2 years ago on Rpi4B Thonny. I am showing this for your reference and also to refresh my memory. I am glad that you are using Windows Thonny MicroPython and not ESP32 or AdaFruit CircuitPython. I am using both Rpi4B and Windows Thonny, so I will try to use Windows for you to compare and contrast, to troubleshoot. – tlfong01
- Here is one of the websites discussing adding a small delay that I mentioned earlier: forum.micropython.org/viewtopic.php?t=4746 – Zeno
- Ah, your successful testing on the AdaFruit Pro Micro is very useful to our troubleshooting, so I won't guess your OLED is bad. – tlfong01
- Just to confirm, you have already installed the SSD1306 MicroPython library on your Pico?
– tlfong01
- Yes, I have selected micropython-ssd1306, not circuitpython and not ssd1327. The website suggests taking this to chat; should we? – Zeno
- I added a picture that might help you see what I chose, just in case. – Zeno
- Ha, your pic shows you have a long and thin SSD1306, but mine is a short, fat guy. I forgot the details of what I did two years ago, so I need to refresh my memory. Ah, lunch time. I need to go and eat. See you later. PS – And let us move to the chat room. – tlfong01
- Let us continue this discussion in chat. – Zeno
https://tlfong01.blog/2021/04/05/ssd1306/
Type: Posts; User: Swerve

- Hi, I have been trying to create my first doubly linked list, but have been unable to properly create it due to using the wrong parameters in the function 'insertBeginningWhenListIsEmpty'. ...
- Hi, I have a base class which has various inherited classes derived from it. I'm trying to store them all in a vector, but have learned that vectors cannot hold mixed classes. So I decided...
- Thank you laserlight. I've never created a 'container class', but reading how it would allow me to use shared_ptr is something I may go on to need, I think (due to having mixed classes?), so for...
- Thanks laserlight! I have not yet added a virtual destructor, but... I have created Vehicle to now be an Abstract Base Class, but originally I was going to use the class Vectorclass for...
- Hi, I am starting to write a program which will store pointers to mixed classes within a single vector, but have been having trouble with downcasting and the inherited classes' data members being...
- Hi, I have put together a class which listens for a packet on a port. When it receives the packet, it will println "Packet received". I have installed XAMPP so that it can run as a...
- Hi! I was hoping for some advice on how best to create a single doubly-linked list which could contain all of the different types of classes I have. This is the code I have written so far... ...
- Oh wow! That has just totally fixed the error. I was expecting the use of return to not be possible at all. I was thinking I'd have to use operator overloading instead. Thanks for all the...
Hi, I am making a console program which has a menu, for example: I then have a switch in main, if the user selects 1, I call a function and create a new object. I'm wondering, once the... Thanks for helping PeejAvery, appreciated. So from reading your explanation, I DO have to use some server-side validation due to the fact that it is PHP which will be sending the form, yes? ... Hi, I am creating a contactus form, which when submitted sends an email to the sites admin. As the form is not placing data into the database, but simply sending an email containing the... Thank you VictorN, I think I was using the incorrect variable names. For the text boxes I was right clicking and selecting "Add variable", and using a CEdit type, gave the names 'var1' and 'var2'... Hi, I am trying to create a standard Windows Form which comprises of the following: Button1 Button2 TextBox1 TextBox2 When the user clicks the Option1 button, I am trying to display some... I've been trying to get this XML data to show in a webpage, but I'm doing something obvious wrong, but I'm totally failing to get it to work.. The site uses an external CSS page to style it, BUT I... Thanks, I have figured it out, if anyone else has the problem, I was using 'gcc' instead of 'g++', once it's replaced it works fine. Also system("PAUSE") doesn't work in Linux, cin.get(); does... Hi there :) I'm trying to use g++, but am experiencing some difficulty despite following what the tutorials describe. I must be going wrong somewhere. I am using Ubuntu 8.10 32bit Desktop. I... Hi, I am trying to control the movement of a graphics image across the screen, and was hoping of some advice on the matter. for(;;) { Sleep(1); Hi everyone, I'm new to JS,and am having a little trouble with my script. It's a picture slide show. Basically, when I click 'Previous' to call the 'moveToPreviousSlide' function, on the first... Many thanks Paul! I was not sure about that either. I'm thinking it is still a connection issue tho. 
:( Hi, I am trying to declare a global array for use throughout a program. The program is a math question generator (sets). The main program file is saved as 'main.cpp', 'globalarray.h' is the... Hi! I am trying to write a function which manipulates the variables of a struct using pointers, but am having trouble with the syntax. I am still at the start with is, and so am just trying to... thanks Paul! I have managed to read the first item into a struct:- #include <iostream> #include <fstream> using namespace std; Hi! I've wrote a program which placed the contents of three array's into a txt file. #include <iostream> #include <fstream> using namespace std;
http://forums.codeguru.com/search.php?s=fc1a1fe822a546c4743d8fff94cd89e1&searchid=1921263
Events and Delegates

While delegates have other uses, the discussion here focuses on the event handling functionality of delegates.

Public Delegate Sub AlarmEventHandler(sender As Object, e As AlarmEventArgs)

Note: A delegate declaration is sufficient to define a delegate class. The declaration supplies the signature of the delegate, and the common language runtime provides the implementation. An instance of the AlarmEventHandler delegate can bind to any method that matches its signature, such as the AlarmRang method of the WakeMeUp class shown in the following example.

public class WakeMeUp
{
    // AlarmRang has the same signature as AlarmEventHandler.
    public void AlarmRang(object sender, AlarmEventArgs e){...};
    ...
}

See Also: Consuming Events | Raising an Event | Event Sample
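For readers more familiar with Java, the delegate-binding idea above can be loosely approximated with a functional interface and a method reference. This is only a rough analogy, not part of the .NET documentation; the class and field names below are hypothetical:

```java
import java.util.function.BiConsumer;

public class AlarmDemo {
    // Hypothetical event-args class, mirroring AlarmEventArgs in the text.
    static class AlarmEventArgs {
        final String message;
        AlarmEventArgs(String message) { this.message = message; }
    }

    static class WakeMeUp {
        // Same shape as the delegate signature: (sender, args) -> void.
        void alarmRang(Object sender, AlarmEventArgs e) {
            System.out.println("Alarm rang: " + e.message);
        }
    }

    public static void main(String[] args) {
        WakeMeUp w = new WakeMeUp();
        // Binding a method with a matching signature, like constructing
        // a delegate instance over w.AlarmRang in C#.
        BiConsumer<Object, AlarmEventArgs> handler = w::alarmRang;
        handler.accept(new Object(), new AlarmEventArgs("7:00"));
    }
}
```

As with a delegate, any method matching the (sender, args) shape could be bound to the same handler variable.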
http://msdn.microsoft.com/en-us/library/17sde2xt(d=printer,v=vs.71).aspx
Is there a universal JavaScript function that checks that a variable has a value and ensures that it's not undefined or null? I've got this code, but I'm not sure if it covers all cases:

function isEmpty(val){
    return (val === undefined || val == null || val.length <= 0) ? true : false;
}

Answer: You can just check if the variable has a truthy value or not. That means

if( value ) {
}

will evaluate to true if value is not:
- null
- undefined
- NaN
- empty string ("")
- 0
- false

Comments:
- Just want to add that if you feel the if construct is syntactically too heavy, you could use the ternary operator, like so: var result = undefined ? "truthy" : "falsy". Or if you just want to coerce to a boolean value, use the !! operator, e.g. !!1 // true, !!null // false.
- Also note that this will not check for strings which only contain whitespace characters.

Answer: The verbose method to check if value is undefined or null is:

return value === undefined || value === null;

You can also use the == operator, but this expects one to know all the rules:

return value == null; // also returns true if value is undefined

Comments:
- Checking for only null or undefined can be done like so: if (value == null). Mind the == operator that coerces. If you check like this: if (value === null || value === undefined), you forgot/don't know how Javascript coerces. webreflection.blogspot.nl/2010/10/…
- arg == null is pretty common in my experience.
- @Sharky There's a difference between a variable that is undefined and an undeclared variable: lucybain.com/blog/2014/null-undefined-undeclared

Answer:

function isEmpty(value){
    return (value == null || value.length === 0);
}

This will return true for

undefined  // because undefined == null
null
[]
""

and zero-argument functions, since a function's length is the number of declared parameters it takes.
To disallow the latter category, you might want to just check for blank strings:

function isEmpty(value){
    return (value == null || value === '');
}

Comments:
- @IanBoyd that is because you are comparing == to ===. This means that undefined == null (true), undefined != null (false), undefined === null (false), undefined !== null (true). It would be better to give a bit more information in order to be helpful and push people in the right direction. Moz doc on the difference: developer.mozilla.org/en-US/docs/Web/JavaScript/…
- Possible duplicate of How do you check for an empty string in JavaScript?
- Protip: never do (truthy statement) ? true : false;. Just do (truthy statement);.
- @GeorgeJempty not a dup, since the other answer asks about strings in particular, whereas this one asks about variables.
- Any correct answer to this question relies entirely on how you define "blank".
- @Jay It doesn't hurt anything as far as execution of your code; it's just overly verbose. You wouldn't say, "Is are you hungry is true?" You'd just say, "Are you hungry?" So in code just say if (hungry) … instead of if (hungry === true) …. Like all coding things, it's a matter of taste. More specific to the example provided by the OP, he's saying even more verbosely, "If it's true, then true; if not, then false." But if it's true, then it's already true, and if it's false, it's already false. This is akin to saying, "If you're hungry then you are, and if not then you aren't."
https://coded3.com/is-there-a-standard-function-to-check-for-null-undefined-or-blank-variables-in-javascript/
XJ: Extensible Java (a proposal) (20 messages)

Java is a fixed language. In Java 5, annotations were introduced in recognition that the Java language needs to accommodate customisation towards specific use cases such as enterprise systems. Annotations can be seen as an approach to defining Domain Specific Languages (DSLs) based on the core Java language. Annotations are essentially name-value bindings and are limited in what they can express. They cannot define new language constructs such as a "select" from collection statement. The fact that annotations exist suggests a requirement that Java needs DSL capabilities, but without the richness to realise fully-blown DSLs. Here we make a proposal that extends Java to support user-defined syntax. The extension is conservative in the sense that it will not conflict with any existing language features and will preserve backward compatibility. The proposal extends classes with syntax definitions to produce new modular language constructs, called syntax-classes, that can easily be distributed along with an application in the usual way. The proposal is called XJ (eXtensible Java).

XJ introduces the idea of a syntax-class into Java. A syntax-class is a normal Java class that defines a language grammar. When the Java parser encounters an occurrence of a language feature delimited by a syntax-class, the class's grammar is used to process the input. If the parse succeeds then the grammar synthesizes a Java abstract syntax tree (AST). An object of type AST has a standard interface that is used by the Java compiler when it processes the syntax. New types of AST can be defined provided that they implement the appropriate interface.

Consider a simple language construct in Java that selects an element from a collection based on some predicate.
An example of using the new construct is shown below:

import language mylang.Select;

public Person getChild(Vector people) {
  @Select Person p from people when p.age < 18 {
    return p;
  } else {
    return null;
  }
}

The new language construct is called Select, which is invoked by prefixing a reference to the syntax-class with the @-character. The value p is selected from the vector people providing that the age of the person is less than 18. If a value can be selected then it is returned, otherwise null is returned. The use of Select is equivalent to the following definition:

public Person getChild(Vector people) {
  for(int i = 0; i < people.size(); i++) {
    Person p = people.elementAt(i);
    if(p.age < 18)
      return p;
  }
  return null;
}

A new language construct is introduced in XJ by defining a syntax-class. A syntax-class contains a grammar which is used by the Java parser to process the concrete program syntax and to return an abstract syntax tree (AST). Once a syntax-class has been defined, it can be used in program code by referencing the class after the syntax escape character '@'. The definition for the select construct looks like the following:

package mylang;

import language java.syntax.Grammar;
import java.syntax.AST;
import java.syntax.Block;
import java.syntax.Context;
import java.syntax.Statement;
import java.syntax.Sugar;
import java.syntax.Type;
import java.syntax.Var;

public class Select extends Sugar {

  private Type type;
  private Var var;
  private AST collection;
  private AST test;
  private Block body;
  private Block otherwise;

  public Select(Type T,String n,AST c,AST t,Block b,Block o) {
    type = T;
    var = new Var(n);
    collection = c;
    test = t;
    body = b;
    otherwise = o;
  }

  // Select Grammar definition
  @Grammar extends Statement {
    Select ::= T = Type n = Name 'from' c = Exp 'when' t = Exp
               b = Block o = ('else' Block | { return new Block(); })
               { return new Select(T,n,c,t,b,o); }.
  }

  // Desugar to produce an abstract syntax tree
  public AST desugar(Context context) {
    Class cType = context.getType(collection);
    if(isVector(cType))
      return desugarVector(cType,context);
    else
      // More cases...
  }

  public AST desugarVector(Class cType,Context context) {
    Var done = context.newVar();
    Var coll = context.newVar();
    return [| boolean <done> = false;
              <coll> = <collection>;
              for(int i = 0; i < <coll>.size(); i++) {
                <type> <var> = <coll>.elementAt(i);
                if(<test>) {
                  <done> = true;
                  <body>;
                }
              }
              if(!<done>) <otherwise>; |];
  }
}

The Select grammar rule specifies that a well-formed statement is a type followed by a name, the keyword 'from' followed by an expression, the keyword 'when' followed by an expression, and then a block which is the body of the select. After the body there may be an optional else keyword preceding a block. In each case within the Select rule, the parse elements produce a value that may optionally be associated with names. For example, the type is associated with the name T. In addition, a parse rule can contain Java statements that return a value. These are enclosed in { and }, and may reference any of the names that have been defined to the left of the statement. The final value returned by the Select rule is an instance of the class Select. The value synthesized and returned by a grammar must be an instance of java.syntax.AST. If the return value is an instance of one of the standard Java AST classes then no special action needs to be taken by the syntax-class. If the return value is an instance of a user-defined syntax-class then that class must implement the AST interface which is used by the compiler to translate the source code into Java VM code. To make this process easier, a user-defined syntax-class can extend java.syntax.Sugar which implements the AST interface through a method called desugar. The desugar method is responsible for translating the receiver into an AST for which the interface is defined (typically desugaring into standard Java code).
A common way of desugaring syntax is to use quasi-quotes to express abstract syntax in terms of existing concrete syntax. Quasi-quotes are used in the desugarVector method in the Select definition. To introduce these, here is a simple quasi-quoted AST:

[| x + 1 |]

which is equivalent to the Java expression:

new java.syntax.BinExp(
  new java.syntax.Var("x"),
  "+",
  new java.syntax.Int(1))

The delimiters [| and |] transform the enclosed concrete syntax into the corresponding abstract syntax. Within [| and |] the delimiters < and > can be used to 'unquote' the syntax in order to drop in some abstract syntax. The two forms of delimiters can be arbitrarily nested. Quasi-quotes are an easy way to create code templates in XJ. In this short article we have introduced an extension to Java called XJ which allows new Java syntax to be defined, thus allowing extensions to the Java language. Although XJ has not yet been implemented in Java, it is one of the key features of the XMF language that has been used in commercial tools (XMF-Mosaic) and was made open-source in 2008. More details of this proposal and extended examples (including the application of XJ to Enterprise Java) can be found in the paper "Beyond Annotations: A Proposal for Extensible Java (XJ)."
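As an aside for today's readers, the behaviour of the article's Select example can be approximated in standard Java (version 8 or later) using streams, with no language extension at all. The Person class below is a hypothetical stand-in for the one assumed by the article:

```java
import java.util.Arrays;
import java.util.List;

public class SelectDemo {
    // Hypothetical stand-in for the Person class used in the article's example.
    static class Person {
        final String name;
        final int age;
        Person(String name, int age) { this.name = name; this.age = age; }
    }

    // Rough equivalent of:
    //   @Select Person p from people when p.age < 18 { return p; } else { return null; }
    static Person getChild(List<Person> people) {
        return people.stream()
                     .filter(p -> p.age < 18)  // the 'when' predicate
                     .findFirst()              // select the first match
                     .orElse(null);            // the 'else' branch
    }

    public static void main(String[] args) {
        List<Person> people = Arrays.asList(new Person("Ann", 40), new Person("Bob", 12));
        System.out.println(getChild(people).name); // Bob
    }
}
```

This covers only this particular construct; unlike XJ's general mechanism, streams cannot introduce arbitrary new statement forms into the language.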
- Posted by: James Willans - Posted on: April 14 2008 10:08 EDT Threaded Messages (20) - Re: XJ: Extensible Java (a proposal) by Jevgeni Kabanov on April 14 2008 11:17 EDT - Re: XJ: Extensible Java (a proposal) by Time PassX on April 14 2008 11:45 EDT - Re: XJ: Extensible Java (a proposal) by Karl Peterbauer on April 14 2008 18:28 EDT - Re: XJ: Extensible Java (a proposal) by Nikita Ivanov on April 14 2008 13:29 EDT - Re: XJ: Extensible Java (a proposal) by Persistability Ltd on April 14 2008 15:00 EDT - Re: XJ: Extensible Java (a proposal) by Nikita Ivanov on April 14 2008 07:24 EDT - Re: XJ: Extensible Java (a proposal) by Tamas Cserveny on April 15 2008 09:21 EDT - Re: XJ: Extensible Java (a proposal) by AD aa on April 14 2008 18:28 EDT - Re: XJ: Extensible Java (a proposal) by Nagraj Chakravarty on April 14 2008 18:29 EDT - Nice as an add-on by Denis Bredelet on April 14 2008 19:15 EDT - In the wrong hands.. by Francois Swiegers on April 15 2008 01:58 EDT - Macros by Kit Davies on April 15 2008 04:23 EDT - Re: XJ: Extensible Java (a proposal) by Tom Eugelink on April 15 2008 04:36 EDT - Bad idea by Jesper Nordenberg on April 15 2008 06:26 EDT - Re: Bad idea by Hans Prueller on April 16 2008 03:41 EDT - Re: Bad idea by Werner Punz on April 17 2008 02:17 EDT - Re: XJ: Extensible Java (a proposal) by Guido Anzuoni on April 15 2008 06:55 EDT - Re: XJ: Extensible Java (a proposal) by Time PassX on April 15 2008 10:21 EDT - Re: XJ: Extensible Java (a proposal) by Dushyanth Inguva on April 15 2008 11:25 EDT - Bad proposal by Pavel Ivanov on April 16 2008 01:16 EDT Re: XJ: Extensible Java (a proposal)[ Go to top ] That's all cool, but how do you relate errors (especially type errors, especially generic type errors) to the DSL code? What about IDE support? Autocompletion? I could come up with quite a few things more... Isn't it better to just improve the language to the point where you can define embedded DSLs that make use of the existing tools, like Ruby or Haskell? 
I for one would be scared if Java had such a "feature". - Posted by: Jevgeni Kabanov - Posted on: April 14 2008 11:17 EDT - in response to James Willans Re: XJ: Extensible Java (a proposal)[ Go to top ] I like the idea, and the example was pretty straightforward up to the 'desugar' definition. I'm sure there's a good way to do the syntax, but hopefully it's not by adding everywhere. That's just ugly and makes it a bear to read. Besides that, it's an interesting idea. I wonder how it compares to C# 3.0's new features that enable LINQ and all those associated goodies. This seems like a different approach (I don't think those features in C# do anything at the AST level). - Posted by: Time PassX - Posted on: April 14 2008 11:45 EDT - in response to James Willans Re: XJ: Extensible Java (a proposal)[ Go to top ] - Posted by: Karl Peterbauer - Posted on: April 14 2008 18:28 EDT - in response to Time PassX (I don't think those features in C# do anything at the AST level). Not directly, but in C# you have access to the AST as an in-memory datastructure representing the parsed code at runtime, which is quite a cute feature. Re: XJ: Extensible Java (a proposal)[ Go to top ] Begs the question: why? W H Y? This is all cool stuff and reminds me of "Look, ma, no hands!"-type technology. What is so wrong with plain JDBC, or iBatis, or JPA? What advantage does it bring? Will it be dramatically simpler, safer or more productive to use? These questions really need to be answered in detail before considering something like that. Regards, Nikita Ivanov. GridGain - Grid Computing Made Simple - Posted by: Nikita Ivanov - Posted on: April 14 2008 13:29 EDT - in response to James Willans Re: XJ: Extensible Java (a proposal)[ Go to top ] - Posted by: Persistability Ltd - Posted on: April 14 2008 15:00 EDT - in response to Nikita Ivanov What is so wrong with plain JDBC, or iBatis, or JPA?
What advantage does it bring? Well, JPQL/JDOQL(/SQL) is String-based, has no type checking and no auto-refactoring if a class is refactored - you have to manually update the queries. Having a type-safe refactorable query language available for a persistence solution makes a heck of a lot of sense. Doesn't mean that this is the best way to provide it, though. Re: XJ: Extensible Java (a proposal)[ Go to top ] Raul, Refactor-safe SQL is a good argument. I'll buy that. Thanks, Nikita Ivanov. GridGain - Grid Computing Made Simple - Posted by: Nikita Ivanov - Posted on: April 14 2008 19:24 EDT - in response to Persistability Ltd Re: XJ: Extensible Java (a proposal)[ Go to top ] -1 Java has a definite syntax. It is clear, and what is written means the same everywhere. This enables great tooling possibilities. We should not litter this with unnecessary complexity in order to be able to write something that is already possible another way. I don't think this is where the productivity has gone for most projects... Cheers, Tamas - Posted by: Tamas Cserveny - Posted on: April 15 2008 09:21 EDT - in response to Persistability Ltd Re: XJ: Extensible Java (a proposal)[ Go to top ] Who needs this stuff anyways? This cr*p is from the same guys who are promoting "super languages", unfortunately for which there are no takers... - Posted by: AD aa - Posted on: April 14 2008 18:28 EDT - in response to James Willans Re: XJ: Extensible Java (a proposal)[ Go to top ] Hmm... It reminds me of SQLJ (and how ugly it is...) If you really had to provide an extension, why can't you think of something that is Object Oriented? HQL could be a good start. Or perhaps a rule-based approach? I am writing a DSL to provide a declarative (rule-based) processing engine. I would rather use Hibernate to populate my objects.
-Nagraj TeleCommunication Systems, Inc - Posted by: Nagraj Chakravarty - Posted on: April 14 2008 18:29 EDT - in response to James Willans Nice as an add-on[ Go to top ] Looking at the example, I think it does a rather decent job at what it does. I don't want it in the Java language itself, though. And add me to the list of objectors to the delimiter for escaped code, man that is ugly! - Posted by: Denis Bredelet - Posted on: April 14 2008 19:15 EDT - in response to James Willans In the wrong hands..[ Go to top ] My problem with these types of extensions is the same as my problem with AspectJ: I see the benefit, I see how it can increase productivity and clarity, but I know that someday, somewhere, I will have to maintain a system that was developed by someone who had no clue how to use these things properly. With the Java language, at least there is an upper limit on what the syntax in front of me means; with arbitrary extensions I simply have no clue. - Posted by: Francois Swiegers - Posted on: April 15 2008 01:58 EDT - in response to James Willans Macros[ Go to top ] The idea is good but the implementation is too complex, IMHO. The end result is something like C++ macros, but operating on the AST. Better IMO would be to start smaller and just have compile-time typesafe macros that produce standard source code. Kit - Posted by: Kit Davies - Posted on: April 15 2008 04:23 EDT - in response to James Willans Re: XJ: Extensible Java (a proposal)[ Go to top ] This is a good idea, I have suggested it several times, but not for the DSL reason that is mentioned here. Basically what is being done here, IMHO, is not so much an extension of the language as a standard way of extending the compiler. And being able to extend the compiler is very interesting because certain frameworks, like AspectJ or JPA postprocessing, can finally be done in a way that lets you combine them. Has anyone ever tried to use a JPA agent and at the same time AspectJ?
AFAIK these two conflict because they both need to do some processing on the classes. But it is very logical to use AspectJ on the business model... (Yes yes, preprocessing, but you'll always have that chance of missing an aspect.) To the point of DSLs... I do not see the need to express domain-specific stuff in the syntax of a programming language. Please let's not mix apples and spaghetti. Domain-specific info has to be modeled, not embedded, in a programming language. If you fancy a DSL, then IMHO it should be bolted on "on top", not "inside". Call it "separation of concerns", if you please. The only thing that is interesting for this specific example is closures, so we can write that select more easily. - Posted by: Tom Eugelink - Posted on: April 15 2008 04:36 EDT - in response to James Willans Bad idea[ Go to top ] -10 Annotations should never be used for interpretation of the language. They should be used for attaching information to the code. And don't try to "fix" Java by adding all kinds of extensions to it. Use a language that is well thought out from the start, like Scala or Haskell. It would be trivial to implement your example in Scala (or Haskell) without the need for AST manipulation, and it would be readable by anyone who knows the Scala syntax. - Posted by: Jesper Nordenberg - Posted on: April 15 2008 06:26 EDT - in response to James Willans Re: Bad idea[ Go to top ] - Posted by: Hans Prueller - Posted on: April 16 2008 03:41 EDT - in response to Jesper Nordenberg Annotations should never be used for interpretation of the language. They should be used for attaching information to the code. +1 Re: Bad idea[ Go to top ] - Posted by: Werner Punz - Posted on: April 17 2008 14:17 EDT - in response to Hans Prueller +1 from me on this as well. Anyway, back to the example: if you want rather similar behavior then use Groovy and closures - you would be amazed at the result you would get. Annotations should never be used for interpretation of the language.
They should be used for attaching information to the code. +1 Re: XJ: Extensible Java (a proposal)[ Go to top ] Never seen anything more dangerous in a long time. I agree with almost every negative comment. Looks like smoothing the highway to hell. In any case, I find more straightforward a traditional approach like:

class CollectionSelector {
  public Object select(Collection c, Predicate p) { .... }
}

even if I understand that in a throw-away prototype the proposed approach might (read again, might) speed up implementation. Unfortunately, the real system implementation never starts by throwing away the prototype. Guido - Posted by: Guido Anzuoni - Posted on: April 15 2008 06:55 EDT - in response to James Willans Re: XJ: Extensible Java (a proposal)[ Go to top ] I've said this before, but a (IMO) better solution already exists -- write it in Groovy :) Not all of your code, just the parts that need the level of expressiveness that you've described there. Take for example: if you need expressiveness, use a dynamic language that integrates seamlessly with Java. Where you need raw performance, keep it in Java. Done. - Posted by: Time PassX - Posted on: April 15 2008 10:21 EDT - in response to James Willans Re: XJ: Extensible Java (a proposal)[ Go to top ] I'm all for DSLs, but trying to bend Java to support new kinds of constructs will result in people waving their backward-compatibility flags, and thereby we end up with a shitty implementation (type erasure, anyone?). I'd rather this be a new extensible language on top of the JVM. Developers can decide if they want to use it or ignore it. Dushyanth Blogging at: Listen to Me!!! - Posted by: Dushyanth Inguva - Posted on: April 15 2008 11:25 EDT - in response to Time PassX Bad proposal[ Go to top ] In order to decide whether it's good or bad, I'd propose to have a look at the origins of Java.
Java was invented as a simple language, in contrast to C++ (no multiple inheritance, no operator overloading, all functions are virtual, etc). From this point of view XJ brings unnecessary complexity. - Posted by: Pavel Ivanov - Posted on: April 16 2008 01:16 EDT - in response to James Willans
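Several commenters in the thread (closures, Groovy, Guido's CollectionSelector) point toward the same alternative: an embedded DSL in plain Java. The FluentSelect class below is invented here as a hedged sketch of that idea; it is not part of XJ or of the thread:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

// Invented for illustration: an 'embedded DSL' built from ordinary method
// chaining. FluentSelect.from(people).where(p -> ...) reads close to the XJ
// Select construct, but every existing tool already understands it.
public class FluentSelect<T> {
    private final List<T> source;

    private FluentSelect(List<T> source) { this.source = source; }

    public static <T> FluentSelect<T> from(List<T> source) {
        return new FluentSelect<>(source);
    }

    // Returns the first element matching the predicate,
    // or null (playing the role of the 'else' branch).
    public T where(Predicate<T> test) {
        for (T item : source) {
            if (test.test(item)) {
                return item;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Integer firstEven = FluentSelect.from(Arrays.asList(1, 3, 4, 5))
                                        .where(n -> n % 2 == 0);
        System.out.println(firstEven); // 4
    }
}
```

The chained form gets close to the proposed @Select syntax, yet compilers, debuggers and IDEs need no changes to support it, which is precisely the trade-off the thread is arguing about.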
http://www.theserverside.com/discussions/thread.tss?thread_id=49035
.\" $NetBSD: title.urm,v 1.9 2010/12/16 17:42:28 wiz.urm 8.13 (Berkeley) 8/8/94 .\" .af % i .nr LL 6.5i .EH '''' .OH '''' .EF '''' .OF '''' \& .sp |1.5i .nr PS 36 .nr VS 39 .LP .ft B .ce 2 4.4BSD User's Reference Manual .nr PS 24 .nr VS 32 .LP .ft B .ce 1 (UR User's Reference Manual .nr PS 24 .nr VS 32 .LP .ft B .ce 1 (UR.75 adb.1, bc.1, compact.1, crypt.1, dc.1, deroff.1, expr.1, graph.1, ld.1, learn.1, m4.1, plot.1, ptx.1, spell.1, spline.1, struct.1, tar.1, units.1, uucp.1, uux.1, ching.6, eqnchar.7, man.7, ms.7, and term 4.4BSD Daemon used on the cover is copyright 1994 by Marshall Kirk McKusick and is reproduced with permission. .br The views and conclusions contained in this manual are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of the Regents of the University of California. .sp 1 .LP75-9 .bp \& .sp |1.5i .nr PS 24 .nr VS 26 .LP .ce 1 \fBContents\fP .sp 1 .nr PS 14 .nr VS 17 .LP .TS expand; l r. The Computer Systems Research Group, 1979\-1993 vii Prefaces xi Introduction xvii List of Manual Pages xxiii Permuted Index xli Reference Manual Sections 1, 6, 7 tabbed pages List of Documents inside back cover .TE .if o .bp \& .bp .\" .\" The contributor list below is derived from the file that resides in .\" vangogh:~admin/contrib/contrib: .\" .\" @(#)contrib 5.55 (Berkeley) 4/18/94 .\" .\" This file should not be editted, rather the original contrib file .\" should be used to recrete this one following the directions at its top. .\" Contrib starts here and continues to the comment `END OF CONTRIB'. .\" \& .sp |1i .ps 24 .vs 27 .ce 2 \fBThe Computer Systems Research Group 1979 \- 1993\fP .sp 1.5 .nr PS 11 .nr VS 12 .LP .nf .in +0.5i \fBCSRG Technical Staff\fP .sp 1 .in +1i Jim Bloom Keith Bostic Ralph Campbell Kevin Dunlap William N. Joy Michael J. Karels Samuel J. 
Leffler Marshall Kirk McKusick Miriam Amos Nihart Keith Sklower Marc Teitelbaum Michael Toy .in -1i .sp 3 \fBCSRG Administration and Support\fP .sp 1 .in +1i Robert Fabry Domenico Ferrari Susan L. Graham Bob Henry Anne Hughes Bob Kridle David Mosher Pauline Schwartz Mark Seiden Jean Wood .in -1i .fi .sp 3 \fBOrganizations that funded the CSRG with grants, gifts, personnel, and/or hardware.\fP .sp 1 .nf .in +1i Center for Advanced Aviation System Development, The MITRE Corp. Compaq Computer Corporation Cray Research Inc. Department of Defense Advance Research Projects Agency (DARPA) Digital Equipment Corporation The Hewlett-Packard Company NASA Ames Research Center The National Science Foundation The Open Software Foundation UUNET Technologies Inc. .in -1.5i .fi .OH '\s10CSRG, 1979 \- 1993''- % -\s0' .EH '\s10- % -''CSRG, 1979 \- 1993\s0' .bp .nr PS 10 .nr VS 11 .LP \fBThe following are people and organizations that provided a large subsystem for the BSD releases.\fP .sp .TS l l. ANSI C library Chris Torek ANSI C prototypes Donn Seeley and John Kohl Autoconfiguration Robert Elz C library documentation American National Standards Committee X3 CCI 6/32 support Computer Consoles Inc. DEC 3000/5000 support Ralph Campbell Disklabels Symmetric Computer Systems Documentation Cynthia Livingston and The USENIX Association Franz Lisp Richard Fateman, John Foderaro, Keith Sklower, Kevin Layer GCC, GDB The Free Software Foundation Groff James Clark (The FSF) HP300 support Jeff Forys, Mike Hibler, Jay Lepreau, Donn Seeley and the Systems Programming Group; University of Utah Computer Science Department ISODE Marshall Rose Ingres Mike Stonebraker, Gene Wong, and the Berkeley Ingres Research Group Intel 386/486 support Bill Jolitz and TeleMuse Job control Jim Kulp Kerberos Project Athena and MIT Kernel support Bill Shannon and Sun Microsystems Inc. LFS Margo Seltzer, Mendel Rosenblum, Carl Staelin MIPS support Trent Hein Math library K.C. Ng, Zhishun Alex Liu, S. McDonald, P. 
Tang and W. Kahan NFS Rick Macklem NFS automounter Jan-Simon Pendry Network device drivers Micom-Interlan and Excelan Omron Luna support Akito Fujita and Shigeto Mochida Quotas Robert Elz RPC support Sun Microsystems Inc. Shared library support Rob Gingell and Sun Microsystems Inc. Sony News 3400 support Kazumasa Utashiro Sparc I/II support Computer Systems Engineering Group, Lawrence Berkeley Laboratory Stackable file systems John Heidemann Stdio Chris Torek System documentation The Institute of Electrical and Electronics Engineers, Inc. TCP/IP Rob Gurwitz and Bolt Beranek and Newman Inc. Timezone support Arthur David Olson Transport/Network OSI layers IBM Corporation and the University of Wisconsin Kernel XNS assistance William Nesheim, J. Q. Johnson, Chris Torek, and James O'Toole User level XNS Cornell University VAX 3000 support Mt. Xinu and Tom Ferrin VAX BI support Chris Torek VAX device support Digital Equipment Corporation and Helge Skrivervik Versatec printer/plotter support University of Toronto Virtual memory implementation Avadis Tevanian, Jr., Michael Wayne Young, and the Carnegie-Mellon University Mach project X25 University of British Columbia .TE .bp .LP \fBThe following are people and organizations that provided a specific item, program, library routine or program maintenance for the BSD system. (Their contribution may not be part of the final 4.4BSD release.)\fP .nr PS 9 .nr VS 10 .ps 9 .vs 10 .TS l l. 386 device drivers Carnegie-Mellon University Mach project 386 device drivers Don Ahn, Sean Fagan and Tim Tucker HCX device drivers Harris Corporation Kernel enhancements Robert Elz, Peter Ivanov, Ian Johnstone, Piers Lauder, John Lions, Tim Long, Chris Maltby, Greg Rose and John Wainwright ISO-9660 filesystem Pace Willisson, Atsushi Murai .TE .sp -0.4 .TS l l l l. 
adventure(6) Don Woods log(3) Peter McIlroy adventure(6) Jim Gillogly look(1) David Hitz adventure(6) Will Crowther ls(1) Elan Amir apply(1) Rob Pike ls(1) Michael Fischbein apply(1) Jan-Simon Pendry lsearch(3) Roger L. Snyder ar(1) Hugh A. Smith m4(1) Ozan Yigit arithmetic(6) Eamonn McManus mail(1) Kurt Schoens arp(8) Sun Microsystems Inc. make(1) Adam de Boor at(1) Steve Wall me(7) Eric Allman atc(6) Ed James mergesort(3) Peter McIlroy awk(1) Arnold Robbins mh(1) Marshall Rose awk(1) David Trueman mh(1) The Rand Corporation backgammon(6) Alan Char mille(6) Ken Arnold banner(1) Mark Horton mknod(8) Kevin Fall battlestar(6) David Riggle monop(6) Ken Arnold bcd(6) Steve Hayman more(1) Eric Shienbrood bdes(1) Matt Bishop more(1) Mark Nudelman berknet(1) Eric Schmidt mountd(8) Herb Hasler bib(1) Dain Samples mprof(1) Ben Zorn bib(1) Gary M. Levin msgs(1) David Wasley bib(1) Timothy A. Budd multicast Stephen Deering bitstring(3) Paul Vixie mv(1) Ken Smith boggle(6) Barry Brachman named/bind(8) Douglas Terry bpf(4) Steven McCanne named/bind(8) Kevin Dunlap btree(3) Mike Olson news(1) Rick Adams (and a cast of thousands) byte-range locking Scooter Morris nm(1) Hans Huebner caesar(6) John Eldridge pascal(1) Kirk McKusick caesar(6) Stan King pascal(1) Peter Kessler cal(1) Kim Letkeman paste(1) Adam S. Moskowitz cat(1) Kevin Fall patch(1) Larry Wall chess(6) Stuart Cracraft (The FSF) pax(1) Keith Muller ching(6) Guy Harris phantasia(6) C. Robertson cksum(1) James W. Williams phantasia(6) Edward A. Estes clri(8) Rich $alz ping(8) Mike Muuss col(1) Michael Rendell pom(6) Keith E. Brandt comm(1) Case Larsen pr(1) Keith Muller compact(1) Colin L. McMaster primes(6) Landon Curt Noll compress(1) James A. Woods qsort(3) Doug McIlroy compress(1) Joseph Orost qsort(3) Earl Cohen compress(1) Spencer Thomas qsort(3) Jon Bentley courier(1) Eric Cooper quad(3) Chris Torek cp(1) David Hitz quiz(6) Jim R. 
Oldroyd cpio(1) AT&T quiz(6) Keith Gabryelski crypt(3) Tom Truscott radixsort(3) Dan Bernstein csh(1) Christos Zoulas radixsort(3) Peter McIlroy csh(1) Len Shar rain(6) Eric P. Scott curses(3) Elan Amir ranlib(1) Hugh A. Smith curses(3) Ken Arnold rcs(1) Walter F. Tichy cut(1) Adam S. Moskowitz rdist(1) Michael Cooper cut(1) Marciano Pitargue regex(3) Henry Spencer dbx(1) Mark Linton robots(6) Ken Arnold dd(1) Keith Muller rogue(6) Timothy C. Stoehr dd(1) Lance Visser rs(1) John Kunze des(1) Jim Gillogly sail(6) David Riggle des(1) Phil Karn sail(6) Edward Wang des(1) Richard Outerbridge sccs(1) Eric Allman dipress(1) Xerox Corporation scsiformat(1) Lawrence Berkeley Laboratory disklabel(8) Symmetric Computer Systems sdb(1) Howard Katseff du(1) Chris Newcomb sed(1) Diomidis Spinellis dungeon(6) R.M. Supnik sendmail(8) Eric Allman ed(1) Rodney Ruddock setmode(3) Dave Borman emacs(1) Richard Stallman sh(1) Kenneth Almquist erf(3) Peter McIlroy, K.C. Ng slattach(8) Rick Adams error(1) Robert R. Henry slip(8) Rick Adams ex(1) Mark Horton spms(1) Peter J. Nicklin factor(6) Landon Curt Noll strtod(3) David M. Gay file(1) Ian Darwin swab(3) Jeffrey Mogul find(1) Cimarron Taylor sysconf(3) Sean Eric Fagan finger(1) Tony Nardo sysline(1) J.K. Foderaro fish(6) Muffy Barkocy syslog(3) Eric Allman fmt(1) Kurt Schoens systat(1) Bill Reeves fnmatch(3) Guido van Rossum systat(1) Robert Elz fold(1) Kevin Ruddy tail(1) Edward Sze-Tyan Wang fortune(6) Ken Arnold talk(1) Clem Cole fpr(1) Robert Corbett talk(1) Kipp Hickman fsdb(8) Computer Consoles Inc. talk(1) Peter Moore fsplit(1) Asa Romberger telnet(1) Dave Borman fsplit(1) Jerry Berkman telnet(1) Paul Borman gcc/groff integration UUNET Technologies, Inc. termcap(5) John A. Kunze gcore(1) Eric Cooper termcap(5) Mark Horton getcap(3) Casey Leedom test(1) Kenneth Almquist glob(3) Guido van Rossum tetris(6) Chris Torek gprof(1) Peter Kessler tetris(6) Darren F. Provine gprof(1) Robert R. 
Henry timed(8) Riccardo Gusella hack(6) Andries Brouwer (and a cast of thousands) timed(8) Stefano Zatti hangman(6) Ken Arnold tn3270(1) Gregory Minshall hash(3) Margo Seltzer tr(1) Igor Belchinskiy heapsort(3) Elmer Yglesias traceroute(8) Van Jacobson heapsort(3) Kevin Lew trek(6) Eric Allman heapsort(3) Ronnie Kon tset(1) Eric Allman hunt(6) Conrad Huang tsort(1) Michael Rendell hunt(6) Greg Couch unifdef(1) Dave Yost icon(1) Bill Mitchell uniq(1) Case Larsen icon(1) Ralph Griswold uucpd(8) Rick Adams indent(1) David Willcox uudecode(1) Mark Horton indent(1) Eric Schmidt uuencode(1) Mark Horton indent(1) James Gosling uuq(1) Lou Salkind indent(1) Sun Microsystems uuq(1) Rick Adams init(1) Donn Seeley uusnap(8) Randy King j0(3) Sun Microsystems, Inc. uusnap(8) Rick Adams j1(3) Sun Microsystems, Inc. vacation(1) Eric Allman jn(3) Sun Microsystems, Inc. vi(1) Steve Kirkendall join(1) David Goodenough which(1) Peter Kessler join(1) Michiro Hikida who(1) Michael Fischbein join(1) Steve Hayman window(1) Edward Wang jot(1) John Kunze worm(6) Michael Toy jove(1) Jonathon Payne worms(6) Eric P. Scott kermit(1) Columbia University write(1) Craig Leres kvm(3) Peter Shipley write(1) Jef Poskanzer kvm(3) Steven McCanne wump(6) Dave Taylor lam(1) John Kunze X25/Ethernet Univ. of Erlangen-Nuremberg larn(6) Noah Morgan X25/LLC2 Dirk Husemann lastcomm(1) Len Edmondson xargs(1) John B. Roll Jr. lex(1) Vern Paxson xneko(6) Masayuki Koba libm(3) Peter McIlroy XNSrouted(1) Bill Nesheim libm(3) UUNET Technologies, Inc. xroach(6) J.T. Anderson locate(1) James A. Woods yacc(1) Robert Paul Corbett lock(1) Bob Toxen .TE .\" .\" END OF CONTRIB: Contrib ends here. 
.\" .if o .bp \& .EH '''' .OH '''' .bp .OH '\s10Preface''- % -\s0' .EH '\s10- % -''Preface\s0' .nr PS 10 .nr VS 12 \& .sp |1.5i .EQ delim $$ .EN .LP .ce \fB\s24Preface\s0\fP .sp 3 .NH 1 Introduction .PP The major new facilities available in the 4.4BSD release are a new virtual memory system, the addition of ISO/OSI networking support, a new virtual filesystem interface supporting filesystem stacking, a freely redistributable implementation of NFS, a log-structured filesystem, enhancement of the local filesystems to support files and filesystems that are up to $2 sup 63$ bytes in size, enhanced security and system management support, and the conversion to and addition of the IEEE Std1003.1 (``POSIX'') facilities and many of the IEEE Std1003.2 facilities. In addition, many new utilities have been added, and additions have been made to the C library. .NH 1 Changes in the Kernel .PP This release includes several important structural kernel changes. The kernel uses a new internal system call convention; the use of global (``u-dot'') variables for parameters has been eliminated, except for the process information used by the ps(1) program. The old sleep interface can be used only for non-interruptible sleeps. .PP Many data structures that were previously statically allocated are now allocated dynamically. These structures include mount entries, file entries, user open file descriptors, the process entries, the vnode table, the name cache, and the quota structures. .PP The 4.4BSD distribution adds support for several new architectures including SPARC-based Sparcstations 1 and 2, MIPS-based Decstation 3100 and 5000 and Sony NEWS, 68000-based Hewlett-Packard 9000/300 and Omron Luna, and 386-based Personal Computers. Both the HP300 and SPARC ports feature the ability to run binaries built for the native operating system (HP-UX or SunOS) by emulating their system calls.
.NH 2 Virtual memory changes .PP The new virtual memory implementation is derived from the MACH operating system developed at Carnegie-Mellon, and was ported to the BSD kernel at the University of Utah. The MACH virtual memory system call interface has been replaced with the ``mmap''-based interface described in the ``Berkeley Software Architecture Manual (4.4 Edition)'' (see the UNIX Programmer's Manual, Supplementary Documents, PSD:5). The interface is similar to the interfaces shipped by several commercial vendors such as Sun, USL, and Convex Computer Corp. The integration of the new virtual memory is functionally complete, but, like most MACH-based virtual memory systems, still has serious performance problems under heavy memory load. .NH 2 Networking additions and changes .PP The ISO/OSI Networking consists of a kernel implementation of transport class 4 (TP-4), connectionless networking protocol (CLNP), and 802.3-based link-level support (hardware-compatible with Ethernet*). .\" .\" ditroff screws up the environment for footnote. This restores it. .\" .ev 1 .ps 8 .vs 9 .ev .\" end of ditroff fix .FS *Ethernet is a trademark of the Xerox Corporation. .FE We also include support for ISO Connection-Oriented Network Service, X.25, and TP-0. The session and presentation layers are provided outside the kernel by the ISO development environment (ISODE). Included in this development environment are file transfer and management (FTAM), virtual terminals (VT), a directory services implementation (X.500), and miscellaneous other utilities. .PP Several important enhancements have been added to the TCP/IP protocols including TCP header prediction and serial line IP (SLIP) with header compression. The routing implementation has been completely rewritten to use a hierarchical routing tree with a mask per route to support the arbitrary levels of routing found in the ISO protocols. 
The routing table also stores and caches route characteristics to speed the adaptation of the throughput and congestion avoidance algorithms. .NH 2 Additions and changes to filesystems .PP The 4.4BSD distribution contains most of the interfaces specified in the IEEE Std1003.1 system interface standard. Filesystem additions include IEEE Std1003.1 FIFOs, byte-range file locking, and saved user and group identifiers. .PP different password file. .PP In addition to the local ``fast filesystem,'' we have added an implementation of the network filesystem (NFS) that fully interoperates with the NFS shipped by Sun and its licensees. Because our NFS implementation was implemented using only the publicly available NFS specification, it does not require a license from Sun to use in source or binary form. By default it runs over UDP to be compatible with Sun's implementation. However, it can be configured on a per-mount basis to run over TCP. Using TCP allows it to be used quickly and efficiently through gateways and over long-haul networks. Using an extended protocol, it supports Leases to allow a limited callback mechanism that greatly reduces the network traffic necessary to maintain cache consistency between the server and its clients. .PP A new log-structured filesystem has been added that provides near disk-speed output and fast crash recovery. It is still experimental in the 4.4BSD release, so we do not recommend it for production use. We have also added a memory-based filesystem that runs in pageable memory, allowing large temporary filesystems without requiring dedicated physical memory. .PP The local ``fast filesystem'' has been enhanced to do clustering which allows large pieces of files to be allocated contiguously resulting in near doubling of filesystem throughput. The filesystem interface has been extended to allow files and filesystems to grow to $2 sup 63$ bytes in size. 
The quota system has been rewritten to support both user and group quotas (simultaneously if desired). Quota expiration is based on time rather than the previous metric of number of logins over quota. This change makes quotas more useful on fileservers onto which users seldom log in. .PP The system security has been greatly enhanced by the addition of file flags that permit a file to be marked as immutable or append only. Once set, these flags can only be cleared by the super-user when the system is running single user. To protect against indiscriminate reading or writing of kernel memory, all writing and most reading of kernel data structures must be done using a new ``sysctl'' interface. The information to be accessed is described through an extensible ``Management Information Base'' (MIB). .EQ delim off .EN .NH 2 POSIX terminal driver changes .PP The biggest area of change is a new terminal driver. The terminal driver is similar to the System V terminal driver with the addition of the necessary extensions to get the functionality previously available in the 4.3BSD terminal driver. 4.4BSD also adds the IEEE Std1003.1 job control interface, which is similar to the 4.3BSD job control interface, but adds a security model that was missing in the 4.3BSD job control implementation. A new system call, \fIsetsid\fP, creates a job-control session consisting of a single process group with one member, the caller, that becomes a session leader. Only a session leader may acquire a controlling terminal. This is done explicitly via a \s-1TIOCSCTTY\s+1 \fIioctl\fP call, not implicitly by an \fIopen\fP call. The call fails if the terminal is in use. .PP For backward compatibility, both the old \fIioctl\fP calls and old options to \fIstty\fP are emulated. .NH 1 Changes to the utilities .PP Additions to the libraries include database interfaces to btree and hashing functions, a new, fast implementation of stdio, and a radix sort function.
The additions to the utility suite include greatly enhanced versions of programs that display system status information, implementations of various traditional tools described in the IEEE Std1003.2 standard, and many others. .PP We have been tracking the IEEE Std1003.2 shell and utility work and have included prototypes of many of the proposed utilities. Most of the traditional utilities have been replaced with implementations conformant to the POSIX standards. Almost the entire manual suite has been rewritten to reflect the POSIX defined interfaces. In rewriting this software, we have generally been rewarded with significant performance improvements. Most of the libraries and header files have been converted to be compliant with ANSI C. The system libraries and utilities all compile with either ANSI or traditional C. .PP The Kerberos (version 4) authentication software has been integrated into much of the system (including NFS) to provide the first real network authentication on BSD. .PP A new implementation of the \fIex/vi\fP text editors is available in this release. It is intended as a bug-for-bug compatible version of the editors. It also has a few new features: 8-bit clean data, lines and files limited only by memory and disk space, split screens, tag stacks, and left-right scrolling among them. \fINex/nvi\fP is not yet production quality; future versions of this software may be retrieved by anonymous ftp from, in the directory ucb/4bsd. .PP The \fIfind\fP utility has two new options that are important to be aware of if you intend to use NFS. The ``fstype'' and ``prune'' options can be used together to prevent find from crossing NFS mount points. .NH 2 Additions and changes to the libraries .PP The \fIcurses\fP library has been largely rewritten. Important additional features include support for scrolling and \fItermios\fP. .PP An application front-end editing library, named libedit, has been added to the system.
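.LP The interaction of the \fIfind\fP ``fstype'' and ``prune'' options described above can be sketched as follows. This is only an illustration: the directory names are made up, and the primary syntax shown (-fstype, -prune) is the common modern form, which may differ in detail between \fIfind\fP implementations.

```shell
# Search a hierarchy for files named core, but prune (skip) any
# subtree that lives on an NFS filesystem, so find never crosses
# an NFS mount point.  The layout here is purely illustrative.
dir=$(mktemp -d)
mkdir -p "$dir/src"
touch "$dir/src/core" "$dir/README"

find "$dir" -fstype nfs -prune -o -name core -print

rm -r "$dir"
```

On a purely local hierarchy nothing is pruned, so the single file named core is printed; on a tree containing NFS mounts, those subtrees are skipped entirely.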
.PP A superset implementation of the SunOS kernel memory interface library, \fIlibkvm\fP, has been integrated into the system. .PP Nearly the entire C-library has been rewritten. Some highlights of the changes to the 4.4BSD C-library: .IP \(bu The newly added \fIfts\fP functions will do either physical or logical traversal of a file hierarchy as well as handle essentially infinite depth filesystems and filesystems with cycles. All the utilities in 4.4BSD that traverse file hierarchies have been converted to use \fIfts\fP. The conversion has always resulted in a significant performance gain, often of four or five to one in system time. .IP \(bu The newly added \fIdbopen\fP functions are intended to be a family of database access methods. Currently, they consist of \fIhash\fP, an extensible, dynamic hashing scheme, \fIbtree\fP, a sorted, balanced tree structure (B+tree's), and \fIrecno\fP, a flat-file interface for fixed or variable length records referenced by logical record number. Each of the access methods stores associated key/data pairs and uses the same record oriented interface for access. Future versions of this software may be retrieved by anonymous ftp from, in the directory ucb/4bsd. .IP \(bu The \fIqsort\fP function has been rewritten for additional performance. In addition, three new types of sorting functions, \fIheapsort\fP, \fImergesort\fP, and \fIradixsort\fP have been added to the system. The \fImergesort\fP function is optimized for data with pre-existing order, in which case it usually significantly outperforms \fIqsort\fP. The \fIradixsort\fP functions are variants of most-significant-byte radix sorting. They take time linear to the number of bytes to be sorted, usually significantly outperforming \fIqsort\fP on data that can be sorted in this fashion. An implementation of the POSIX 1003.2 standard \fIsort\fP based on \fIradixsort\fP is included in 4.4BSD. 
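.LP The POSIX \fIsort\fP mentioned above can be tried directly. This minimal sketch exercises only the interface, and behaves the same under any conformant \fIsort\fP, not just the radixsort-based implementation:

```shell
# Sort three lines of text; output appears in lexicographic order:
# berkeley, melbourne, utah.
printf 'utah\nberkeley\nmelbourne\n' | sort
```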
.IP \(bu The floating point support in the C-library has been replaced and is now accurate. .IP \(bu The C functions specified by both ANSI C, POSIX 1003.1 and 1003.2 are now part of the C-library. This includes support for file name matching, shell globbing and both basic and extended regular expressions. .IP \(bu ANSI C multibyte and wide-character support has been integrated. The rune functionality from the Bell Labs' Plan 9 system is provided as well. .IP \(bu The \fItermcap\fP functions have been generalized and replaced with a general purpose interface named \fIgetcap\fP. .IP \(bu The \fIstdio\fP routines have been replaced, and are usually much faster. In addition, the \fIfunopen\fP interface permits applications to provide their own I/O stream function support. .NH 1 Acknowledgements .PP We were greatly assisted by the past employees of the Computer Systems Research Group: Mike Karels, Keith Sklower, and Marc Tietelbaum. Our distribution coordinator, Pauline Schwartz, has reliably managed the finances and the mechanics of shipping distributions for nearly the entire fourteen years of the group's existence. Without the help of lawyers Mary MacDonald, Joel Linzner, and Carla Shapiro, the 4.4BSD-Lite distribution would never have seen the light of day. Much help was provided by Chris Demetriou in getting bug fixes from NetBSD integrated back into the 4.4BSD-Lite distribution. .PP The vast majority of the 4.4BSD distribution comes from the numerous people in the UNIX community that provided their time and energy in creating the software contained in this release. We dedicate this distribution to them. .sp 0.6 .in 4i .nf M. K. McKusick K. 
Bostic .fi .in 0 .sp 3 .nr PS 9 .nr VS 10 .LP .ne 1i .ce \fIPreface to the 4.3 Berkeley distribution\fP .sp 1 .LP This update to the 4.2 distribution of August 1983 provides substantially improved performance, reliability, and security, the addition of Xerox Network System (NS) to the set of networking domains, and partial support for the VAX 8600 and MICROVAXII. .LP We were greatly assisted by the DEC UNIX Engineering group who provided two full time employees, Miriam Amos and Kevin Dunlap, to work at Berkeley. They were responsible for developing and debugging the distributed domain based name server and integrating it into the mail system. Mt Xinu provided the bug list distribution service as well as donating their MICROVAXII port to 4.3BSD. Drivers for the MICROVAXII were done by Rick Macklem at the University of Guelph. Sam Leffler provided valuable assistance and advice with many projects. Keith Sklower coordinated with William Nesheim and J. Q. Johnson at Cornell, and Chris Torek and James O'Toole at the University of Maryland to do the Xerox Network Systems implementation. Robert Elz at the University of Melbourne contributed greatly to the performance work in the kernel. Donn Seeley and Jay Lepreau at the University of Utah relentlessly dealt with a myriad of details; Donn completed the unfinished performance work on Fortran 77 and fixed numerous C compiler bugs. Ralph Campbell handled innumerable questions and problem reports and had time left to write rdist. George Goble was invaluable in shaking out the bugs on his production systems long before we were confident enough to inflict it on our users. Bill Shannon at Sun Microsystems has been helpful in providing us with bug fixes and improvements. Tom Ferrin, in his capacity as Board Member of Usenix Association, handled the logistics of large-scale reproduction of the 4.2BSD and 4.3BSD manuals. Mark Seiden helped with the typesetting and indexing of the 4.3BSD manuals. 
Special mention goes to Bob Henry for keeping ucbvax running in spite of new and improved software and an ever increasing mail, news, and uucp load. .LP Numerous others contributed their time and energy in creating the user contributed software for the release. As always, we are grateful to the UNIX user community for encouragement and support. .LP Once again, the financial support of the Defense Advanced Research Projects Agency is gratefully acknowledged. .sp 1 .in 4i .nf M. K. McKusick M. J. Karels J. M. Bloom .fi .in 0 .sp 1.5 .ne 2i .ce \fIPreface to the 4.2 Berkeley distribution\fP .sp 1 This update to the 4.1 distribution of June 1981 provides support for the VAX 11/730, full networking and interprocess communication support, an entirely new file system, and many other new features. It is certainly the most ambitious release of software ever prepared here and represents many man-years of work. Bill Shannon (both at DEC and at Sun Microsystems) and Robert Elz of the University of Melbourne contributed greatly to this distribution through new device drivers and painful debugging episodes. Rob Gurwitz of BBN wrote the initial version of the code upon which the current networking support is based. Eric Allman of Britton-Lee donated countless hours to the mail system. Bill Croft (both at SRI and Sun Microsystems) aided in the debugging and development of the networking facilities. Dennis Ritchie of Bell Laboratories also contributed greatly to this distribution, providing valuable advice and guidance. Helge Skrivervik worked on the device drivers which enabled the distribution to be delivered with a TU58 console cassette and RX01 console floppy disk, and rewrote major portions of the standalone i/o system to support formatting of non-DEC peripherals. .LP Numerous others contributed their time and energy in organizing the user software for release, while many groups of people on campus suffered patiently through the low spots of development.
As always, we are grateful to the UNIX user community for encouragement and support. .LP Once again, the financial support of the Defense Advanced Research Projects Agency is gratefully acknowledged. .sp 1 .in 4i .nf S. J. Leffler W. N. Joy M. K. McKusick .fi .in 0 .sp 1.5 .ne 1i .ce \fIPreface to the 4.1 Berkeley distribution\fP .sp 1 This update to the fourth distribution of November 1980 provides support for the VAX 11/750 and for the full interconnect architecture of the VAX 11/780. Robert Elz of the University of Melbourne contributed greatly to this distribution especially in the boot-time system configuration code; Bill Shannon of DEC supplied us with the implementation of DEC standard bad block handling. The research group at Bell Laboratories and DEC Merrimack provided us with access to 11/750's in order to debug its support. .LP Other individuals too numerous to mention provided us with bug reports, fixes and other enhancements which are reflected in the system. We are grateful to the UNIX user community for encouragement and support. .LP The financial support of the Defense Advanced Research Projects Agency in support of this work is gratefully acknowledged. .sp 1 .in 4i .nf W. N. Joy R. S. Fabry K. Sklower .fi .in 0 .sp 1.5 .ne 1i .ce \fIPreface to the Fourth Berkeley distribution\fP .sp 1 This manual reflects the Berkeley system mid-October, 1980. A large amount of tuning has been done in the system since the last release; we hope this provides as noticeable an improvement for you as it did for us. This release finds the system in transition; a number of facilities have been added in experimental versions (job control, resource limits) and the implementation of others is imminent (shared-segments, higher performance from the file system, etc.). Applications which use facilities that are in transition should be aware that some of the system calls and library routines will change in the near future. 
We have tried to be conscientious and make it very clear where this is likely. .LP A new group has been formed at Berkeley, to assume responsibility for the future development and support of a version of UNIX on the VAX. The group has received funding from the Defense Advanced Research Projects Agency (DARPA) to supply a standard version of the system to DARPA contractors. The same version of the system will be made available to other licensees of UNIX on the VAX for a duplication charge. We gratefully acknowledge the support of this contract. .LP We wish to acknowledge the contribution of a number of individuals to the system. .LP We would especially like to thank Jim Kulp of IIASA, Laxenburg Austria and his colleagues, who first put job control facilities into UNIX; Eric Allman, Robert Henry, Peter Kessler and Kirk McKusick, who contributed major new pieces of software; Mark Horton, who contributed to the improvement of facilities and substantially improved the quality of our bit-mapped fonts, our hardware support staff: Bob Kridle, Anita Hirsch, Len Edmondson and Fred Archibald, who helped us to debug a number of new peripherals; Ken Arnold who did much of the leg-work in getting this version of the manual prepared, and did the final editing of sections 2-6, some special individuals within Bell Laboratories: Greg Chesson, Stuart Feldman, Dick Haight, Howard Katseff, Brian Kernighan, Tom London, John Reiser, Dennis Ritchie, Ken Thompson, and Peter Weinberger who helped out by answering questions; our excellent local DEC field service people, Kevin Althaus and Frank Chargois who kept our machine running virtually all the time, and fixed it quickly when things broke; and, Mike Accetta of Carnegie-Mellon University, Robert Elz of the University of Melbourne, George Goble of Purdue University, and David Kashtan of the Stanford Research Institute for their technical advice and support. 
.LP Special thanks to Bill Munson of DEC who helped by augmenting our computing facility and to Eric Allman for carefully proofreading the ``last'' draft of the manual and finding the bugs which we knew were there but couldn't see. .LP We dedicate this to the memory of David Sakrison, late chairman of our department, who gave his support to the establishment of our VAX computing facility, and to our department as a whole. .sp 1 .in 4i .nf W. N. Joy O\h'-.54m'\v'-.24m'\z\(..\v'.24m'\h'.54m'. Babao\o'~g'lu R. S. Fabry K. Sklower .fi .in 0 .sp 3 .ne 1i .ce \fIPreface to the Third Berkeley distribution\fP .sp 1 This manual reflects the state of the Berkeley system, December 1979. We would like to thank all the people at Berkeley who have contributed to the system, and particularly thank Prof. Richard Fateman for creating and administrating a hospitable environment, Mark Horton who helped prepare this manual, and Eric Allman, Bob Kridle, Juan Porcar and Richard Tuck for their contributions to the kernel. .LP The cooperation of Bell Laboratories in providing us with an early version of \s-2UNIX\s0/32V is greatly appreciated. We would especially like to thank Dr. Charles Roberts of Bell Laboratories for helping us obtain this release, and acknowledge T. B. London, J. F. Reiser, K. Thompson, D. M. Ritchie, G. Chesson and H. P. Katseff for their advice and support. .sp 1 .in 4i W. N. Joy .br O\h'-.54m'\v'-.24m'\z\(..\v'.24m'\h'.54m'. Babao\o'~g'lu .in 0 .sp 3 .ne 1i .ce \fIPreface to the UNIX/32V distribution\fP .sp 1 The UNIX operating system for the VAX*-11 .FS *VAX and PDP are Trademarks of Digital Equipment Corporation. .FE provides substantially the same facilities as the \s-2UNIX\s0 system for the PDP*-11. .LP We acknowledge the work of many who came before us, and particularly thank G. K. Swanson, W. M. Cardoza, D. K. Sharma, and J. F. Jarvis for assistance with the implementation for the VAX-11/780. .sp 1 .in 4i T. B. London .br J. F. 
Reiser .in 0 .sp 3 .ne 1i .ce \fIPreface to the Seventh Edition\fP .sp 1 .LP Although this Seventh Edition no longer bears their byline, Ken Thompson and Dennis Ritchie remain the fathers and preceptors of the \s-2UNIX\s0 time-sharing system. Many of the improvements here described bear their mark. Among many, many other people who have contributed to the further flowering of \s-2UNIX\s0, we wish especially to acknowledge the contributions of A. V. Aho, S. R. Bourne, L. L. Cherry, G. L. Chesson, S. I. Feldman, C. B. Haley, R. C. Haight, S. C. Johnson, M. E. Lesk, T. L. Lyon, L. E. McMahon, R. Morris, R. Muha, D. A. Nowitz, L. Wehr, and P. J. Weinberger. We appreciate also the effective advice and criticism of T. A. Dolotta, A. G. Fraser, J. F. Maranzano, and J. R. Mashey; and we remember the important work of the late Joseph F. Ossanna. .sp 1 .in 4i B. W. Kernighan .br M. D. McIlroy .in 0 .SH HOW TO GET STARTED .LP This section sketches the basic information you need to get started on \s-1UNIX\s+1; how to log in and log out, how to communicate through your terminal, and how to run a program. See ``\s-1UNIX\s+1 for Beginners'' in (USD:1) for a more complete introduction to the system. .LP .I Logging in.\ \ .R Almost any ASCII terminal capable of full duplex operation and generating the entire character set can be used. You must have a valid user name, which may be obtained from the system administration. If you will be accessing \s-1UNIX\s+1 remotely, you will also need to obtain the telephone number for the system that you will be using. .LP After a data connection is established, the login procedure depends on what type of terminal you are using and local system conventions. If your terminal is directly connected to the computer, it generally runs at 9600 or 19200 baud. If you are using a modem running over a phone line, the terminal must be set at the speed appropriate for the modem you are using, typically 1200, 2400, or 9600 baud.
The half/full duplex switch should always be set at full-duplex. (This switch will often have to be changed since many other systems require half-duplex). .LP When a connection is established, the system types ``login:''; you type your user name, followed by the ``return'' key. If you have a password, the system asks for it and suppresses echo to the terminal so the password will not appear. After you have logged in, the ``return'', ``new line'', or ``linefeed'' keys will give exactly the same results. A message-of-the-day usually greets you before your first prompt. .LP If the system types out a few garbage characters after you have established a data connection (the ``login:'' message at the wrong speed), depress the ``break'' (or ``interrupt'') key. This is a speed-independent signal to \s-1UNIX\s+1 that a different speed terminal is in use. The system then will type ``login:,'' this time at another speed. Continue depressing the break key until ``login:'' appears clearly, then respond with your user name. .LP For all these terminals, it is important that you type your name in lower-case if possible; if you type upper-case letters, \s-1UNIX\s+1 will assume that your terminal cannot generate lower-case letters and will translate all subsequent lower-case letters to upper case. .LP The evidence that you have successfully logged in is that a shell program will type a prompt (``$'' or ``%'') to you. (The shells are described below under ``How to run a program.'') .LP For more information, consult .IR tset (1), and .IR stty (1), which tell how to adjust terminal behavior; .IR getty (8) discusses the login sequence in more detail, and .IR tty (4) discusses terminal I/O. .LP .I Logging out.\ \ .R There are three ways to log out: .IP By typing ``logout'' or an end-of-file indication (EOT character, control-D) to the shell. The shell will terminate and the ``login:'' message will appear again. .IP You can log in directly as another user by giving a .IR login (1) command. 
.IP If worse comes to worse, you can simply hang up the phone; but beware \- some machines may lack the necessary hardware to detect that the phone has been hung up. Ask your system administrator if this is a problem on your machine. .LP .I How to communicate through your terminal.\ \ .R When you type characters, a gnome deep in the system gathers your characters and saves them in a secret place. The characters will not be given to a program until you type a return (or newline), as described above in .I Logging in. .R .LP \s-1UNIX\s+1 terminal I/O is full-duplex. It has full read-ahead, which means that you can type at any time, even while a program is typing at you. Of course, if you type during output, the printed output will have the input characters interspersed. However, whatever you type will be saved up and interpreted in correct sequence. There is a limit to the amount of read-ahead, but it is generous and not likely to be exceeded unless the system is in trouble. When the read-ahead limit is exceeded, the system throws away all the saved characters (or beeps, if your prompt was a ``%''). .LP The ^U (control-U) character in typed input kills all the preceding characters in the line, so typing mistakes can be repaired on a single line. Also, the delete character (DEL) or sometimes the backspace character (control-H) erases the last character typed. .IR Tset (1) or .IR stty (1) can be used to change these defaults. Successive uses of delete (or backspace) erases characters back to, but not beyond, the beginning of the line. DEL and ^U (control-U) can be transmitted to a program by preceding them with ^V (control-V). (So, to erase ^V (control-V), you need two deletes or backspaces). .LP An .I interrupt signal .R is sent to a program by typing ^C (control-C) or the ``break'' key which is not passed to programs. This signal generally causes whatever program you are running to terminate. It is typically used to stop a long printout that you do not want. 
However, programs can arrange either to ignore this signal altogether, or to be notified when it happens (instead of being terminated). The editor, for example, catches interrupts and stops what it is doing, instead of terminating, so that an interrupt can be used to halt an editor printout without losing the file being edited. The interrupt character can also be changed with .IR tset (1) or .IR stty (1). .LP It is also possible to suspend output temporarily using ^S (control-S) and later resume output with ^Q (control-Q). Output can be thrown away without interrupting the program by typing ^O (control-O); see .IR tty (4). .LP The .IR quit "" signal is generated by typing the \s8ASCII\s10 FS character. (FS appears many places on different terminals, most commonly as control-\e or control-\^|\^.) It not only causes a running program to terminate but also generates a file with the core image of the terminated process. Quit is useful for debugging. .LP Besides adapting to the speed of the terminal, \s-1UNIX\s+1 tries to be intelligent about whether you have a terminal with the newline function or whether it must be simulated with carriage-return and line-feed. In the latter case, all input carriage returns are turned to newline characters (the standard line delimiter) and both a carriage return and a line feed are echoed to the terminal. If you get into the wrong mode, the .IR reset (1) command will rescue you. If the terminal does not appear to be echoing anything that you type, it may be stuck in ``no-echo'' or ``raw'' mode. Try typing ``(control-J)reset(control-J)'' to recover. .LP Tab characters are used freely in \s-1UNIX\s+1 source programs. If your terminal does not have the tab function, you can arrange to have them turned into spaces during output, and echoed as spaces during input. The system assumes that tabs are set every eight columns. Again, the .IR tset (1) or .IR stty (1) command can be used to change these defaults. 
.IR Tset (1) can be used to set the tab stops automatically when necessary. .LP .I How to run a program; the shells.\ \ .R When you have successfully logged in, a program called a shell is listening to your terminal. The shell reads typed-in lines, splits them up into a command name and arguments, and executes the command. A command is simply an executable program. The shell looks in several system directories to find the command. You can also place commands in your own directory and have the shell find them there. There is nothing special about system-provided commands except that they are kept in a directory where the shell can find them. .LP The command name is always the first word on an input line; it and its arguments are separated from one another by spaces. .LP When a program terminates, the shell will ordinarily regain control and type a prompt at you to show that it is ready for another command. .LP The shells have many other capabilities, that are described in detail in sections .IR sh (1) and .IR csh (1). If the shell prompts you with ``$'', then it is an instance of .IR sh (1), the original \s-1UNIX\s+1 shell. If it prompts with ``%'' then it is an instance of .IR csh (1), a shell written at Berkeley. The shells are different for all but the most simple terminal usage. Most users at Berkeley choose .IR csh (1) because of the .I history mechanism and the .I alias feature, that greatly enhance its power when used interactively. .I Csh also supports the job-control facilities; see .IR csh (1) or the Csh introduction in USD:4 for details. .LP You can change from one shell to the other by using the .I chpass (1) command, which takes effect at your next login. .LP .I The current directory.\ \ .R \s-1UNIX\s+1 has a file system arranged as a hierarchy of directories. When the system administrator gave you a user name, they also created a directory for you (ordinarily with the same name as your user name). 
When you log in, any file name you type is by default in this directory. Since you are the owner of this directory, you have full permission to read, write, alter, or destroy its contents. Permissions to have your will with other directories and files will have been granted or denied to you by their owners. As a matter of observed fact, few \s-1UNIX\s+1 users protect their files from perusal by other users. .LP To change the current directory (but not the set of permissions you were endowed with at login) use .IR cd (1). .LP .I Path names.\ \ .R To refer to files not in the current directory, you must use a path name. Full path names begin with ``/\|'', the name of the root directory of the whole file system. After the slash comes the name of each directory containing the next sub-directory (followed by a ``/\|'') until finally the file name is reached. For example, .I /\^var/\^tmp/\^filex .R refers to the file .I filex .R in the directory .I tmp; tmp .R is itself a subdirectory of .I var; var .R springs directly from the root directory. .LP If your current directory has subdirectories, the path names of files therein begin with the name of the subdirectory with no prefixed ``/\|''. .LP A path name may be used anywhere a file name is required. .LP Important commands that modify the contents of files are .IR cp (1), .IR mv (1), and .IR rm (1), which respectively copy, move (i.e. rename) and remove files. To find out the status of files or directories, use .IR ls (1). See .IR mkdir (1) for making directories and .IR rmdir (1) for destroying them. .LP For a fuller discussion of the file system, see ``A Fast File System for \s-1UNIX\s+1'' (SMM:5) by McKusick, Joy, Leffler, and Fabry. It may also be useful to glance through PRM section 2, that discusses system calls, even if you do not intend to deal with the system at that level. 
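.LP The file manipulation commands named above can be exercised safely in a scratch directory; the file names used here are only illustrative:

```shell
# A short tour of the basic file commands, run in a scratch
# directory so nothing of value is touched.
dir=$(mktemp -d)
cd "$dir"
mkdir docs                # create a directory
echo 'some text' > filex  # create a file
cp filex docs/filex       # copy it into the subdirectory
mv filex filey            # rename (move) it
ls                        # lists docs and filey
rm filey docs/filex       # remove both files
rmdir docs                # remove the now-empty directory
cd / && rm -r "$dir"
```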
.LP .I Writing a program.\ \ .R To enter the text of a source program into a \s-1UNIX\s+1 file, use the standard display editor .IR vi (1) or its \s-1WYSIWYG\s+1 counterparts .IR jove (1) and .IR emacs (1). (The old standard editor .IR ed (1) is also available.) The principal language in \s-1UNIX\s+1 is provided by the C compiler .IR cc (1). User contributed software in the latest release of the system supports the programming languages perl and C++. After the program text has been entered through the editor and written to a file, you can give the file to the appropriate language processor as an argument. The output of the language processor will be left on a file in the current directory named ``a.out''. If the output is precious, use .IR mv (1) to move it to a less exposed name after successful compilation. .LP When you have finally gone through this entire process without provoking any diagnostics, the resulting program can be run by giving its name to the shell in response to the shell (``$'' or ``%'') prompt. .LP Your programs can receive arguments from the command line just as system programs do, see ``\s-1UNIX\s+1 Programming - Second Edition'' (PSD:4), or for a more terse description .IR execve (2). .LP .I Text processing.\ \ .R Almost all text is entered through an editor such as .IR vi (1), .IR jove (1), or .IR emacs (1). The commands most often used to write text on a terminal are: .IR cat (1), .IR more (1), and .IR nroff (1). .LP The .IR cat (1) command simply dumps \s8ASCII\s10 text on the terminal, with no processing at all. .IR More (1) is useful for preventing the output of a command from scrolling off the top of your screen. It is also well suited to perusing files. .IR Nroff (1) is an elaborate text formatting program. Used naked, it requires careful forethought, but for ordinary documents it has been tamed; see .IR me (7) and .IR ms (7). .LP .IR Groff (1) converts documents to postscript for output to a Laserwriter or Phototypesetter.
It is similar to .IR nroff (1), and often works from exactly the same source text. It was used to produce this manual. .LP .IR Script (1) lets you keep a record of your session in a file, which can then be printed, mailed, etc. It provides the advantages of a hard-copy terminal even when using a display terminal. .LP .I Status inquiries.\ \ .R Various commands exist to provide you with useful information. .IR w (1) prints a list of users currently logged in, and what they are doing. .IR date (1) prints the current time and date. .IR ls (1) will list the files in your directory or give summary information about particular files. .LP .I Surprises.\ \ .R Certain commands provide inter-user communication. Even if you do not plan to use them, it would be well to learn something about them, because someone else may aim them at you. .LP To communicate with another user currently logged in, .IR write (1) or .IR talk (1) is used; .IR mail (1) will leave a message whose presence will be announced to another user when they next log in. The write-ups in the manual also suggest how to respond to these commands if you are a target. .LP If you use .IR csh (1) the key ^Z (control-Z) will cause jobs to ``stop''. If this happens before you learn about it, you can simply continue by saying ``fg'' (for foreground) to bring the job back. .LP We hope that you will come to enjoy using the BSD system. Although it is very large and contains many commands, you can become very productive using only a small subset of them. As your needs expand to doing new tasks, you will almost always find that the system has the facilities that you need to accomplish them easily and quickly. .LP Most importantly, the source code to the BSD system is cheaply available to anyone that wants it. On many BSD systems, it can be found in the directory .IR /\|usr/\|src . You may simply want to find out how something works or fix some important bug without waiting months for your vendor to respond. 
It is also particularly useful if you want to grab another piece of code to bootstrap a new project. Provided that you retain the copyrights and acknowledgements at the top of each file, you are free to redistribute your work for fun or profit. Naturally, we hope that you will allow others to also redistribute your code, though you are not required to do so unless you use copyleft code (which is primarily found in the software contributed from the Free Software Foundation and is clearly identified). .LP Good luck and enjoy BSD. .if o .bp \& .EH '''' .OH '''' .bp .OH '\s10Manual Pages''- % -\s0' .EH '\s10- % -''Manual Pages\s0' .EF '\s10\\\\*(Dt''\\\\*(Ed\s0' .OF '\s10\\\\*(Ed''\\\\*(Dt\s0' .nr PS 10 .nr VS 11.5
Graphics. Warning It is particularly important in Microsoft Windows with the graphics not to open Idle from the Start menu. Graphics will fail. Use one of the following two methods. Warning To work on most systems, this version of graphics.py cannot be used from the Idle shell. There is an issue with the use of multiple threads of execution. The video for this revised section was uploaded Aug 17, 2012: Any earlier version is completely out of date. In Microsoft Windows, have Python version 3.4 or greater and be sure to start Idle in one of two ways: Note You will just be a user of the graphics.py code, so you do not need to understand the inner workings! It uses all sorts of features of Python that are way beyond these tutorials. There is no particular need to open graphics.py in the Idle editor. Load into Idle and start running example graphIntroSteps.py, or start running from the operating system folder. Each time you press return, look at the screen and read the explanation for the next line(s). Press return: from graphics import * win = GraphWin() Zelle’s graphics are not a part of the standard Python distribution. For the Python interpreter to find Zelle’s module, it must be imported. The first line above makes all the types of objects of Zelle’s module accessible, as if they were already defined like built-in types str or list. Look around on your screen, and possibly underneath other windows: There should be a new window labeled “Graphics Window”, created by the second line. Bring it to the top, and preferably drag it around to make it visible beside your Shell window. A GraphWin is a type of object from Zelle’s graphics package that automatically displays a window when it is created. The assignment statement remembers the window object as win for future reference. (This will be our standard name for our graphics window object.) A small window, 200 by 200 pixels, is created. A pixel is the smallest little square that can be displayed on your screen.
Modern screens usually have more than 1000 pixels across the whole screen. Press return: pt = Point(100, 50) This creates a Point object at coordinates (100, 50) and remembers it as pt. A Point, like each of the graphical objects that can be drawn on a GraphWin, has a method draw. Press return: pt.draw(win) Now you should see the Point if you look hard in the Graphics Window - it shows as a single, small, black pixel. Graphics windows have a Cartesian (x,y) coordinate system. The dimensions are initially measured in pixels. The first coordinate is the horizontal coordinate, measured from left to right, so 100 is about half way across the 200 pixel wide window. The second coordinate, for the vertical direction, increases going down from the top of the window by default, not up as you are likely to expect from geometry or algebra class. The coordinate 50 out of the total 200 vertically should be about 1/4 of the way down from the top. We will see later that we can reorient the coordinate system to fit our taste. Henceforth you will see a draw method call after each object is created, so there is something to see. Press return: cir = Circle(pt, 25) cir.draw(win) The first line creates a Circle object with center at the previously defined pt and with radius 25. This object is remembered with the name cir. As with all graphics objects that may be drawn within a GraphWin, it is only made visible by explicitly using its draw method. So far, everything has been drawn in the default color black. Graphics objects like a Circle have methods to change their colors. Basic color name strings are recognized. You can choose the color for the circle outline as well as filling in the inside. Press return: cir.setOutline('red') cir.setFill('blue') Note the method names. They can be used with other kinds of Graphics objects, too. (We delay a discussion of fancier colors until Color Names and Custom Colors.) Press return: line = Line(pt, Point(150, 100)) line.draw(win) A Line object is constructed with two Points as parameters. In this case we use the previously named Point, pt, and specify another Point directly.
Technically the Line object is a segment between the two points. Warning In Python (150, 100) is a tuple, not a Point. To make a Point, you must use the full constructor: Point(150, 100). Points, not tuples, must be used in the constructors for all graphics objects. A rectangle is also specified by two points. The points must be diagonally opposite corners. Press return: rect = Rectangle(Point(20, 10), pt) rect.draw(win) In this simple system, a Rectangle is restricted to have horizontal and vertical sides. A Polygon, introduced later, handles other straight-sided shapes. Graphics objects can also be repositioned after they are drawn: the move method shifts an object by the given amounts in the x and y directions. Press return: line.move(10, 40) Did you remember that the y coordinate increases down the screen? Take your last look at the Graphics Window, and make sure that all the steps make sense. Then destroy the window win with the GraphWin method close. Press return: win.close() The example program graphIntro.py starts with the same graphics code as graphIntroSteps.py, but without the need for pressing returns. An addition I have made to Zelle’s package is the ability to print a string value of graphics objects for debugging purposes. If some graphics object isn’t visible because it is underneath something else or off the screen, temporarily adding this sort of output might be a good reality check. At the end of graphIntro.py, I added print lines to illustrate the debugging possibilities: print('cir:', cir) print('line:', line) print('rect:', rect) You can load graphIntro.py into Idle, run it, and add further lines to experiment if you like. Of course you will not see their effect until you run the whole program! In graphIntro.py, a prompt to end the graphics program appeared in the Shell window, requiring you to pay attention to two windows. Instead consider a very simple example program, face.py, where all the action takes place in the graphics window. The only interaction is to click the mouse to close the graphics window.
In Windows, have a directory window open to the Python examples folder containing face.py, where your operating system setup may allow you to just double click on the icon for face.py to run it. The whole program is shown first; smaller pieces of it are discussed later: '''A simple graphics example constructs a face from basic shapes. ''' from graphics import * def main(): win = GraphWin('Face', 200, 150) # give title and dimensions win.yUp() # make right side up coordinates! head = Circle(Point(40,100), 25) # set center and radius head.setFill("yellow") head.draw(win) eye1 = Circle(Point(30, 105), 5) eye1.setFill('blue') eye1.draw(win) eye2 = Line(Point(45, 105), Point(55, 105)) # set endpoints eye2.setWidth(3) eye2.draw(win) mouth = Oval(Point(30, 90), Point(50, 85)) # set corners of bounding box mouth.setFill("red") mouth.draw(win) label = Text(Point(100, 120), 'A face') label.draw(win) message = Text(Point(win.getWidth()/2, 20), 'Click anywhere to quit.') message.draw(win) win.getMouse() win.close() main() Let us look at individual parts. Until further notice the set-off code is for you to read and have explained. from graphics import * Immediately after the documentation string, always have the import line in your graphics program, to allow easy access to the graphics.py module. win = GraphWin('Face', 200, 150) # give title and dimensions win.yUp() # make right side up coordinates! The first line shows the more general parameters for constructing a new GraphWin, a window title plus width and height in pixels. The second line shows how to turn the coordinate system right-side-up, so the y coordinate increases up the screen, using the yUp method. (This is one of my additions to Zelle’s graphics.) Thereafter, all coordinates are given in the new coordinate system. All the lines of code up to this point in the program are my standard graphics program starting lines (other than the specific values for the title and dimensions).
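The yUp method itself comes with this tutorial's modified graphics.py, but the arithmetic behind reorienting the y axis is simple subtraction. The sketch below illustrates the idea only; flip_y is a hypothetical name for this illustration, not a function in graphics.py:

```python
def flip_y(y_down, height):
    """Convert a y coordinate measured down from the top of a window
    that is `height` pixels tall into one measured up from the bottom.
    The same subtraction converts back, so the function is its own inverse."""
    return height - y_down

# In a 150-pixel-tall window like the face example:
print(flip_y(0, 150))    # 150: the top edge in y-up coordinates
print(flip_y(150, 150))  # 0: the bottom edge
print(flip_y(100, 150))  # 50
```

Applying the conversion twice returns the original coordinate, which is why a single consistent flip of the axis is all that is needed to work in right-side-up coordinates.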
You will likely start your programs with similar code. head = Circle(Point(40,100), 25) # set center and radius head.setFill('yellow') head.draw(win) eye1 = Circle(Point(30, 105), 5) eye1.setFill('blue') eye1.draw(win) The lines above create two circles, in each case specifying the centers directly. They are filled in and made visible. Also note, that because the earlier win.yUp call put the coordinates in the normal orientation, the y coordinates, 100 and 105, are above the middle of the 150 pixel high window. eye2 = Line(Point(45, 105), Point(55, 105)) # set endpoints eye2.setWidth(3) eye2.draw(win) The code above draws and displays a line, and illustrates another method available to graphics objects, setWidth, making a thicker line. mouth = Oval(Point(30, 90), Point(50, 85)) # set corners of bounding box mouth.setFill('red') mouth.draw(win) label = Text(Point(100, 120), 'A face') label.draw(win) The code above illustrates how a Text object is used to place text on the window. The parameters to construct the Text object are the point at the center of the text, and the text string itself. The exact coordinates for the parts were determined by a number of trial-and-error refinements to the program. An advantage of graphics is that you can see the results of your programming, and make changes if you do not like the results! The final action is to have the user signal to close the window. Just as with waiting for keyboard input from input, it is important to prompt the user before waiting for a response! In a GraphWin, that means the prompt must be made with a Text object displayed explicitly before the response is expected. message = Text(Point(win.getWidth()/2, 20), 'Click anywhere to quit.') message.draw(win) win.getMouse() win.close() The new addition to the Text parameters here is win.getWidth() to obtain the window width. (There is also a win.getHeight().)
Using win.getWidth()/2 means the horizontal position is set up to be centered, half way across the window’s width. After the first two lines draw the prompting text, the line win.getMouse() waits for a mouse click. In this program, the position of the mouse click is not important. (In the next example the position of this mouse click will be used.) As you have seen before, win.close() closes the graphics window. While our earlier text-based Python programs have automatically terminated after the last line finishes executing, that is not true for programs that create new windows: The graphics window must be explicitly closed. The win.close() is necessary. We will generally want to prompt the user to finally close the graphics window. Because such a sequence is so common, I have added another method for Zelle’s GraphWin objects, promptClose, so the last four lines can be reduced to win.promptClose(win.getWidth()/2, 20) where the only specific data needed is the coordinates of the center of the prompt. The modified program is in face2.py. You can copy the form of this program for other simple programs that just draw a picture. The size and title on the window will change, as well as the specific graphical objects, positions, and colors. Something like the last line can be used to terminate the program. Warning If you write a program with a bug, and the program bombs out while there is a GraphWin on the screen, a dead GraphWin lingers. The best way to clean things up is to make the Shell window be the current window and select from the menu Shell ‣ Restart Shell. Another simple drawing example is balloons.py. Feel free to run it and look at the code in Idle. Note that the steps for the creation of all three balloons are identical, except for the location of the center of each balloon, so a loop over a list of the centers makes sense. The next example, triangle.py, illustrates similar starting and ending code. In addition it explicitly interacts with the user. 
Rather than the code specifying literal coordinates for all graphical objects, the program remembers the places where the user clicks the mouse, and uses them as the vertices of a triangle. Return to the directory window for the Python examples. In Windows you can double click on the icon for triangle.py to run it. While running the program, follow the prompts in the graphics window and click with the mouse as requested. After you have run the program, you can examine the program in Idle or look below: '''Program: triangle.py or triangle.pyw (best name for Windows) Interactive graphics program to draw a triangle, with prompts in a Text object and feedback via mouse clicks. ''' from graphics import * def main(): win = GraphWin('Draw a Triangle', 350, 350) win.yUp() # right side up coordinates win.setBackground('yellow') message = Text(Point(win.getWidth()/2, 30), 'Click on three points') message.setTextColor('red') message.setStyle('italic') message.setSize(20) message.draw(win) # Get and draw three vertices of triangle p1 = win.getMouse() p1.draw(win) p2 = win.getMouse() p2.draw(win) p3 = win.getMouse() p3.draw(win) vertices = [p1, p2, p3] # Use Polygon object to draw the triangle triangle = Polygon(vertices) triangle.setFill('gray') triangle.setOutline('cyan') triangle.setWidth(4) # width of boundary line triangle.draw(win) message.setText('Click anywhere to quit') # change text message win.getMouse() win.close() main() Let us look at individual parts. The lines before the new line: win.setBackground('yellow') are standard starting lines (except for the specific values chosen for the width, height, and title). The background color is a property of the whole graphics window that you can set. message = Text(Point(win.getWidth()/2, 30), 'Click on three points') message.setTextColor('red') message.setStyle('italic') message.setSize(20) message.draw(win) Again a Text object is created. This is the prompt for user action.
These lines illustrate most of the ways the appearance of a Text object may be modified, with results like in most word processors. The reference pages for graphics.py give the details. This reference is introduced shortly in The Documentation for graphics.py. After the prompt, the program looks for a response: # Get and draw three vertices of triangle p1 = win.getMouse() p1.draw(win) p2 = win.getMouse() p2.draw(win) p3 = win.getMouse() p3.draw(win) The win.getMouse() method (with no parameters) waits for you to click the mouse inside win. Then the Point where the mouse was clicked is returned. In this code three mouse clicks are waited for, remembered in variables p1, p2, and p3, and the points are drawn. Next we introduce a very versatile type of graphical object, a Polygon, which may have any number of vertices specified in a list as its parameter. We see that the methods setFill and setOutline that we used earlier on a Circle, and the setWidth method we used for a Line, also apply to a Polygon (and also to other graphics objects). vertices = [p1, p2, p3] triangle = Polygon(vertices) triangle.setFill('gray') triangle.setOutline('cyan') triangle.setWidth(4) triangle.draw(win) Besides changing the style of a Text object, the text itself may be changed: message.setText('Click anywhere to quit') Then come the lines responding to this prompt: win.getMouse() win.close() If you want to use an existing Text object to display the quitting prompt, as I did here, I provide a variation on my window closing method that could replace the last three lines: win.promptClose(message) An existing Text object may be given as parameter rather than coordinates for a new text object. The complete code with that substitution is in triangle2.py. If you want to make regular polygons or stars, you need some trigonometry, not required for this tutorial, but you can see its use in example polygons.py. This Windows-specific section is not essential.
It does describe how to make some Windows graphical programs run with less clutter. If you ran the triangle.py program by double clicking its icon under Windows, you might have noticed a console window first appearing, followed by the graphics window. For this program, there was no keyboard input or screen output through the console window, so the console window was unused and unnecessary. In such cases, under Windows, you can change the source file extension from .py to .pyw, suppressing the display of the console window. If you are using Windows, change the filename triangle.py to triangle.pyw, double click on the icon in its directory folder, and check it out. The distinction is irrelevant inside Idle, which always has its Shell window. This optional section only looks forward to more elaborate graphics systems than are used in this tutorial. One limitation of the graphics.py module is that it is not robust if a graphics window is closed by clicking on the standard operating system close button on the title bar. If you close a graphics window that way, you are likely to get a Python error message. On the other hand, if your program creates a graphics window and then terminates abnormally due to some other error, the graphics window may be left orphaned. In this case the close button on the title bar is important: it is the easiest method to clean up and get rid of the window! This lack of robustness is tied to the simplification designed into the graphics module. Modern graphics environments are event driven. The program can be interrupted by input from many sources including mouse clicks and key presses. This style of programming has a considerable learning curve. In Zelle’s graphics package, the complexities of the event driven model are pretty well hidden.
If the programmer wants user input, only one type can be specified at a time (either a mouse click in the graphics window via the getMouse method, or via the input keyboard entry methods into the Shell window). Thus far various parts of Zelle’s graphics package have been introduced by example. A systematic reference to Zelle’s graphics package, with the form of all function calls, is available online. We have introduced most of the important concepts and methods. One special graphics input object type, Entry, will be discussed later. You might skip it for now. Another section of the reference that will not be pursued in the tutorials is the Image class. Meanwhile you can look at its entry in the reference. It is important to pay attention to the organization of the reference: Most graphics objects share a number of common methods. Those methods are described together, first. Then, under the headings for specific types, only the specialized additional methods are discussed. The version for this Tutorial has a few elaborations. Here is all their documentation together: When you first create a GraphWin, the y coordinates increase down the screen. To reverse to the normal orientation use my GraphWin yUp method. win = GraphWin('Right side up', 300, 400) win.yUp() You generally want to continue displaying your graphics window until the user chooses to have it closed. The GraphWin promptClose method posts a prompt, waits for a mouse click, and closes the GraphWin. There are two ways to call it, depending on whether you want to use an existing Text object, or just specify a location for the center of the prompt. win.promptClose(win.getWidth()/2, 30) # specify x, y coordinates of prompt or msg = Text(Point(100, 50), 'Original message...') msg.draw(win) # ... # ... just important that there is a drawn Text object win.promptClose(msg) # use existing Text object Each graphical type can be converted to a string or printed, and a descriptive string is produced (for debugging purposes).
It only shows position, not other parts of the state of the object. >>> pt = Point(30, 50) >>> print(pt) Point(30, 50) >>> ln = Line(pt, Point(100, 150)) >>> print(ln) Line(Point(30, 50), Point(100, 150)) Make a program scene.py creating a scene with the graphics methods. You are likely to need to adjust the positions of objects by trial and error until you get the positions you want. Make sure you have graphics.py in the same directory as your program. Elaborate the scene program above so it becomes changeScene.py, and changes one or more times when you click the mouse (and use win.getMouse()). You may use the position of the mouse click to affect the result, or it may just indicate you are ready to go on to the next view. Zelle chose to have the constructor for a Rectangle take diagonally opposite corner points as parameters. Suppose you prefer to specify only one corner and also specify the width and height of the rectangle. You might come up with the following function, makeRect, to return such a new Rectangle. Read the following attempt: def makeRect(corner, width, height): '''Return a new Rectangle given one corner Point and the dimensions.''' corner2 = corner corner2.move(width, height) return Rectangle(corner, corner2) The second corner must be created to use in the Rectangle constructor, and it is done above in two steps. Start corner2 from the given corner and shift it by the dimensions of the Rectangle to the other corner. With both corners specified, you can use Zelle’s version of the Rectangle constructor. Unfortunately this approach is incorrect. Run the example program makeRectBad.py: '''Program: makeRectBad.py Attempt a function makeRect (incorrectly), which takes a corner point and dimensions to construct a Rectangle. ''' from graphics import * def makeRect(corner, width, height): # Incorrect!
'''Return a new Rectangle given one corner Point and the dimensions.''' corner2 = corner corner2.move(width, height) return Rectangle(corner, corner2) def main(): win = GraphWin('Draw a Rectangle (NOT!)', 300, 300) win.yUp() rect = makeRect(Point(20, 50), 250, 200) rect.draw(win) win.promptClose(win.getWidth()/2, 20) main() By stated design, this program should draw a rectangle with one corner at the point (20,50) and the other corner at (20+250,50+200) or the point (270,250), and so the rectangle should take up most of the 300 by 300 window. When you run it, however, that is not what you see. Look carefully. You should just see one Point toward the upper right corner, where the second corner should be. Since a Rectangle was being drawn, it looks like it is the tiniest of Rectangles, where the opposite corners are at the same point! Hm, well the program did make the corners be the same initially. Recall we set corner2 = corner What happens after that? Read and follow the details of what happens. We need to take a much more careful look at what naming an object means. A good way to visualize this association between a name and an object is to draw an arrow from the name to the object associated with it. The object here is a Point, which has an x and y coordinate describing its state, so when the makeRect method is started the parameter name corner is associated with the actual parameter, a Point with coordinates (20, 50). Next, the assignment statement associates the name corner2 with the same object. It is another name, or alias, for the original Point. The next line, corner2.move(width, height) internally changes or mutates the Point object, and since in this case width is 250 and height is 200, the coordinates of the Point associated with the name corner2 change to 20+250=270 and 50+200=250: Look! The name corner is still associated with the same object, but that object has changed internally!
That is the problem: we wanted to keep the name corner associated with the point with original coordinates, but it has been modified. The solution is to use the clone method that is defined for all the graphical objects in graphics.py. It creates a separate object, which is a copy with an equivalent state. We just need to change the line corner2 = corner to corner2 = corner.clone() A diagram of the situation after the cloning is: Though corner and corner2 refer to points with equivalent coordinates, they do not refer to the same object. Then after corner2.move(width, height) we get: No conflict: corner and corner2 refer to the corners we want. Run the corrected example program, makeRectangle.py. Read this section if you want a deeper understanding of the significance of mutable and immutable objects. This alias problem only came up because a Point is mutable. We had no such problems with the immutable types int or str. Read and follow the discussion of the following code. Just for comparison, consider the corresponding diagrams for code with ints that looks superficially similar: a = 2 b = a b = b + 3 After the first two lines we have an alias again: The third line does not change the int object 2. The result of the addition operation refers to a different object, 5, and the name b is assigned to it: Hence a is still associated with the integer 2 - no conflict. It is not technically correct to think of b as being the number 2, and then 5, but a little sloppiness of thought does not get you in trouble with immutable types. With mutable types, however, be very careful of aliases. Then it is very important to remember the indirectness: that a name is not the same thing as the object it refers to. Another mutable type is list. A list can be cloned with the slice notation: [:]. Try the following in the Shell: nums = [1, 2, 3] numsAlias = nums numsClone = nums[:] nums.append(4) numsAlias.append(5) nums numsAlias numsClone Run the example program, backAndForth0.py.
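The aliasing behavior just described does not require a graphics window to observe. The sketch below uses a minimal stand-in class (Pt is my invention, mimicking only the move and clone behavior of a graphics Point) so the whole experiment runs in plain Python:

```python
class Pt:
    """Minimal stand-in for graphics.Point (no window needed)."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def move(self, dx, dy):        # mutates the object in place
        self.x += dx
        self.y += dy

    def clone(self):               # returns an independent copy
        return Pt(self.x, self.y)

corner = Pt(20, 50)
corner2 = corner                   # alias: a second name for the SAME object
corner2.move(250, 200)
print(corner.x, corner.y)          # 270 250 -- the "original" moved too!

corner = Pt(20, 50)
corner2 = corner.clone()           # separate object with equal state
corner2.move(250, 200)
print(corner.x, corner.y)          # 20 50 -- unchanged, as intended

# The same distinction for lists, following the Shell experiment above:
nums = [1, 2, 3]
numsAlias = nums                   # alias
numsClone = nums[:]                # shallow clone via slicing
nums.append(4)
numsAlias.append(5)
print(nums)                        # [1, 2, 3, 4, 5]
print(numsAlias)                   # [1, 2, 3, 4, 5] -- same list as nums
print(numsClone)                   # [1, 2, 3]       -- unaffected
```

The first half reproduces the makeRect bug and its clone fix; the second half shows the outputs you should expect from the Shell sequence above.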
The whole program is shown below for convenience. Then each individual new part of the code is discussed individually: '''Test animation and depth. ''' from graphics import * import time def main(): win = GraphWin('Back and Forth', 300, 300) win.yUp() # make right side up coordinates! rect = Rectangle(Point(200, 90), Point(220, 100)) rect.setFill("blue") rect.draw(win) cir1 = Circle(Point(40,100), 25) cir1.setFill("yellow") cir1.draw(win) cir2 = Circle(Point(150,125), 25) cir2.setFill("red") cir2.draw(win) for i in range(46): cir1.move(5, 0) time.sleep(.05) for i in range(46): cir1.move(-5, 0) time.sleep(.05) win.promptClose(win.getWidth()/2, 20) main() Read the discussion below of pieces of the code from the program above. Do not try to execute fragments alone. There are both an old and a new form of import statement: from graphics import * import time The program uses a function from the time module. The syntax used for the time module is actually the safer and more typical way to import a module. As you will see later in the program, the sleep function used from the time module will be referenced as time.sleep(). This tells the Python interpreter to look in the time module for the sleep function. If we had used the import statement from time import * then the sleep function could just be referenced with sleep(). This is obviously easier, but it obscures the fact that the sleep function is not a part of the current module. Also several modules that a program imports might have functions with the same name. With the individual module name prefix, there is no ambiguity. Hence the form import moduleName is actually safer than from moduleName import *. You might think that all modules could avoid using any of the same function names with a bit of planning. To get an idea of the magnitude of the issue, have a look at the number of modules available to Python. 
Try the following in the Shell (and likely wait a number of seconds): help('modules') Without module names to separate things out, it would be very hard to totally avoid name collisions with the enormous number of modules you see displayed, that are all available to Python! Back to the current example program: The main program starts with standard window creation, and then makes three objects: rect = Rectangle(Point(200, 90), Point(220, 100)) rect.setFill('blue') rect.draw(win) cir1 = Circle(Point(40,100), 25) cir1.setFill('yellow') cir1.draw(win) cir2 = Circle(Point(150,125), 25) cir2.setFill('red') cir2.draw(win) Zelle’s reference pages do not mention the fact that the order in which these objects are first drawn is significant. If objects overlap, the ones which used the draw method later appear on top. Other object methods like setFill or move do not alter which are in front of which. This becomes significant when cir1 moves. The moving cir1 goes over the rectangle and behind cir2. (Run the program again if you missed that.) The animation starts with the code for a simple repeat loop: for i in range(46): # animate cir1 to the right cir1.move(5, 0) time.sleep(.05) This very simple loop animates cir1 moving in a straight line to the right. As in a movie, the illusion of continuous motion is given by jumping only a short distance each time (increasing the horizontal coordinate by 5). The time.sleep function, mentioned earlier, takes as parameter a time in seconds to have the program sleep, or delay, before continuing with the iteration of the loop. This delay is important, because modern computers are so fast, that the intermediate motion would be invisible without the delay. The delay can be given as a decimal, to allow the time to be a fraction of a second. The next three lines are almost identical to the previous lines, and move the circle to the left (-5 in the horizontal coordinate each time).
for i in range(46): # animate cir1 to the left cir1.move(-5, 0) time.sleep(.05) The next example program, backAndForth1.py, is just a slight variation, looking to the user just like the last version. Only the small changes are shown below. This version was written after noticing how similar the two animation loops are, suggesting an improvement to the program: Animating any object to move in a straight line is a logical abstraction well expressed via a function. The loop in the initial version of the program contained a number of arbitrarily chosen constants, which make sense to turn into parameters. Also, the object to be animated does not need to be cir1, it can be any of the drawable objects in the graphics package. The name shape is used to make this a parameter: def moveOnLine(shape, dx, dy, repetitions, delay): for i in range(repetitions): shape.move(dx, dy) time.sleep(delay) Then in the main function the two similar animation loops are reduced to a line for each direction: moveOnLine(cir1, 5, 0, 46, .05) moveOnLine(cir1, -5, 0, 46, .05) Make sure you see these two lines with function calls behave the same way as the two animation loops in the main program of the original version. Run the next example version, backAndForth2.py. The changes are more substantial here, and the display of the whole program is followed by display and discussion of the individual changes: '''Test animation of a group of objects making a face. ''' def main(): win = GraphWin('Back and Forth', 300, 300) win.yUp() # make right side up coordinates!
rect = Rectangle(Point(200, 90), Point(220, 100)) rect.setFill("blue") rect.draw(win) head = Circle(Point(40,100), 25) head.setFill("yellow") head.draw(win) eye1 = Circle(Point(30, 105), 5) eye1.setFill('blue') eye1.draw(win) eye2 = Line(Point(45, 105), Point(55, 105)) eye2.setWidth(3) eye2.draw(win) mouth = Oval(Point(30, 90), Point(50, 85)) mouth.setFill("red") mouth.draw(win) faceList = [head, eye1, eye2, mouth] cir2 = Circle(Point(150,125), 25) cir2.setFill("red") cir2.draw(win) moveAllOnLine(faceList, 5, 0, 46, .05) moveAllOnLine(faceList, -5, 0, 46, .05) win.promptClose(win.getWidth()/2, 20) main() Read the following discussion of program parts. Moving a single elementary shape is rather limiting. It is much more interesting to compose a more complicated combination, like the face from the earlier example face.py. To animate such a combination, you cannot use the old moveOnLine function, because we want all the parts to move together, not one eye all the way across the screen and then have the other eye catch up! A variation on moveOnLine is needed where all the parts move together. We need all the parts of the face to move one step, sleep, and all move again, .... This could all be coded in a single method, but there are really two ideas here: moving every object in a group one step, and repeating such group moves with a delay in between. This suggests two functions. Another issue is how to handle a group of elementary graphics objects. The most basic combination of objects in Python is a list, so we assume a parameter shapeList, which is a list of elementary graphics objects. For the first function, moveAll, just move all the objects in the list one step.
Since we assume a list of objects and we want to move each, this suggests a for-each loop:

def moveAll(shapeList, dx, dy):
    ''' Move all shapes in shapeList by (dx, dy).'''
    for shape in shapeList:
        shape.move(dx, dy)

Having this function, we can easily write the second function moveAllOnLine, with a simple change from the moveOnLine function, substituting a call of the moveAll function for the line with the move method:

def moveAllOnLine(shapeList, dx, dy, repetitions, delay):
    for i in range(repetitions):
        moveAll(shapeList, dx, dy)
        time.sleep(delay)

The code in main to construct the face is the same as in the earlier example face.py. Once all the pieces are constructed and colored, they must be placed in a list, for use in moveAllOnLine:

faceList = [head, eye1, eye2, mouth]

Then, later, the animation uses the faceList to make the face go back and forth:

moveAllOnLine(faceList, 5, 0, 46, .05)
moveAllOnLine(faceList, -5, 0, 46, .05)

This version of the program has encapsulated and generalized the moving and animating by creating functions and adding parameters that can be substituted. Again, make sure you see how the functions communicate to make the whole program work. This is an important and non-trivial use of functions. In fact all parts of the face do not actually move at once: the moveAll loop code moves each part of the face separately, in sequence. Depending on your computer setup, all the parts of the face may appear to move together. Again, the computer is much faster than our eyes. On a computer that repaints the screen fast enough, the only images we notice are the ones on the screen when the animation is sleeping. Note: On a fast enough computer you can make many consecutive changes to an image before the next sleep statement, and they all appear to happen at once in the animation. Optional refinement: Not all computers are set up for the same graphics speed in Python. One machine that I use animates backAndForth2.py quite well. Another seems to have the mouth wiggle. On the latter sort of machine, during animation it is useful not to have visible screen changes for every individual move.
Instead you can explicitly tell the computer when it is the right time to redraw the screen. The computer can store changes and then flush them to the screen. Withholding updates is controlled by win.autoflush. It starts as True, but can be changed to False before animation. When set to False, you must call win.flush() every time you want the screen refreshed. That is going to be just before the time.sleep() in an animation. In backAndForth2Flush.py this is illustrated, with moveAllOnLine replaced by moveAllOnLineFlush:

#NEW Flush version with win parameter
def moveAllOnLineFlush(shapeList, dx, dy, repetitions, delay, win):
    '''Animate the shapes in shapeList along a line in win.
    Move by (dx, dy) each time.
    Repeat the specified number of repetitions.
    Have the specified delay (in seconds) after each repeat.
    '''
    win.autoflush = False  # NEW: set before animation
    for i in range(repetitions):
        moveAll(shapeList, dx, dy)
        win.flush()  # NEW: needed to make all the changes appear
        time.sleep(delay)
    win.autoflush = True  # NEW: set after animation

Run the next example program backAndForth3.py. The final version, backAndForth3.py and its variant, backAndForth3Flush.py, use the observation that the code to make a face embodies one unified idea, suggesting encapsulation inside a function. Once you have encapsulated the code to make a face, we can make several faces! Then the problem with the original code for the face is that all the positions for the facial elements are hard-coded: the face can only be drawn in one position. The full listing of backAndForth3.py below includes a makeFace function with a parameter for the position of the center of the face. Beneath the listing of the whole program is a discussion of the individual changes:

'''Test animation of a group of objects making a face. Combine the face elements in a function, and use it twice. Have an extra level of repetition in the animation.
This version may be wobbly and slow on some machines: then see backAndForthFlush.py. '''

def makeFace(center, win):
    '''display face centered at center in window win.
    Return a list of the shapes in the face.
    '''
    head = Circle(center, 25)
    head.setFill("yellow")
    head.draw(win)
    eye1Center = center.clone()  # face positions are relative to the center
    eye1Center.move(-10, 5)      # locate further points in relation to others
    # ... remaining face elements are constructed the same way, as discussed below

def main():
    win = GraphWin('Back and Forth', 300, 300)
    win.yUp()  # make right side up coordinates!
    rect = Rectangle(Point(200, 90), Point(220, 100))
    rect.setFill("blue")
    rect.draw(win)
    faceList = makeFace(Point(40, 100), win)
    faceList2 = makeFace(Point(150,125), win)
    stepsAcross = 46
    dx = 5
    dy = 3
    wait = .05
    for i in range(3):
        moveAllOnLine(faceList, dx, 0, stepsAcross, wait)
        moveAllOnLine(faceList, -dx, dy, stepsAcross//2, wait)
        moveAllOnLine(faceList, -dx, -dy, stepsAcross//2, wait)
    win.promptClose(win.getWidth()/2, 20)

main()

Read the following discussion of program parts. As mentioned above, the face construction function allows a parameter to specify where the center of the face is. The other parameter is the GraphWin that will contain the face:

def makeFace(center, win):

Then the head is easily drawn, using this center, rather than the previous cir1 with its specific center point (40, 100):

head = Circle(center, 25)
head.setFill('yellow')
head.draw(win)

For the remaining Points used in the construction there is the issue of keeping the right relation to the center. This is accomplished much as in the creation of the second corner point in the makeRectangle function in Section Issues with Mutable Objects. A clone of the original center Point is made, and then moved by the difference in the positions of the originally specified Points. For instance, in the original face, the center of the head and first eye were at (40, 100) and (30, 105) respectively. That means a shift between the two coordinates of (-10, 5), since 30-40 = -10 and 105-100 = 5.
eye1Center = center.clone()  # face positions are relative to the center
eye1Center.move(-10, 5)      # locate further points in relation to others
eye1 = Circle(eye1Center, 5)
eye1.setFill('blue')
eye1.draw(win)

The only other changes to the face are similar, cloning and moving Points, rather than specifying them with explicit coordinates. Finally, the list of elements for the face must be returned to the caller:

return [head, eye1, eye2, mouth]

Then in the main function, the program creates a face in exactly the same place as before, but using the makeFace function, with the original center of the face Point(40, 100). Now with the makeFace function, with its center parameter, it is also easy to replace the old cir2 with a whole face!

faceList = makeFace(Point(40, 100), win)
faceList2 = makeFace(Point(150,125), win)

The animation section is considerably elaborated in this version:

stepsAcross = 46
dx = 5
dy = 3
wait = .05
for i in range(3):
    moveAllOnLine(faceList, dx, 0, stepsAcross, wait)
    moveAllOnLine(faceList, -dx, dy, stepsAcross//2, wait)
    moveAllOnLine(faceList, -dx, -dy, stepsAcross//2, wait)

The unidentified numeric literals that were used before are replaced by named values that easily identify the meaning of each one. This also allows the numerical values to be stated only once, allowing easy modification. The whole animation is repeated three times by the use of a simple repeat loop. The animations in the loop body illustrate that the straight line of motion does not need to be horizontal. The second and third lines use a non-zero value of both dx and dy for the steps, and move diagonally. Make sure you see now how the whole program works together, including all the parameters for the moves in the loop. By the way, the documentation of the functions in a module you have just run in the Shell is directly available. Try in the Shell: help(moveAll)

* Save backAndForth3.py or backAndForth3Flush.py to the new name backAndForth4.py.
Add a triangular nose in the middle of the face in the makeFace function. Like the other features of the face, make sure the position of the nose is relative to the center parameter. Make sure the nose is included in the final list of elements of the face that get returned!
* Make a program faces.py that asks the user to click the mouse, and then draws a face at the point where the user clicked. Copy the makeFace function definition from the previous exercise, and use it! Elaborate this with a Simple Repeat Loop, so this is repeated six times: after each of 6 mouse clicks, a face immediately appears at the location of the latest click. Think how you can reuse your code each time through the loop!
* Animate two faces moving in different directions at the same time.

def main():
    win = GraphWin("Greeting", 300, 300)
    win.yUp()
    instructions = Text(Point(win.getWidth()/2, 40), "Enter your name.\nThen click the mouse.")
    instructions.draw(win)
    entry1 = Entry(Point(win.getWidth()/2, 200), 10)
    entry1.draw(win)
    Text(Point(win.getWidth()/2, 230), 'Name:').draw(win)  # label for the Entry
    win.getMouse()  # To know the user is finished with the text.
    name = entry1.getText()
    greeting1 = 'Hello, ' + name + '!'
    Text(Point(win.getWidth()/3, 150), greeting1).draw(win)
    greeting2 = 'Bonjour, ' + name + '!'
    Text(Point(2*win.getWidth()/3, 100), greeting2).draw(win)
    win.promptClose(instructions)

main()

The only part of this with new ideas is:

entry1 = Entry(Point(win.getWidth()/2, 200), 10)
entry1.draw(win)
Text(Point(win.getWidth()/2, 230), 'Name:').draw(win)  # label for the Entry
win.getMouse()  # To know the user is finished with the text.
name = entry1.getText()

The first two lines create and draw the Entry, where the user can type. The getMouse call makes the program wait until the user clicks the mouse, signaling that the text is complete, and only then does the program read the Entry text. The method name getText is the same as that used with a Text object. Run the next example, addEntries.py, also copied below:

"""Example with two Entry objects and type conversion. Do addition.
""" from graphics import * def main(): win = GraphWin("Addition", 300, 300) win.yUp() instructions = Text(Point(win.getWidth()/2, 30), "Enter two numbers.\nThen click the mouse.") instructions.draw(win) entry1 = Entry(Point(win.getWidth()/2, 250),25) entry1.setText('0') entry1.draw(win) Text(Point(win.getWidth()/2, 280),'First Number:').draw(win) entry2 = Entry(Point(win.getWidth()/2, 180),25) entry2.setText('0') entry2.draw(win) Text(Point(win.getWidth()/2, 210),'Second Number:').draw(win) win.getMouse() # To know the user is finished with the text. numStr1 = entry1.getText() num1 = int(numStr1) numStr2 = entry2.getText() num2 = int(numStr2) sum = num1 + num2 result = "The sum of\n{num1}\nplus\n{num2}\nis {sum}.".format(**locals()) Text(Point(win.getWidth()/2, 110), result).draw(win) win.promptClose(instructions) main() As with the input statement, you only can read strings from an Entry. effort. Since I do not refer later to the Text object, I do not bother to name it, but just draw it immediately. Then the corresponding change in the main function is just two calls to this function: entry1 = makeLabeledEntry(Point(win.getWidth()/2, 250), 25, '0', 'First Number:', win) entry2 = makeLabeledEntry(Point(win.get first()) Thus far we have only used common color names. In fact there are a very large number of allowed color names, and also the ability to draw with custom colors. First, the graphics package is built on an underlying graphics system, Tkinter, which has a large number of color names defined. Each of the names can be used by itself, like ‘red’, ‘salmon’ or ‘aquamarine’ or with a lower intensity by specifying with a trailing number 2, 3, or 4, like ‘red4’ for a dark red. Though the ideas for the coding have not all been introduced, it is still informative to run the example program colors.py. As you click the mouse over and over, you see the names and appearances of a wide variety of built-in color names. 
The names must be placed in quotes, but capitalization is ignored. Custom colors can also be created. To do that requires some understanding of human eyes and color (and the Python tools). The only colors detected directly by the human eyes are red, green, and blue. Each amount is registered by a different kind of cone cell in the retina. As far as the eye is concerned, all the other colors we see are just combinations of these three colors. This fact is used in color video screens: they only directly display these three colors. A common scale to use in labeling the intensity of each of the basic colors (red, green, blue) is from 0 to 255, with 0 meaning none of the color, and 255 being the most intense. Hence a color can be described by a sequence of red, green, and blue intensities (often abbreviated RGB). The graphics package has a function, color_rgb, to create colors this way. For instance a color with about half the maximum red intensity, no green, and maximum blue intensity would be

aColor = color_rgb(128, 0, 255)

Such a creation can be used any place a color is used in the graphics (e.g. circle.setFill(aColor)). Another interesting use of the color_rgb function is to create random colors. Run example program randomCircles.py. The code also is here:

"""Draw random circles. """
from graphics import *
import random, time

def main():
    win = GraphWin("Random Circles", 300, 300)
    for i in range(75):
        r = random.randrange(256)
        b = random.randrange(256)
        g = random.randrange(256)
        color = color_rgb(r, g, b)
        radius = random.randrange(3, 40)
        x = random.randrange(5, 295)
        y = random.randrange(5, 295)
        circle = Circle(Point(x,y), radius)
        circle.setFill(color)
        circle.draw(win)
        time.sleep(.05)
    win.promptClose(win.getWidth()/2, 20)

main()

Read the fragments of this program and their explanations: To do random things, the program needs a function from the random module.
This example shows that imported modules may be put in a comma separated list:

import random, time

You have already seen the built-in function range. To generate a sequence of all the integers 0, 1, ... 255, you would use

range(256)

This is the full list of possible values for the red, green or blue intensity parameter. For this program we randomly choose any one element from this sequence. Instead of the range function, use the random module’s randrange function, as in

r = random.randrange(256)
b = random.randrange(256)
g = random.randrange(256)
color = color_rgb(r, g, b)

This gives randomly selected values to each of r, g, and b, which are then used to create the random color. I want a random circle radius, but I do not want a number as small as 0, making it invisible. The range and randrange functions both refer to a possible sequence of values starting with 0 when a single parameter is used. It is also possible to add a different starting value as the first parameter. You still must specify a value past the end of the sequence. For instance range(3, 40) would refer to the sequence 3, 4, 5, ... , 39 (starting with 3 and not quite reaching 40). Similarly random.randrange(3, 40) randomly selects an arbitrary element of range(3, 40). I use the two-parameter version to select random parameters for a Circle:

radius = random.randrange(3, 40)
x = random.randrange(5, 295)
y = random.randrange(5, 295)
circle = Circle(Point(x,y), radius)

What are the smallest and largest values I allow for x and y? [3] Random values are often useful in games. Write a program ranges.py in three parts. (Test after each added part.) This problem is not a graphics program. It is just a regular text program to illustrate your understanding of ranges and loops. For simplicity each of the requested number sequences can just be printed with one number per line. Print a label for each number sequence before you print the sequence, like Numbers 1-4, Numbers 1-n, Five random numbers in 1-n.
* Write a program texttriangle.py. This, too, is not a graphics program. Prompt the user for a small positive integer value, that I’ll call n. Then use a for-loop with a range function call to make a triangular arrangement of ‘#’ characters, with n ‘#’ characters in the last line. Hint: [5] Then leave a blank line. Then make a similar triangle, except start with the line with n ‘#’ characters. To make the second triangle, you can use a for-loop of the form discussed so far, but that is trickier than looking ahead to The Most General range Function and using a for-loop where a range function call has a negative step size. Here is the screen after a possible run with user input 4:

Enter a small positive integer: 4
#
##
###
####

####
###
##
#

And another possible run with user input 2:

Enter a small positive integer: 2
#
##

##
#
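One possible sketch for the texttriangle.py exercise (the helper name triangle_lines is my own addition; the exercise itself only requires printing, and it uses the negative step size mentioned above):

```python
def triangle_lines(n):
    """Return the lines of both triangles: growing, a blank separator, then shrinking."""
    lines = ['#' * i for i in range(1, n + 1)]    # 1 up to n characters
    lines.append('')                              # the blank line between triangles
    lines += ['#' * i for i in range(n, 0, -1)]   # n down to 1, negative step size
    return lines

print('\n'.join(triangle_lines(4)))
```

Prompting with `n = int(input('Enter a small positive integer: '))` and printing the result reproduces the sample runs shown above.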
http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/graphics.html
Imagine, one day you have an amazing idea for your machine learning project. You write down all the details on a piece of paper: the model architecture, the optimizer, the dataset. And now you just have to code it up and do some hyperparameter tuning to put it to application. So, you light up your machine and start coding. But suddenly it hits you: you need to go through the hard work of creating batches out of the data, writing loops to iterate over batches and epochs, debugging any issues that may arise while doing so, repeating the same for the validation set, and the list goes on. It turns out to be a headache before it has even started. But not anymore. PyTorch Lightning is here to save your day. Not only does it automatically do the hard work for you but it also structures your code to make it more scalable. It comes fully packed with awesome features that will enhance your machine learning experience. Beginners should definitely give it a go. Throughout this article we will learn how Lightning can be used along with PyTorch to make development easy and reproducible.

Roadmap

With this post, I aim to help people get to know PyTorch Lightning. From now on I will be referring to PyTorch Lightning as Lightning. I will begin with a brief introduction to the new library and its underlying principles so that you can build research-friendly neural network models from scratch. This tutorial assumes that you have prior knowledge of how a neural network works. It also assumes you are familiar with the PyTorch framework. Even if you are not familiar, you will be alright. For PyTorch users, this tutorial may serve as a medium to encourage them to include Lightning in their PyTorch code. Let us start with some basic introduction.

What is PyTorch?

Based on the Torch library, PyTorch is an open-source machine learning library. PyTorch is imperative, which means computations run immediately, and the user need not wait to write the full code before checking if it works or not.
We can efficiently run a part of the code and inspect it in real-time. The library is Python based and built for providing flexibility as a deep learning development platform. PyTorch is extremely “pythonic” in nature. It is basically a NumPy substitute that utilizes the computation benefits of powerful GPUs. PyTorch enables the support of dynamic computational graphs that allow us to change the network on the fly.

The Catch

PyTorch is an excellent framework, great for researchers. But after a certain point, it involves more engineering than researching. As I mentioned in the introduction, the hard work starts taking over the research work. The focus shifts from training and tuning the model to correctly implementing the following features:
- Re-coding a training loop
- Multi-cluster training
- 16-bit precision
- Early-stopping
- Model loading/saving
- etc…

Even though they may be simple to implement, we would still end up losing precious time and might risk a chance of making a mistake while coding these up, leading to time being wasted in debugging. Consider an example. We are training a model. We want that after 100 epochs it stops and saves the trained model into a .pth file. But we made a mistake in writing the model-saving code. The thing about Python is that it does not show an error until it runs into one. So, after 10 hours of training, we run into an error, and our model did not save. And just like that, the 10 hours go down the drain. How frustrating would this be?

Enter Lightning

Lightning is a very lightweight wrapper on PyTorch. This means you don’t have to learn a new library. It defers the core training and validation logic to you and automates the rest. It guarantees tested and correct code with the best modern practices for the automated parts. So we can actually save those 10 hours by carefully organizing our code in Lightning modules.
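The 10-hours-lost scenario above is also why it pays to exercise fragile steps, such as model saving, before a long run. A hypothetical, framework-free sketch of such a fail-fast check (check_save_path is my own invented name, not part of any library):

```python
import os

def check_save_path(path):
    """Fail fast: verify the save directory exists BEFORE a 10-hour training run."""
    directory = os.path.dirname(path) or '.'   # bare filename means current directory
    if not os.path.isdir(directory):
        raise FileNotFoundError('cannot save model later: no such directory: ' + directory)
    return True

# run the check up front, not after training
print(check_save_path('model.pth'))  # True when the current directory exists
```

The point is simply to trigger the error in second one instead of hour ten.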
As the name suggests, Lightning is closely related to PyTorch: not only do they share their roots at Facebook but also Lightning is a wrapper for PyTorch itself. In fact, the core foundation of PyTorch Lightning is built upon PyTorch. In its true sense, Lightning is a structuring tool for your PyTorch code. You just have to provide the bare minimum details (e.g. number of epochs, optimizer, etc.). The rest will be automated by Lightning. By using Lightning, you make sure that all the tricky pieces of code work for you and you can focus on the real research:
- Hyperparameter tuning
- Finding the best model for a problem
- Visualizing results

Lightning ensures that when your network becomes complex your code doesn’t. It ensures that you focus on the real deal and not worry about how to run your model on multiple GPUs or speeding up the code. Lightning will handle that for you. But what does this mean for you? It means that this framework is designed to be extremely extensible while making state of the art AI research techniques (like multi-GPU training) trivial.

Quick MNIST Classifier on Google Colab

I will be showing you exactly how you can build an MNIST classifier using Lightning. I will be walking you through a very small network with 99.4% accuracy on the MNIST validation set using <8k trainable parameters. I tried re-implementing the code using PyTorch Lightning and added my own intuitions and explanations.
We shall do this as quickly as possible so that we can move on to even more interesting details of Lightning.

The Main Aspects of a Lightning Model

The basic and essential chunks of a neural network in Lightning are the following:
- Model architecture — Restructuring
- Data — Restructuring
- Forward pass — Restructuring
- Optimizer — Restructuring
- Training Step — Restructuring
- Training and Validation Loops (Lightning Trainer) — Abstraction

We can clearly see that they are contained in 2 categories: Restructuring and Abstraction.

Restructuring

Restructuring refers to keeping code in its respective place in the Lightning Module. It has just been arranged in the functions of the Lightning Module known as Callbacks. They have a special meaning to Lightning because they help it understand the functionality of each function. It is to be noted that there is no change in the PyTorch code during the restructuring.

Abstraction

The boilerplate code is abstracted by the Lightning trainer. It automates most of the code for us. Now there is no need to write separate code for saving your model or iterating over batches. It is now abstracted into the Trainer.

What does it contain?

Lightning provides us with the following methods of its class pl.LightningModule that help in structuring the code. They refer to them as Callbacks:

forward — This is the good old forward method that we have in nn.Module in PyTorch. It remains exactly the same in Lightning.

training_step — This contains the commands that are to be executed when we begin training. We usually call for a forward pass in here for the training data. Its sister functions are testing_step and validation_step.

training_epoch_end — As the name suggests, this callback determines what will be done with the results (the outcome of a forward pass) at the end of an epoch.
Its sister functions are testing_epoch_end and validation_epoch_end.

train_dataloader — This method allows us to set up the dataset for training and returns a Dataloader object from the torch.utils.data module. Its sister functions are test_dataloader and val_dataloader.

configure_optimizers — It sets up the optimizers that we might want to use, such as Adam, SGD, etc. We can even return 2 optimizers (in case of a GAN).

training_end — It contains the piece of code that will be executed when training ends.

- and many more such amazing callbacks here

Coding an MNIST Classifier

Now let’s dive right into coding so that we can get a hands-on experience with Lightning.

Installing Lightning

Run the following to install Lightning on Google Colab:

!pip install pytorch_lightning

You will have to restart the runtime for some new changes to be reflected. Do not forget to select the GPU. Go to Edit->Notebook Settings->Hardware Accelerator and select GPU in the Google Colab notebook.

Import Libraries

import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader, random_split
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl

1. The Model

We will be defining our own class called smallAndSmartModel and we will be inheriting pl.LightningModule from Lightning. Let’s start building the model:

class smallAndSmartModel(pl.LightningModule):
    def __init__(self):
        super(smallAndSmartModel, self).__init__()
        self.layer1 = torch.nn.Sequential(
            torch.nn.Conv2d(1, 28, kernel_size=5),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2))
        self.layer2 = torch.nn.Sequential(
            torch.nn.Conv2d(28, 10, kernel_size=2),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=2))
        self.dropout1 = torch.nn.Dropout(0.25)
        self.fc1 = torch.nn.Linear(250, 18)
        self.dropout2 = torch.nn.Dropout(0.08)
        self.fc2 = torch.nn.Linear(18, 10)

2.
Data Loading

class smallAndSmartModel(pl.LightningModule):
    # This contains the manipulation on data that needs to be done only once, such as downloading it
    def prepare_data(self):
        MNIST(os.getcwd(), train=True, download=True)
        MNIST(os.getcwd(), train=False, download=True)

    def train_dataloader(self):
        # This is an essential function. Needs to be included in the code.
        # See here I have set download to False as it is already downloaded in prepare_data
        mnist_train = MNIST(os.getcwd(), train=True, download=False, transform=transforms.ToTensor())
        # Dividing into validation and training set
        self.train_set, self.val_set = random_split(mnist_train, [55000, 5000])
        return DataLoader(self.train_set, batch_size=128)

    def val_dataloader(self):
        # OPTIONAL
        return DataLoader(self.val_set, batch_size=128)

    def test_dataloader(self):
        # OPTIONAL
        return DataLoader(MNIST(os.getcwd(), train=False, download=False, transform=transforms.ToTensor()), batch_size=128)

The train_dataloader, test_dataloader and val_dataloader are reserved functions in pl.LightningModule. We use them as wrappers for loading our data. It is necessary to write the code in these functions just because they have a special meaning in Lightning, just like how forward has in nn.Module. Each of these is responsible for returning the appropriate data split. Lightning structures it in a way so that it is very clear how the data is being manipulated. If you ever read someone else’s code that isn’t structured like this (like most GitHub codes), you won’t be able to figure out how they manipulated their data. Lightning even allows multiple data loaders for testing or validating.

3.
Forward Pass

class smallAndSmartModel(pl.LightningModule):
    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.dropout1(x)
        x = torch.relu(self.fc1(x.view(x.size(0), -1)))
        x = F.leaky_relu(self.dropout2(x))
        return F.softmax(self.fc2(x))

This is the forward pass — where the calculation process takes place and we generate the values for the output layers from the input data. Users of PyTorch may notice that there is no change in its implementation.

4. Optimizer

class smallAndSmartModel(pl.LightningModule):
    def configure_optimizers(self):
        # Essential function
        # we are using the Adam optimizer for our model
        return torch.optim.Adam(self.parameters())

This required function returns the kind of optimizer we require. Interestingly Lightning provides us with the wrapper configure_optimizers, which allows us to even return multiple optimizers with ease (for example in GANs).

5. Training Step (The interesting part)

class smallAndSmartModel(pl.LightningModule):
    def training_step(self, batch, batch_idx):
        # extracting input and output from the batch
        x, labels = batch
        # doing a forward pass
        pred = self.forward(x)
        # calculating the loss
        loss = F.nll_loss(pred, labels)
        # logs
        logs = {"train_loss": loss}
        output = {
            # REQUIRED: It is required for us to return "loss"
            "loss": loss,
            # optional for logging purposes
            "log": logs
        }
        return output

This step is called for every batch in our dataset. Some key operations that occur in this function are:
- The actual forward pass is made on the input to get the outcome pred from the model
- The loss is calculated on the batch
- A logs dictionary is prepared
- An output dictionary is returned

It is essential for training_step to return a dictionary containing loss. Any other data returned is optional.

6. The Lightning Trainer (Where Magic Happens)

Obviously, there is no magic. But when I tell you what the Lightning Trainer is capable of, you won’t refrain from claiming that indeed, it is charming and exquisite.
# abstracts the training, val and test loops
# using the one GPU given to us by Google Colab, for at most 100 epochs
myTrainer = pl.Trainer(gpus=1, max_nb_epochs=100)

model = smallAndSmartModel()
myTrainer.fit(model)

The Trainer is the heart of PyTorch Lightning. This is where all the abstractions take place. It abstracts the most obvious pieces of code such as:
- The batch iteration
- The epoch iteration
- Calling of optimizer.step()
- The validation loop

Now you don’t have to worry about engineering these steps. The Trainer does that for you. You just have to make sure that your code is well structured as explained in the above sections.

Lightning Trainer Flags

The trainer provides some very helpful flags. We can assign values to these flags to configure our classifier’s behavior:

gpus — Number of GPUs you want to train on
max_epochs — Stop training once this number of epochs is reached
min_epochs — Force training for at least these many epochs
weights_save_path — Directory of where to save weights if specified
precision — Full (32 bit) or half (16 bit)
- and many more

Perks of Lightning Trainer

By using the Trainer, you automatically get the following tools and features:
- Training and validation loop
- Tensorboard logging
- Early-stopping
- Model checkpointing
- The ability to resume training from wherever you left off

Why should I start using PyTorch Lightning?

That’s the question you should be asking me after I told you so much about PyTorch Lightning. I will answer this by letting you in on my love for Lightning.

1. Peace of Mind (Structured Code)

When I look at how the code is structured in Lightning, it feels almost natural and intuitive to put it there. The structuring ensures that I have a step-by-step strategy for developing my classifier from scratch. It makes me more confident in developing my models.

2. Simplistic

The steps to make a solution for machine learning are now very simple and intuitive.
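The batch and epoch iteration that the Trainer automates can be pictured with a toy, framework-free sketch (purely illustrative — this is not Lightning’s actual implementation, and fit and model_step are invented names):

```python
def fit(model_step, batches, max_epochs):
    """Toy stand-in for the loop a Trainer automates: epochs outside, batches inside."""
    losses = []
    for epoch in range(max_epochs):
        for batch in batches:
            # like calling training_step on each batch
            losses.append(model_step(batch))
    return losses

# toy "training_step": the loss is just the sum of the batch
print(fit(sum, [[1, 2], [3]], max_epochs=2))  # [3, 3, 3, 3]
```

In Lightning you write only the inner model_step logic (your training_step); the nested loops, device placement, and checkpointing stay inside the Trainer.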
Now, to come up with a solution using Lightning, I know that I need to proceed by preparing data, adding optimizers, adding the training step, and so on. This helps me in moving along with the flow of ideas in my mind.

3. Grouping the relevant together

The best thing about Lightning is that each process is separated from the other in the LightningModule. That’s the benefit of structuring. training_step contains information about the training step and not about the validation step or about the optimizer. It makes things more clear for me.

4. No True Change in Code required

Since Lightning is a wrapper for PyTorch, I did not have to learn a new language. Also, if I want to make very complex training steps I can easily do that without compromising on the flexibility of PyTorch. Those who are familiar with PyTorch will find the transition to be extremely smooth.

5. The Lightning Trainer — Automation

The Trainer just wins it all. It automates most of the complex tasks for me. In the case of GPUs, I don’t have to worry about converting my tensors with tensor.to(device=cuda). It automatically figures out the details. I just have to set a few flags. With this, I can even enable 16-bit precision, auto-cluster saving, auto-learning-rate-finder, Tensorboard visualization, etc. By using the Trainer, I’m not only getting some very neat algorithms but I am also getting the guarantee that they will work correctly. Now that’s one less thing for me to worry about. And I can focus on my real research. My personal favorites are Tensorboard logging and resuming training from where I left off.

Whom does Lightning cater to?

Lightning is best for scholars and researchers who are working on developing the best strategies to tackle a problem. Lightning takes away the unnecessary engineering from them and provides them with a clean environment to perform relevant research.
I also believe that early PyTorch users should start using Lightning so that their thinking process becomes structured and more intuitive. Also, they might find it amazing to have so many perks at their disposal, ready to be exploited. Congratulations Now that you are acquainted with PyTorch Lightning, I hope you will start using Lightning (especially if you are a researcher) and fall in love with its amazing features. That’s all from me. If you liked my little introduction to Lightning, do share feedback. Keep learning and have fun!!
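To make the abstraction concrete, here is a rough, framework-free sketch of the kind of hand-written epoch/batch loop that the Trainer replaces. This is plain Python with no PyTorch or Lightning imported; the objective, dataset, and learning rate are all made up purely for illustration:

```python
# Minimal gradient-descent loop illustrating what pl.Trainer automates:
# epoch iteration, batch iteration, and the optimizer step.
# Toy objective: minimize f(w) = (w - 3)^2 via its gradient 2*(w - 3).

def run_training(max_epochs=100, lr=0.1):
    w = 0.0                        # the model "parameter"
    batches = [3.0] * 4            # dummy dataset: the target value, repeated
    for epoch in range(max_epochs):     # epoch iteration (Trainer does this)
        for target in batches:          # batch iteration (Trainer does this)
            grad = 2 * (w - target)     # "backward" pass
            w -= lr * grad              # the optimizer.step() equivalent
    return w

print(run_training())  # converges toward 3.0
```

With Lightning, only the gradient/step logic survives (inside training_step and configure_optimizers); both loops belong to the Trainer.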
https://learnopencv.com/getting-started-with-pytorch-lightning/
Your feedback on migrating applications to Jakarta EE and switch from javax.* to jakarta.* namespace ... Scott Marlow May 7, 2019 2:43 PM Dear community, An important discussion is occurring regarding planning for Eclipse Jakarta EE, for which your feedback is greatly valued. The discussion is happening on the jakartaee-platform-dev mailing list. A link to the email thread is here, which you should be able to read from your HTML browser. Please join the Jakarta-platform-dev mailing list, so you can give your feedback. In summary, a few different proposals are mentioned, to deal with transitioning from the Java EE class namespace "javax.*", over to a new namespace "jakarta.*". Nothing is yet cast in stone, but decisions will be made based on feedback shared on the jakarta-platform-dev mailing list, so please help out. If you want to share feedback here as well, that is appreciated too! Thanks, Scott 1. Re: Your feedback on migrating applications to Jakarta EE and switch from javax.* to jakarta.* namespace ... Richard DiCroce May 7, 2019 3:47 PM (in response to Scott Marlow) Speaking as an application developer, my preference would be for a big bang rename. Yes, this will be annoying... but so will any of the other options. At least a big bang concentrates all the annoyance into a one-time switchover and avoids creating any gotchas later on. Potentially, someone could even build a tool that developers could run against their applications to "Jakartize" them. It could go through source files and replace imports, rename ServiceLoader files, update string constants, etc. That would automate most of the work and alleviate most of the pain for application developers. I imagine this would also make things easier for the WildFly (and other app server) developers, since it would be relatively easy to perform the same conversions at runtime using bytecode enhancers.
That translates to less potential for bugs in WildFly itself, which is good for everyone involved. I'm definitely against having any kind of mixture of javax and jakarta packages within the same spec. IMO that is likely to create a lot of confusion, and probably app server bugs too. Annotations are a particular problem there since you can't subclass them, as others have pointed out when discussing this issue. 2. Re: Your feedback on migrating applications to Jakarta EE and switch from javax.* to jakarta.* namespace ... Cody Lerum May 8, 2019 11:35 AM (in response to Scott Marlow) As a developer of an application under active development that started as a JBoss Seam 2 project and has evolved into a Java EE 8 project (CDI, JPA, JSF, JMS, JAX-RS, JAX-WS) running on WildFly 16: this application has 2400+ Java class files and over 500 JSF pages and has been migrated from Seam 2 -> EE6 -> EE7 -> EE8. I'm in favor of Proposal 1 (Big Bang). With modern IDEs and tooling, replacing imports is a simple process that shouldn't take more than a day on an application my size. If this was spread out over multiple releases I would imagine the time commitment to hunt down the new Jakarta implementations as they change would easily be 2x. Furthermore I think it should happen as soon as possible. Some implementations such as Mojarra (JSF) feel severely stunted after the move to ee4j. Many issues and pull requests have had little to no activity, and this appears to be due to the lack of clarity in the path forward with regards to javax. (Fix for #4500 by kalgon · Pull Request #4554 · eclipse-ee4j/mojarra · GitHub)
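As a toy illustration of the "Jakartize" tool idea floated above (purely hypothetical, not an existing tool; a real one would also have to handle ServiceLoader files, string constants, and XML descriptors), a naive source rewrite could look like:

```python
import re

# Naive javax -> jakarta rewrite for Java source text.
# The EE packages moved wholesale (javax.servlet -> jakarta.servlet, etc.),
# but a real tool must skip packages that stayed in the JDK itself.
JDK_KEEP = ("javax.swing", "javax.crypto", "javax.net", "javax.sql")

def jakartize(source: str) -> str:
    def repl(match):
        pkg = match.group(0)
        if pkg.startswith(JDK_KEEP):
            return pkg                               # leave JDK packages alone
        return "jakarta." + pkg[len("javax."):]      # rename the EE namespace
    return re.sub(r"javax\.[A-Za-z0-9_.]+", repl, source)

print(jakartize("import javax.servlet.http.HttpServlet;"))
# -> import jakarta.servlet.http.HttpServlet;
```

The keep-list here is illustrative and incomplete; the hard part of the real migration is exactly deciding which occurrences must not change.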
https://developer.jboss.org/message/989279
pygame.mask - pygame module for image masks. Useful for fast pixel-perfect collision detection. A mask uses 1 bit per pixel to store which parts collide. New in pygame 1.8.
pygame.mask.from_surface(Surface, threshold = 127) -> Mask: Returns a Mask from the given surface. Makes the transparent parts of the Surface not set, and the opaque parts set. The alpha of each pixel is checked to see if it is greater than the given threshold. If the Surface is color-keyed, then threshold is not used.
pygame.mask.from_threshold(Surface, color, threshold = (0,0,0,255), othersurface = None, palette_colors = 1) -> Mask: Creates a mask by thresholding surfaces. This is a more featureful method of getting a Mask from a Surface. If supplied with only one Surface, all pixels within the threshold of the supplied color are set in the Mask. If given the optional othersurface, all pixels in Surface that are within the threshold of the corresponding pixel in othersurface are set in the Mask.
pygame.mask.Mask((width, height)) -> Mask: pygame object for representing 2d bitmasks.
get_at((x,y)) -> int: Returns nonzero if the bit at (x,y) is set. Coordinates start at (0,0) in the top left - just like Surfaces.
overlap(othermask, offset) -> x,y: Returns the point of intersection if the masks overlap with the given offset, or None if they do not overlap. The overlap test uses the following offsets (which may be negative): +----+----------.. |A | yoffset | +-+----------.. +--|B |xoffset | | : :
overlap_area(othermask, offset) -> numpixels: Returns the number of overlapping 'pixels'. You can see how many pixels overlap with the other mask given. This can be used to see in which direction things collide, or to see how much the two masks collide. An approximate collision normal can be found by calculating the gradient of the overlap area through finite differences.
dx = Mask.overlap_area(othermask, (x+1, y)) - Mask.overlap_area(othermask, (x-1, y))
dy = Mask.overlap_area(othermask, (x, y+1)) - Mask.overlap_area(othermask, (x, y-1))
overlap_mask(othermask, offset) -> Mask: Returns a mask of the overlapping pixels: a Mask the size of the original Mask containing only the pixels where Mask and othermask overlap.
invert() -> None: Flips all of the bits in a Mask, so that set pixels become unset and unset pixels become set.
scale((x, y)) -> Mask: Returns a new Mask of the Mask scaled to the requested size.
draw(othermask, offset) -> None: Draws a mask onto another. Performs a bitwise OR, drawing othermask onto Mask.
erase(othermask, offset) -> None: Erases all pixels set in othermask from Mask.
count() -> pixels: Returns the number of set pixels in the Mask.
centroid() -> (x, y): Finds the centroid, the center of pixel mass, of a Mask. Returns a coordinate tuple for the centroid. If the Mask is empty, it returns (0,0).
angle() -> theta: Finds the approximate orientation of the pixels in the image, from -90 to 90 degrees. This works best if performed on one connected component of pixels. It returns 0.0 on an empty Mask.
outline(every = 1) -> [(x,y), (x,y), ...]: Returns a list of points outlining the first object it comes across in a Mask. For this to be useful, there should probably be only one connected component of pixels in the Mask. The every option allows you to skip pixels in the outline; for example, setting it to 10 would return a list of every 10th pixel of the outline.
convolve(othermask, outputmask = None, offset = (0,0)) -> Mask: Returns the convolution of self with another mask: a mask with the (i-offset[0], j-offset[1]) bit set if shifting othermask so that its lower right corner pixel is at (i,j) would cause it to overlap with self. If an outputmask is specified, the output is drawn onto outputmask and outputmask is returned. Otherwise a mask of size self.get_size() + othermask.get_size() - (1,1) is created.
connected_component((x,y) = None) -> Mask: Returns a mask of a connected region of pixels. This uses the SAUF algorithm to find a connected component in the Mask, checking 8-point connectivity. By default, it returns the largest connected component in the image. Optionally, a coordinate pair of a pixel can be specified, and the connected component containing it will be returned. If the pixel at that location is not set, the returned Mask will be empty. The Mask returned is the same size as the original Mask.
connected_components(min = 0) -> [Masks]: Returns a list of masks of connected regions of pixels. An optional minimum number of pixels per connected region can be specified to filter out noise.
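To make the overlap_area() and collision-normal ideas concrete without needing pygame installed, here is a toy pure-Python bitmask. It is a hypothetical stand-in, not pygame's implementation (which packs bits rather than storing coordinate sets), but it mimics the same finite-difference gradient described above:

```python
class ToyMask:
    """1-bit-per-pixel mask stored as a set of (x, y) coordinates."""
    def __init__(self, pixels):
        self.pixels = set(pixels)

    def overlap_area(self, other, offset):
        # Count pixels of `other`, shifted by offset, that land on set pixels.
        ox, oy = offset
        return sum((x + ox, y + oy) in self.pixels for (x, y) in other.pixels)

    def collision_normal(self, other, offset):
        # Finite-difference gradient of overlap_area, as in the pygame docs.
        x, y = offset
        dx = self.overlap_area(other, (x + 1, y)) - self.overlap_area(other, (x - 1, y))
        dy = self.overlap_area(other, (x, y + 1)) - self.overlap_area(other, (x, y - 1))
        return dx, dy

a = ToyMask((x, y) for x in range(4) for y in range(4))   # 4x4 solid block
b = ToyMask((x, y) for x in range(2) for y in range(2))   # 2x2 solid block
print(a.overlap_area(b, (3, 0)))   # only one column of b overlaps -> 2
```

The sign of the gradient tells you which way to push the smaller mask to reduce the overlap, which is exactly how the docs suggest deriving an approximate collision normal.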
https://www.pygame.org/docs/ref/mask.html
Developers often overlook basic programming options in favor of new or cool ways to deliver results. A good example is the ASP.NET TextBox Web control, which offers plenty of options for building applications. This article examines the TextBox control. Used everywhere It is hard to find an ASP.NET Web application that does not use a TextBox control. The venerable HTML input and textarea elements provide alternatives, but they include fewer features and less ASP.NET integration. The basic premise of the TextBox control is accepting user input. This may be as simple as a user entering a username or typing a multiline paragraph. The control provides plenty of methods and properties to work with it via code. Programmatic control Programmatic access is provided by the TextBox class located in the System.Web.UI.WebControls namespace. TextBox appearance and behavior may be manipulated via its properties. The following list provides a sampling of the properties: - AutoPostBack: Boolean value signaling whether the AutoPostBack feature is enabled for the control. This determines if the form will automatically be posted back to the Web server when the contents of the field change. - BackColor: The background color of the control. This may be controlled with CSS as well. - BorderColor: The border color for the border (if used) displayed around the control. - BorderWidth: The width of the border around the control on the page. - CausesValidation: Boolean value signaling whether validation is performed when a postback of the form is executed. - Columns: The display width of the TextBox control. - CssClass: Allows you to assign a CSS class to the control. - Font: The various font attributes associated with the control. The preferred approach is to use CSS, but the Font class contains properties for signaling whether the text appears in bold, italic, strikeout, or underline, as well as font size, name, and more. - ForeColor: The color of the text displayed in the field. The preferred technique is CSS.
- Height: Get or set the height of the control. - MaxLength: The maximum number of characters allowed in the field. - ReadOnly: Boolean value signaling whether the control may be edited by the user or is read-only. - Rows: The number of rows for a TextBox using the multiple line setting. - SkinId: Allows you to assign a skin to the TextBox to control its appearance. - Text: The text contained within the field. - TextMode: The text mode of the control. The legal values are found in the TextBoxMode class, with values of single-line, multiline, or password. The password setting masks the text entered, so the password is not displayed — it is available in the code for manipulation and storage. - Visible: Boolean value signaling whether the control is visible or hidden. - Wrap: Boolean value signaling whether text wraps within the field. The following C# example demonstrates the properties used during a page load. The containing page for the code contains one TextBox control named TextBox1. The code sets the TextBox to multiline and sets font properties and colors: The equivalent VB code follows: Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load ... End Sub The ASP.NET source is another option for working with the properties of the TextBox control. The following example mimics the functionality of the previous code snippet by using attributes of the <asp:Textbox> element. <asp:TextBox</asp:TextBox> In the previous listing, you'll notice the OnTextChanged attribute, which allows you to respond to text changing within the control. The OnTextChanged attribute points to the method that will handle the change event. The OnTextChanged event fires when the text in the control changes, and the control loses focus. Focus is lost when a user clicks outside of the TextBox control or tabs out of it. The following snippet shows the TextBox1_TextChanged method tied to the OnTextChanged event in the previous listing.
It displays the contents of the TextBox control at the top of the page via the Response object. protected void TextBox1_TextChanged(object sender, EventArgs e) { Response.Write(TextBox1.Text);} The next example uses the TextChanged event to trigger validation of the page. The page contains a RequiredFieldValidator control that is tied to the TextBox control, so it is required. The TextChanged code triggers validation if the field is empty; otherwise, it moves along. The ASP.NET page is listed first. <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Start.aspx.cs" Inherits="WebApplication1.Start" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" > <head runat="server"> <title>Working with TextBox control</title> </head><body> <form id="frmTextBoxTest" runat="server"> <asp:TextBox</asp:TextBox> <asp:RequiredFieldValidator </asp:RequiredFieldValidator></form></body></html> The C# codebehind for the page follows: public partial class Start : System.Web.UI.Page {; } protected void TextBox1_TextChanged(object sender, EventArgs e) { if (TextBox1.Text.Length > 0) TextBox1.CausesValidation = true; else TextBox1.CausesValidation = false;} The previous snippet could be coded without the validation control, but it provides a simple example of using the CausesValidation property. Also, you could go a step further by using AJAX to avoid a full page postback to improve performance. As previously stated, you should use CSS to control the presentation characteristics of the TextBox control. For example, the following ASP.NET page source creates a CSS class to format the TextBox as previously demonstrated with the properties of the TextBox class. The code uses the CssClass property of the TextBox control. 
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Start.aspx.cs" Inherits="WebApplication1.Start" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" > <head runat="server"> <title>TextBox control styled with CSS</title> <style media="all" type="text/css"> .textbox { background: lightblue; color: Black; font-size: larger; height: 117px; width: 389px; font-weight: bold; border: 4; } </style> </head> <body> <form id="frmTextBoxCSS" runat="server"> <asp:TextBox</asp:TextBox></form></body></html> Valuable feature The TextBox control is a valuable piece of the ASP.NET system. It allows you to easily accept and work with user input — simple or multiline text. Like most of its features, ASP.NET places plenty of options in the hands of the developer to control TextBox appearance and behavior. Additionally, you can extend the TextBox class to create a custom control. Tell the Visual Studio Developer community about your experiences of working with the TextBox control. What other standard controls do you use regularly? Tony Patton began his professional career as an application developer, earning Java, VB, Lotus, and XML certifications to bolster his knowledge. Get weekly development tips in your inbox: TechRepublic's free .NET newsletter, delivered each Wednesday, contains useful tips and coding examples on topics such as Web services, ASP.NET, ADO.NET, and Visual Studio .NET. Automatically subscribe today!
https://www.techrepublic.com/blog/software-engineer/aspnet-basics-working-with-the-textbox-control/
Published by Daisy Hutchcroft. Modified about 1 year ago. ADA: 15. Basic Math. Objective: a reminder about dot product and cross product, and their use for analyzing line segments (e.g. do two segments intersect, and the distance of a point to a line). Algorithm Design and Analysis (ADA). Some Basic Maths Used in CG. Overview: 1. Simple Geometry 2. Dot (Scalar) Product 3. Cross (Vector) Product 4. The CCW() Function 5. Line Segment Intersection 6. Distance Between a Point and a Line 7. Area of a Convex Polygon. 1. Simple Geometry. Points: (x, y) Cartesian coordinates; (r, Θ) polar coordinates. Line segments: positioned using their end points. Lines: a 2-tuple (m, c) using y = mx + c; a 3-tuple (a, b, c) using ax + by = c; or two points P0, P1 on the line using P(t) = (1-t)P0 + t*P1. Polygons: an ordered list of points. Line Segments & Vectors: points are treated as vectors from the origin O = (0, 0); the difference p2 - p1 = (x2 - x1, y2 - y1) represents the line segment p1p2. 2. Dot (Scalar) Product. Given the two vectors a = (a1, a2, a3) and b = (b1, b2, b3), the dot product is a · b = a1*b1 + a2*b2 + a3*b3. Example: a = (0, -3, 7), b = (2, 3, 1); a · b = 0*2 + (-3)*3 + 7*1 = -9 + 7 = -2. Dot Product as Geometry: a · b = |a| |b| cos θ. The dot product is the product of the magnitudes of the two vectors a and b and the cosine of the angle between them. The angle between a and b is therefore Θ = cos^-1((a · b) / (|a| |b|)). Angle Example. Uses of Dot Product: it can be used to get the projection (length) of vector a onto vector b. 3. Cross (Vector) Product. Cross Product Example.
The cross product a × b is defined as a vector c that is perpendicular to both a and b, with a direction given by the right-hand rule (the unit vector n) and a magnitude equal to the area of the parallelogram that the vectors span. Cross Product as Geometry: a × b = |a| |b| sin θ n, where |a × b| is the area of the parallelogram. Angle Example. Uses of Cross Product: you can use the cross product of the vectors a and b to generate a perpendicular vector, which is the normal to the plane defined by a and b. Normals are very useful when calculating things like lighting in 3D computer games, and up/down in FPSs. Geometric Properties: suppose we have three vectors a, b, c which form a 3D figure (a parallelepiped). Area and Volume: the area of the parallelogram of the front face of the object is area = |a x b|. The volume of the parallelepiped (the entire object) is volume = |a · (b x c)|. If the volume == 0 then all three vectors must lie in the same plane; in terms of the box diagram, it means that a is 0 units above the b x c face. Example. 4. The CCW() Function. A basic geometric function: returns 1 if the points are in counter-clockwise order (ccw); returns -1 if the points are in clockwise order (cw); returns 0 if the points are collinear. CCW() utilises the cross product. ccw(p0, p1, p2) returns 1. Turning of Consecutive Segments: for segments p0p1 and p1p2 (move from p0 to p1, then to p2), the cross product (p1 - p0) × (p2 - p0) is > 0 for a counterclockwise turn, < 0 for a clockwise turn, and = 0 when there is no turn (the points are collinear). 5. Line Segment Intersection. Boundary cases must be considered. Using Cross Products: two line segments p1p2 and p3p4 intersect iff each of the two pairs of cross products below has different signs (or one cross product in the pair is 0).
The cross products (p4 - p3) × (p1 - p3) and (p4 - p3) × (p2 - p3) test whether the line through p3p4 is straddled by p1p2, and (p2 - p1) × (p3 - p1) and (p2 - p1) × (p4 - p1) test whether the line through p1p2 is straddled by p3p4. SEGMENTS-INTERSECT(p1, p2, p3, p4) 1 d1 = CCW(p3, p4, p1) 2 d2 = CCW(p3, p4, p2) 3 d3 = CCW(p1, p2, p3) 4 d4 = CCW(p1, p2, p4) 5 if ((d1 > 0 and d2 < 0) or (d1 < 0 and d2 > 0)) and ((d3 > 0 and d4 < 0) or (d3 < 0 and d4 > 0)) 6 return true 7 elseif d1 = 0 and ON-SEGMENT(p3, p4, p1) 8 return true 9 elseif d2 = 0 and ON-SEGMENT(p3, p4, p2) 10 return true 11 elseif d3 = 0 and ON-SEGMENT(p1, p2, p3) 12 return true 13 elseif d4 = 0 and ON-SEGMENT(p1, p2, p4) 14 return true 15 else return false (Lines 1-6 perform the straddle check; lines 7-14 handle the boundary cases.) CCW(pi, pj, pk) // called DIRECTION in CLRS return (pk - pi) × (pj - pi) ON-SEGMENT(pi, pj, pk) 1 if min(xi, xj) ≤ xk ≤ max(xi, xj) && min(yi, yj) ≤ yk ≤ max(yi, yj) 2 return true 3 else return false. Intersection Cases. 6. Distance Between a Point and a Line. Given a point and a line or line segment, determine the shortest distance between them. The equation of a line defined through two points P1 (x1, y1) and P2 (x2, y2) is P = P1 + u(P2 - P1). The point P3 (x3, y3) is closest to the line at the tangent to the line which passes through P3; that is, the dot product of the tangent and the line is 0, thus (P3 - P) · (P2 - P1) = 0. Substituting the equation of the line gives: [P3 - P1 - u(P2 - P1)] · (P2 - P1) = 0. Solving this (using a · (b + c) = (a · b) + (a · c)) gives the value of u: u = ((P3 - P1) · (P2 - P1)) / |P2 - P1|^2. Substituting this into the equation of the line gives the point of intersection (x, y) of the tangent as: x = x1 + u(x2 - x1), y = y1 + u(y2 - y1). The distance therefore between the point P3 and the line is the distance between (x, y) and P3.
double distToSegment(Point2D p1, Point2D p2, Point2D p3) { double xDelta = p2.getX() - p1.getX(); double yDelta = p2.getY() - p1.getY(); if ((xDelta == 0) && (yDelta == 0)) return -1; // p1 and p2 are the same point double u = ((p3.getX() - p1.getX()) * xDelta + (p3.getY() - p1.getY()) * yDelta) / (xDelta * xDelta + yDelta * yDelta); Point2D closePt; // the (x,y) of the previous slide if (u < 0) closePt = p1; else if (u > 1) closePt = p2; else closePt = new Point2D.Double(p1.getX() + u * xDelta, p1.getY() + u * yDelta); return closePt.distance(p3); } 7. Area of a Convex Polygon. The coordinates (x1, y1), (x2, y2), (x3, y3), ..., (xn, yn) of a convex polygon are arranged in counter-clockwise order around the polygon, beginning and ending at the same point. The area is then half the sum of the cross products of consecutive vertex pairs: area = (1/2) * [(x1*y2 - x2*y1) + (x2*y3 - x3*y2) + ... + (xn*y1 - x1*yn)]. This can be proved using induction, with area defined in terms of triangles.
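The CCW test and the straddle-based intersection check from the slides transcribe directly into Python (a sketch including the ON-SEGMENT boundary check; points are plain (x, y) tuples):

```python
def ccw(p0, p1, p2):
    """1 for a counterclockwise turn, -1 for clockwise, 0 for collinear."""
    cross = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
    return (cross > 0) - (cross < 0)

def on_segment(p, q, r):
    """True if r lies within the bounding box of segment pq."""
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0])
            and min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def segments_intersect(p1, p2, p3, p4):
    d1, d2 = ccw(p3, p4, p1), ccw(p3, p4, p2)
    d3, d4 = ccw(p1, p2, p3), ccw(p1, p2, p4)
    if d1 * d2 < 0 and d3 * d4 < 0:          # each segment straddles the other
        return True
    # boundary cases: a collinear endpoint lying on the other segment
    if d1 == 0 and on_segment(p3, p4, p1): return True
    if d2 == 0 and on_segment(p3, p4, p2): return True
    if d3 == 0 and on_segment(p1, p2, p3): return True
    if d4 == 0 and on_segment(p1, p2, p4): return True
    return False

print(segments_intersect((0, 0), (2, 2), (0, 2), (2, 0)))  # the diagonals cross: True
```

Because ccw() returns only -1, 0, or 1, the products d1*d2 and d3*d4 being negative expresses exactly the "different signs" condition on the slides.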
http://slideplayer.com/slide/3293714/
I would like to write a snippet to make value classes like the one below: class Person { private var _name: String; private var _age: int; public function get name(): String { return _name; } public function get age(): String { return age; } public function Person(name: String, age: int) { _name = name; _age = age; } } A snippet to make value classes with one parameter would look something like the following: public class ${1:MyClass} { private var _$2: $3; public function get $2(): $3 { return _$2; } public function $1($2: $3) { _$2 = $2; } } But I would like to extend this snippet template to handle any number of fields. I can see no system that facilitates this in ST, as a system like this needs to repeat blocks of text. I imagine the system would need to adapt the $ token to indicate a repeating block. The {} brackets would define the repeating block(s). A repeat trigger character would also need to be defined to indicate when another repetition should be started. In the case of the value class example, the repeat character would be a comma ','. Does this feature exist in ST? If not, has it been discussed? Thanks for your time Brian public class ${1:MyClass} { // the first repeating block, the * char defines a repeating block ${2*: private var _$2.1: $2.2; public function get $2.1(): $2.2 { return _$2.1; } } // constructor arguments are the second repeating block public function $1(${2*',':$2.1: $2.2}) // the repeat character ',' is defined here { // third repeating block ${2*: _$2.1 = $2.1; } } } Hey Brian, Did you find any solution or workaround? Can you share it here? I'm looking for something similar to generate a snippet with a flexible number of fields in a YAML schema for Symfony 2. Not exactly what you are asking for, but ... The way this is usually handled is to make a snippet that inserts a new instance of its own snippet trigger as the last tab target. In that way you just press tab again to continue expanding it.
Something like this: private var _$1: $2; public function get $1(): $2 { return _$1; } ${3:getter} Now, this will not update the constructor, but it should get you most of what you need here. It will probably be worth your while digging through the Emmet and/or ZenCoding packages, which allow you to specify how many of a certain element you want, so for instance: ul#menu>li*5 will output to: <ul id="menu"> <li></li> <li></li> <li></li> <li></li> <li></li> </ul> There might be something in there which will set you along the right path to duplicate blocks of code. I am a front-end markup and UX guy, so the Python and JavaScript code is too heavy going for me to steer you any further. If you do use this and find a solution, please post back. It would certainly come in useful for some of the advanced markup snippets I have written to be able to replicate code 'X' amount of times on the fly. Best of luck.
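Another workaround, outside Sublime Text entirely (a hypothetical helper, not an ST feature): generate the boilerplate with a small script and paste the result in. The template logic is exactly the "repeating blocks" the question describes:

```python
def value_class(name, fields):
    """fields: list of (field_name, type) pairs -> ActionScript value class."""
    lines = [f"class {name} {{"]
    for fname, ftype in fields:                      # first repeating block
        lines.append(f"    private var _{fname}: {ftype};")
    for fname, ftype in fields:                      # getter per field
        lines.append(f"    public function get {fname}(): {ftype} {{ return _{fname}; }}")
    args = ", ".join(f"{fname}: {ftype}" for fname, ftype in fields)
    lines.append(f"    public function {name}({args}) {{")   # second block, ',' joined
    for fname, _ in fields:                          # third repeating block
        lines.append(f"        _{fname} = {fname};")
    lines.append("    }")
    lines.append("}")
    return "\n".join(lines)

print(value_class("Person", [("name", "String"), ("age", "int")]))
```

This loses the in-editor tab-stop flow, but it handles any number of fields, which a static snippet cannot.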
https://forum.sublimetext.com/t/snippet-loop-blocks/6665/4
I'm still new to python and need some help writing a program. I need to write one that runs a *.exe and then checks some timings in it. There are 3 numbers ranging from 0 to 1000 that are hidden in the *.exe (app.exe). If the script runs the app.exe and then runs the numbers 0-1000, when the right number is reached, the app.exe slows and then moves on to the next series of numbers. This sequence is repeated 3 times until all 3 numbers are revealed. I'm thinking something like this? point to app.exe run app.exe check times between numbers: 123 (.08 secs) 444 ( 1.2 secs) 369 ( 1.8 secs) So I'm trying to start with this: import os,sys pyPath=r"C:\python25\python.exe" appPath= r"C:\python25\app.exe" os.spawnv(P_NOWAIT,pyPath,["python",appPath]) When I run that, I get a NameError: name 'MZP' is not defined. I'm not sure where that comes from. If I switch pyPath & appPath in os.spawnv, the app.exe runs but I don't think that's the right placement for those. Any suggestions on other functions I need to write the whole program? Wed Feb 07, 2007 8:13 am
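Two quick observations on the snippet above: the constant is spelled os.P_NOWAIT, and the "MZP" NameError most likely comes from asking python.exe to interpret the binary as a script (MZ/MZP are the first bytes of a Windows executable header). A simpler route is subprocess. Here is a sketch of the launch-and-time idea; the app.exe path, the stdin protocol, and the slow-down threshold are all guesses that depend on how app.exe actually reads its input:

```python
import subprocess
import sys
import time

def time_probe(cmd, probe):
    """Run `cmd`, feed it `probe` on stdin, return elapsed wall time in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, input=str(probe).encode(),
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start

# Demo with a harmless stand-in for app.exe: the Python interpreter itself.
elapsed = time_probe([sys.executable, "-c", "input()"], 123)
print(f"probe took {elapsed:.3f}s")

# Against the real target you would loop over candidates and flag outliers:
# for n in range(1000):
#     if time_probe([r"C:\python25\app.exe"], n) > threshold:
#         ...  # n is one of the hidden numbers
```

Wall-clock timing of a freshly spawned process is noisy, so in practice you would probe each candidate several times and compare against a baseline rather than a fixed threshold.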
https://www.daniweb.com/programming/software-development/threads/69375/opening-a-exe-with-python-script
Image Classification with CreateML This is a minimal example to show you how CreateML works. We will create a CoreML model that is able to detect bananas and tomatoes. First, we need data to train the model with. CreateML makes labeling very easy: Create a new folder for your images. In this folder, create two new folders: one named banana, the other one named tomato. Now download a bunch of images from the internet (search for “banana” or “tomato”) and put them into their according folders. The result should look like this: Now, go to Xcode and select File ➡️ New ➡️ Playground ➡️ macOS ➡️ blank. Please note: Xcode 10 and macOS 10.14 Mojave are required to run CreateML. To show the CreateML drag and drop user interface, we need to write some very minimal code: import CreateMLUI let builder = MLImageClassifierBuilder() builder.showInLiveView() This imports the framework and generates the user interface ( .showInLiveView). Now, run your code! A new UI should appear in the assistant editor / live view. You can now drag the image folder from earlier into this UI – CreateML will start training with your data immediately. After a few seconds the process is complete and you can see the results in the console:
+-----------+--------------+-------------------+ | Iteration | Elapsed Time | Training-accuracy | +-----------+--------------+-------------------+ | 1 | 0.033924 | 1.000000 | +-----------+--------------+-------------------+ SUCCESS: Optimal solution found. Trained model successfully saved at /var/folders/77/d8w7sf5n1wxd9g33wtjq9wy40000gn/T//ImageClassifier.mlmodel. To see how well the model performs we can test it with some images that it has never seen before. This is important to verify that the model is not overfitted on our training data! Just download some more banana and tomato images and drag them onto the model: As you can see, both bananas and tomatoes are classified correctly…yaayy!
https://enlight.nyc/projects/create-ml/
Ah, excellent - using g++ `pkg-config --cflags sword` ciphercng.cpp`pkg-config --libs sword` solved it. Changing the file /usr/lib/pkgconfig/sword.pc as noted at was also necessary. On 2/10/07, Troy A. Griffitts <scribe at crosswire.org> wrote: > > Chad, > The error: > > swmgr.h: No such file or directory > > tells me that you have not installed sword to your include path: > > I/usr/include:/usr/include/sword > > Actually, I'm not sure that syntax is legal. Try: > > I/usr/include -I/usr/include/sword > > Or better yet, try: > > g++ `pkg-config --cflags sword` ciphercng.cpp `pkg-config --libs sword` > > I've just updated the examples/classes directory in svn to work with the > latest sword code and for the Makefile to use this build syntax, which > should allow building outside the sword build system. My apologies for > these not being up to date. > > -Troy. > > > Chad Johnson wrote: > > I'm trying to compile the following program and I get the corresponding > > output. I tried adding using namespace sword; but that did not help. > > I was able to compile the examples using the makefile generated from > > ./autogen.sh and /usrinst.sh, but what must I do to get this program to > > work? What must my makefile look like? Can I use bakefile? I'd really > > like it working soon as I am integrating the SWORD modules with Aletheia > > ( aletheia.sourceforge.net <>). 
> > > > chad at chadjohnson:/tmp/sword/examples/cmdline$ g++ -L/usr/lib > > -I/usr/include:/usr/include/sword ciphercng.cpp > > ciphercng.cpp:10:19: error: swmgr.h: No such file or directory > > ciphercng.cpp: In function 'int main(int, char**)': > > ciphercng.cpp:22: error: 'sword' has not been declared > > ciphercng.cpp:22: error: expected `;' before 'manager' > > ciphercng.cpp :23: error: 'ModMap' has not been declared > > ciphercng.cpp:23: error: expected `;' before 'it' > > ciphercng.cpp:24: error: 'it' was not declared in this scope > > ciphercng.cpp:24: error: 'manager' was not declared in this scope > > ciphercng.cpp:31: error: 'SWModule' was not declared in this scope > > ciphercng.cpp:31: error: 'module' was not declared in this scope > > > > > > > > > > > /****************************************************************************** > > > > * > > * This example demonstrates how to change the cipher key of a module > > * The change is only in effect for this run. This DOES NOT change > the > > * cipherkey in the module's .conf file. 
> > * > */ > > #include <stdio.h> > > #include <swmgr.h> > > #include <iostream> > > using namespace std; > > int main(int argc, char **argv) { > > if (argc != 2) { > > fprintf(stderr, "usage: %s <modName>\n", *argv); > > exit(-1); > > } > > sword::SWMgr manager; // create a default manager that looks > > in the current directory for mods.conf > > ModMap::iterator it; > > it = manager.Modules.find(argv[1]); > > if (it == manager.Modules.end()) { > > fprintf(stderr, "%s: couldn't find module: %s\n", *argv, argv[1]); > > exit(-1); > > } > > SWModule *module = (*it).second; > > string key; > > cout << "\nPress [CTRL-C] to end\n\n"; > > while (true) { > > cout << "\nModule text:\n"; > > module->setKey("1jn 1:9"); > > cout << "[ " << module->KeyText() << " ]\n"; > > cout << (const char *)*module; > > cout << "\n\nEnter new cipher key: "; > > cin >> key; > > cout << "\nSetting key to: " << key; > > manager.setCipherKey(argv[1], (unsigned char *)key.c_str()); > > } > > } > > > > ------------------------------------------------------------------------ > > > > _______________________________________________ > >
http://www.crosswire.org/pipermail/sword-devel/2007-February/025068.html
Introduction¶

Symbolic computation works with mathematical objects exactly rather than approximately. Consider taking a square root with Python's math module:

>>> import math
>>> math.sqrt(9)
3.0

Here we got the exact answer — 9 is a perfect square — but usually it will be an approximate result:

>>> math.sqrt(8)
2.8284271247461903

This is where symbolic computation first comes in: with a symbolic computation system like Diofant, square roots of numbers that are not perfect squares are left unevaluated by default:

>>> import diofant
>>> diofant.sqrt(3)
sqrt(3)

Furthermore — and this is where we start to see the real power of symbolic computation — results can be symbolically simplified.

>>> diofant.sqrt(8)
2*sqrt(2)

Yet we can also approximate this number with any precision:

>>> _.evalf(20)
2.8284271247461900976

The above example starts to show how we can manipulate irrational numbers exactly using Diofant. Now we introduce symbols. Let us define a symbolic expression, representing the mathematical expression \(x + 2y\).

>>> x, y = diofant.symbols('x y')
>>> expr = x + 2*y
>>> expr
x + 2*y

Note Unlike many symbolic manipulation systems you may have used, in Diofant, symbols are not defined automatically. To define symbols, we must use symbols(), which takes a string of symbol names separated by spaces or commas and creates Symbol instances out of them. Note that we wrote x + 2*y using Python's mathematical syntax.

Tip Use the evaluate() context or the evaluate flag to prevent automatic evaluation, for example:

>>> diofant.sqrt(8, evaluate=False)
sqrt(8)
>>> _.doit()
2*sqrt(2)

This isn't always the case in Diofant, however:

>>> x*expr
x*(x + 2*y)

Here, we might have expected \(x(x + 2y)\) to transform into \(x^2 + 2xy\), but instead we see that the expression was left alone. This is a common theme in Diofant.
In Diofant, there are functions to go from one form to the other:

>>> diofant.expand(x*expr)
x**2 + 2*x*y
>>> diofant.factor(_)
x*(x + 2*y)

The real power of a symbolic computation system (which, by the way, is also often called a computer algebra system, or CAS) such as Diofant is the ability to do all sorts of computations symbolically: simplify expressions, compute derivatives, integrals, and limits, solve equations, work with matrices, and much more. Diofant includes modules for plotting, printing (like 2D pretty printed output of math formulas, or \(\LaTeX\)), code generation, statistics, combinatorics, number theory, logic, and more. Here is a small sampling of the sort of symbolic power Diofant is capable of, to whet your appetite.

Note From here on in this tutorial we assume that these statements were executed:

>>> from diofant import *
>>> x, y, z = symbols('x y z')
>>> init_printing(pretty_print=True, use_unicode=True)

The last one will make all further examples pretty print with unicode characters. (import * has been used here to aid the readability of the tutorial, but it is best to avoid such wildcard import statements in production code, as they make it unclear which names are present in the namespace.)

Compute \(\int_{-\infty}^{\infty} \sin{(x^2)}\, dx\).

>>> integrate(sin(x**2), (x, -oo, oo))
  ___   ___
╲╱ 2 ⋅╲╱ π
───────────
     2

Find \(\lim_{x\to 0^+}\frac{\sin{(x)}}{x}\).

>>> limit(sin(x)/x, x, 0)
1

Solve \(x^2 - 2 = 0\).

>>> solve(x**2 - 2, x)
⎡⎧      ___⎫  ⎧     ___⎫⎤
⎢⎨x: -╲╱ 2 ⎬, ⎨x: ╲╱ 2 ⎬⎥
⎣⎩         ⎭  ⎩        ⎭⎦

Solve the differential equation \(f'' - f = e^x\).

>>> f = symbols('f', cls=Function)
>>> dsolve(Eq(f(x).diff(x, 2) - f(x), exp(x)), f(x))
        x ⎛     x⎞    -x
f(x) = ℯ ⋅⎜C₂ + ─⎟ + ℯ  ⋅C₁
          ⎝     2⎠

Find the eigenvalues of \(\left[\begin{smallmatrix}1 & 2\\2 & 2\end{smallmatrix}\right]\).

>>> Matrix([[1, 2], [2, 2]]).eigenvals()
⎧     ____          ____    ⎫
⎪3   ╲╱ 17         ╲╱ 17   3⎪
⎨─ + ──────: 1, - ────── + ─: 1⎬
⎪2     2             2     2⎪
⎩                           ⎭

Rewrite the Bessel function \(J_{n}\left(z\right)\) in terms of the spherical Bessel function \(j_n(z)\).
>>> n = symbols('n')
>>> besselj(n, z).rewrite(jn)
  ___   ___
╲╱ 2 ⋅╲╱ z ⋅jn(n - 1/2, z)
──────────────────────────
            ___
          ╲╱ π

Print \(\int_{0}^{\pi} \cos^{2}{\left (x \right )}\, dx\) using \(\LaTeX\).

>>> latex(Integral(cos(x)**2, (x, 0, pi)))
'\\int_{0}^{\\pi} \\cos^{2}{\\left (x \\right )}\\, dx'
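As a stdlib-only aside (not part of the Diofant tutorial), the perfect-square distinction in the opening example can be tested exactly in plain Python using the integer square root, which avoids floating-point rounding entirely:

```python
import math

def is_perfect_square(n: int) -> bool:
    """True when n is a perfect square, using exact integer arithmetic."""
    if n < 0:
        return False
    r = math.isqrt(n)   # floor of the exact square root, no float involved
    return r * r == n

print(is_perfect_square(9))   # True  -> sqrt(9) is exactly 3
print(is_perfect_square(8))   # False -> sqrt(8) stays irrational
```

Unlike `math.sqrt(n) == int(math.sqrt(n))`, this check cannot be fooled by rounding on very large integers.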
https://diofant.readthedocs.io/en/latest/tutorial/intro.html
Introduction: Wanhao Duplicator I3 Plus Auto Bed Leveling This instructable will help you establish the so-called "auto bed leveling" on your Wanhao Duplicator i3 Plus printer. It uses a modified version of the original Wanhao firmware and an optical sensor with some rewiring. There are a lot of prerequisites and changes - so I suggest this only to experienced tinkerers. Step 1: Warnings and Disclaimer Ok folks. We'll be doing some serious tinkering of electrical and mechanical nature. Opening the printer exposes areas of lethal voltages. You will likely lose your printer's warranty when altering the hardware. This Instructable is provided "as is". Unless there are major design flaws, please do not ask for modifications that fit your needs. Instead feel free to enhance the design and post it in the comments. I cannot be held responsible for any harm you or your printer may encounter. Also I cannot be held responsible for poor English writing, since I am German and this is my first instructable. Step 2: Prerequisites Since we will be using an optical sensor that works on light reflections, the original build surface has to go away. Tests showed that the sensor works on dark surfaces, but gets irritated by the white symbols and letters in the BuildTak. Instead it works excellently on glass build plates. But again, unwanted reflections are bad - this time from the aluminum heated bed. Therefore the glass plate has to be painted black on the lower side. Use some high-temperature black spray paint and give it at least three layers of paint. This instructable incorporates a sensor mount that only fits if you are already using a ciiCooler mod. Feel free to design a new sensor mount if you do not have this cooler in place or if you prefer to reposition the sensor. Step 3: The Sensor The sensor we are using is made and sold by David Crocker.... It has an analog and digital output mode, which it determines by detecting the pullup resistor on the output pin.
The DI3+ uses pullups on the inputs, but they seem to be of the wrong value for the sensor. Therefore we add an additional 10k resistor between +5V and the output of the sensor. Once powered up, the sensor confirms digital output by blinking the LED twice. Four blinks indicate analog output. Step 4: Sensor Mount I've created a very basic sensor mount that you can find on Thingiverse. It's far from shiny and nice, but does the job. The mount is screwed to the screw holes that were originally used by the stock fan duct. Step 5: Wiring Now this is the hard part. Let's look at the sensor connector. Three pins: OUT - GND - VCC. Basically you could use any of the free I/O pins on the EXT connector of the motherboard. It will later be configured as the new z-min sensor in the firmware. Of course you need to solder in a connector first. I was successfully using Arduino pin #35 for it. In this case you will need to wire as follows:

Ext connector pin 9 (GND) to center pin of sensor (GND)
Ext connector pin 10 (VCC) to right pin of sensor (VCC)
Ext connector pin 6 (#35) to left pin of sensor (OUT)

For the wiring I used a three-wire servo cable and routed it along the existing flat cable of the extruder. Once you have wired it up, you can already test whether the sensor works by holding something underneath and checking if the sensor's LED lights up. Also check for the double blink at power up. Step 6: Arduino IDE Download and install Arduino IDE 1.0.6:... Under Tools > Board select "Arduino Mega 2560 or Mega ADK". Connect your PC to your printer using a printer cable. Under Tools > Serial Port select the correct COM port. It is important that you use this version of the Arduino IDE. Newer versions may not compile at all. Step 7: Firmware & Mods Download the zip file of the firmware with auto-leveling enabled: Unzip the firmware into a folder and open Marlin.ino. I have added a #define to activate auto bed leveling and its related settings.
In Configuration.h you'll find an entry called #define ABLSUPPORT. If commented out (// #define ABLSUPPORT), your printer will work with the original z-min home switch and no auto bed leveling. If not commented out (#define ABLSUPPORT), you will be using the optical sensor connected to #35 as described in step "Wiring". The following changes have been made to the stock firmware:

Configuration.h

#ifdef ABLSUPPORT
const bool Z_MIN_ENDSTOP_INVERTING = false; // set to true to invert the logic of the endstop.
#else
const bool Z_MIN_ENDSTOP_INVERTING = true; // set to true to invert the logic of the endstop.
#endif

Required because of inverted logic compared to the original home switch.

#ifdef ABLSUPPORT
#define ENABLE_AUTO_BED_LEVELING // Delete the comment to enable (remove // at the start of the line)
#endif

#ifdef ENABLE_AUTO_BED_LEVELING
// these are the positions on the bed to do the probing
#define LEFT_PROBE_BED_POSITION 35
#define RIGHT_PROBE_BED_POSITION 165
#define BACK_PROBE_BED_POSITION 165
#define FRONT_PROBE_BED_POSITION 35
// these are the offsets to the probe relative to the extruder tip (Hotend - Probe)
#define X_PROBE_OFFSET_FROM_EXTRUDER 0
#define Y_PROBE_OFFSET_FROM_EXTRUDER -34
#define Z_PROBE_OFFSET_FROM_EXTRUDER 0 // screw this setting - it's not working
#define Z_RAISE_BEFORE_HOMING 8 // (in mm) Raise Z before homing (G28) for Probe Clearance.
                                // Be sure you have this distance over your Z_MAX_POS in case
#define XY_TRAVEL_SPEED 3600 // X and Y axis travel speed between probes, in mm/min
#define Z_RAISE_BEFORE_PROBING 2 // How much the extruder will be raised before traveling to the first probing point.
#define Z_RAISE_BETWEEN_PROBINGS 2 // How much the extruder will be raised when traveling between next probing points
...
#define ACCURATE_BED_LEVELING_POINTS 3

The probe bed positions have been adjusted to match the optical sensor's mechanical position.
The x and y probe offsets from the extruder have been adjusted to match the optical sensor's mechanical position. If you choose not to use the sensor in line with the nozzle but shifted sideways, adjust the x-probe offset accordingly. The y probe offset matches the sensor mount provided. Also, I have lowered the xy travel speed to 3600. The number of probing points has been changed from 2 to 3 (4 to 9).

pins.h

#ifdef ABLSUPPORT
#define Z_MIN_PIN 35
#else
#define Z_MIN_PIN 23
#endif

If auto bed leveling is enabled we use pin 35, else we use pin 23, which is the original z home switch. Note: If you are planning to use a different pin than #35, change it accordingly. Now compile and upload the firmware to your printer. Step 8: Testing the Sensor With the ABLSUPPORT-enabled firmware and connected sensor, make sure your x-carriage is high enough so the sensor's LED is off. In the terminal, type M119 and check the z-min status. It should report "z_min: open". Block the sensor (LED on) and enter M119 again. It should now report "z_min: TRIGGERED". If you don't get this result, look for your screw-up before continuing. Now if this is ok, try homing all axes by entering G28. Have your finger ready on the power switch - just in case. It will now home the x and y axes and move to the bed's center to home the z-axis. If G28 worked out well, enter G29. Note: Only use G29 after a previous G28 - nothing else, or you will get bogus results. Watch it probing your bed and throwing messages about the measured positions in the terminal. You may play around with the bed leveling screws and re-probe the bed to see how the results change. Good time to adjust your bed for minimal offsets. Step 9: The Damned Z-Probe Offset If you started a print now, you would get nothing but mid-air printing.

#define Z_PROBE_OFFSET_FROM_EXTRUDER 0

In Configuration.h this setting should incorporate the z-offset of your sensor, since it sits slightly higher than your nozzle (which is good). BUT IT DOES NOT!
Obviously the early version of Marlin (1.0.x) that Wanhao has been using contains a z-probe offset bug. You can find details here:... No chance to fix that in my eyes. Since we cannot solve it this way, we use a workaround in the slicer's start script. Step 10: Changes in the Slicer Startup Script Look at your startup script. Where it says G28 ... replace G28 with the following:

G28 ; home axes
G29 ; auto bed level
G1 Z0 ; move z to zero
G92 Z1.25 ; apply offset of 1.25 mm to z

The G92 part is what applies the z-offset, so you are printing on the bed and not mid-air. You will have to figure out the right value by test printing and adjusting until your results are fine. From now on only use this script - but only if you are using the auto bed level feature! Better make a new profile for the ABL prints. I figured that printing a 0.2mm high "Saturn Ring" was a good help for determining the offset. You want the first print layer to be somewhat half compressed on the bed. The three rings I printed were already using the ABL feature and all came out perfect, even though I was playing around with the bed level screws big time. So it's alive! Step 11: Wrap Up Hope this Instructable gave you a good start in using ABL on your DI3+. If you run into issues please don't ask, but find out and contribute in the comments. I just don't have the time for it ... ... and if you fry your motherboard have 100 bucks ready for a replacement.

10 Comments

4 years ago: Is this possible with the non-plus version of the Wanhao Duplicator i3?
Reply, 4 years ago: Never tried, as I don't possess one. Sorry.
4 years ago: I have added an inductive sensor and soldered it to the z-probe port of my i3 plus board. Can someone explain what I need to change in the Plus+ firmware?
4 years ago: Is it possible to use the normal z-max and z-probe on the plus board?
I try to connect a sensor to it but am stuck at the wiring of the board.
4 years ago: Here is my entire start gcode ...
4 years ago: Can you provide your entire start.gcode?
4 years ago: Here is a modified firmware for the i3 plus that uses a newer Marlin firmware; it can be a future option to have the z offset working in firmware.
Reply, 4 years ago: @éloiT1: I'm aware of Peter's work and I hope you guys can make it work. That'll be the next logical level.
4 years ago: @Swansong: Your wish is my command. Njoy the video
4 years ago: Great writeup, I'd love to see a video of it in use :)
https://www.instructables.com/Wanhao-Duplicator-I3-Plus-Auto-Bed-Leveling/
Multiple distinct objects You are encouraged to solve this task according to the task description, using any language you may know. Create a sequence (list, array, whatever) of n distinct, initialized items of the same type, where n is determined at runtime. This task was inspired by the common error of instead creating n references to one and the same mutable object — a mistake that is easiest to make in (and thus most relevant to) object-oriented, garbage-collected, and/or 'dynamic' languages. See also: Closures/Value capture Contents - 1 Ada - 2 Aime - 3 ALGOL 68 - 4 AppleScript - 5 AutoHotkey - 6 BBC BASIC - 7 Brat - 8 C - 9 C++ - 10 C# - 11 Clojure - 12 Common Lisp - 13 D - 14 Delphi - 15 E - 16 EchoLisp - 17 Elixir - 18 Erlang - 19 Factor - 20 Forth - 21 Fortran - 22 F# - 23 Go - 24 Groovy - 25 Haskell - 26 Icon and Unicon - 27 J - 28 Java - 29 JavaScript - 30 Julia - 31 jq - 32 Kotlin - 33 Logtalk - 34 Lua - 35 Mathematica - 36 Maxima - 37 Modula-3 - 38 NGS - 39 Nim - 40 OCaml - 41 Oforth - 42 ooRexx - 43 Oz - 44 Pascal - 45 Perl - 46 Perl 6 - 47 Phix - 48 PicoLisp - 49 PowerShell - 50 PureBasic - 51 Python - 52 R - 53 Racket - 54 Ruby - 55 Scala - 56 Scheme - 57 Seed7 - 58 Sidef - 59 Smalltalk - 60 Swift - 61 Tcl - 62 XPL0 - 63 zkl Ada[edit] A : array (1..N) of T; Here N can be unknown until run-time. T is any constrained type. In Ada all objects are always initialized, though some types may have null initialization. When T requires a non-null initialization, it is done for each array element. For example, when T is a task type, N tasks start upon initialization of A. Note that T can be a limited type like task. Limited types do not have a predefined copy operation. Arrays of non-limited types can also be initialized by aggregates of the form: A : array (1..N) of T := (others => V); Here V is some value or expression of the type T. As an expression V may have side effects, in that case it is evaluated exactly N times, though the order of evaluation is not defined.
Also an aggregate itself can be considered as a solution of the task: (1..N => V) Aime[edit] void show_sublist(list l) { integer i, v; for (i, v in l) { o_space(sign(i)); o_integer(v); } } void show_list(list l) { integer i; list v; for (i, v in l) { o_text(" ["); show_sublist(v); o_text("]"); } o_byte('\n'); } list multiple_distinct(integer n, object o) { list l; call_n(n, l_append, l, o); return l; } integer main(void) { list l, z; # create a list of integers - `3' will serve as initializer l = multiple_distinct(8, 3); l_clear(l); # create a list of distinct lists - `z' will serve as initializer l_append(z, 4); l = multiple_distinct(8, z); # modify one of the sublists l_q_list(l, 3)[0] = 7; # display the list of lists show_list(l); return 0; } - Output: [4] [4] [4] [7] [4] [4] [4] [4] ALGOL 68[edit] MODE FOO = STRUCT(CHAR u,l); INT n := 26; [n]FOO f; # Additionally each item can be initialised # FOR i TO UPB f DO f[i] := (REPR(ABS("A")-1+i), REPR(ABS("a")-1+i)) OD; print((f, new line)) Output: AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz AppleScript[edit] -- MULTIPLE DISTINCT OBJECTS ------------------------------------------------- -- nObjects Constructor -> Int -> [Object] on nObjects(f, n) map(f, enumFromTo(1, n)) end nObjects -- TEST ---------------------------------------------------------------------- on run -- someConstructor :: a -> Int -> b script someConstructor on |λ|(_, i) {index:i} end |λ| end script nObjects(someConstructor, 6) --> {{index:1}, {index:2}, {index:3}, {index:4}, {index:5}, {index:6}} end run -- GENERIC FUNCTIONS --------------------------------------------------------- -- -- Lift 2nd class handler function into 1st class script wrapper -- mReturn :: Handler -> Script on mReturn(f) if class of f is script then f else script property |λ| : f end script end if end mReturn - Output: {{index:1}, {index:2}, {index:3}, {index:4}, {index:5}, {index:6}} AutoHotkey[edit] a := [] Loop, %n% a[A_Index] := new Foo() BBC BASIC[edit] REM 
Determine object count at runtime: n% = RND(1000) REM Declare an array of structures; all members are initialised to zero: DIM objects{(n%) a%, b$} REM Initialise the objects to distinct values: FOR i% = 0 TO DIM(objects{()},1) objects{(i%)}.a% = i% objects{(i%)}.b$ = STR$(i%) REM This is how to create an array of pointers to the same object: DIM objects%(n%), object{a%, b$} FOR i% = 0 TO DIM(objects%(),1) objects%(i%) = object{} Brat[edit] The wrong way, which creates an array of n references to the same new foo: n.of foo.new The right way, which calls the block n times and creates an array of new foos: n.of { foo.new } C[edit] foo *foos = malloc(n * sizeof(*foos)); for (int i = 0; i < n; i++) init_foo(&foos[i]); (Or if no particular initialization is needed, skip that part, or use calloc.) C++[edit]. C#[edit] using System; using System.Linq; using System.Collections.Generic; List<Foo> foos = Enumerable.Range(1, n).Select(x => new Foo()).ToList(); Clojure[edit] An example using pseudo-random numbers: user> (take 3 (repeat (rand))) ; repeating the same random number three times (0.2787011365537204 0.2787011365537204 0.2787011365537204) user> (take 3 (repeatedly rand)) ; creating three different random number (0.8334795669220695 0.08405601245793926 0.5795448744634744) user> Common Lisp[edit] The mistake is often written as one of these: (make-list n :initial-element (make-the-distinct-thing)) (make-array n :initial-element (make-the-distinct-thing)) which are incorrect since the form (make-the-distinct-thing) is only evaluated once and the single object is put in every position of the sequence. A commonly used correct version is: (loop repeat n collect (make-the-distinct-thing)) which evaluates (make-the-distinct-thing) n times and collects each result in a list. 
It is also possible to use map-into, the destructive map operation, to do this since it may take zero input sequences; this method can produce any sequence type, such as a vector (array) rather than a list, and takes a function rather than a form to specify the thing created: (map-into (make-list n) #'make-the-distinct-thing) (map-into (make-array n) #'make-the-distinct-thing) D[edit] For reference types (classes): auto fooArray = new Foo[n]; foreach (ref item; fooArray) item = new Foo(); For value types: auto barArray = new Bar[n]; barArray[] = initializerValue; Delphi[edit] Same object accessed multiple times (bad) var i: Integer; lObject: TMyObject; lList: TObjectList<TMyObject>; begin lList := TObjectList<TMyObject>.Create; lObject := TMyObject.Create; for i := 1 to 10 do lList.Add(lObject); // ... Distinct objects (good) var i: Integer; lList: TObjectList<TMyObject>; begin lList := TObjectList<TMyObject>.Create; for i := 1 to 10 do lList.Add(TMyObject.Create); // ... E[edit] E needs development of better map/filter/stream facilities. The easiest way to do this so far is with the accumulator syntax, which is officially experimental because we're not satisfied with it as yet. pragma.enable("accumulator") ... accum [] for _ in 1..n { _.with(makeWhatever()) } EchoLisp[edit] ;; wrong - make-vector is evaluated one time - same vector (define L (make-list 3 (make-vector 4))) L → (#(0 0 0 0) #(0 0 0 0) #(0 0 0 0)) (vector-set! (first L ) 1 '🔴) ;; sets the 'first' vector L → (#(0 🔴 0 0) #(0 🔴 0 0) #(0 🔴 0 0)) ;; right - three different vectors (define L (map make-vector (make-list 3 4))) L → (#(0 0 0 0) #(0 0 0 0) #(0 0 0 0)) (vector-set! (first L ) 1 '🔵) ;; sets the first vector L → (#(0 🔵 0 0) #(0 0 0 0) #(0 0 0 0)) ;; OK Elixir[edit] randoms = for _ <- 1..10, do: :rand.uniform(1000) Erlang[edit] List comprehension that will create 10 random integers between 1 and 1000. They will only be equal by accident. Randoms = [random:uniform(1000) || _ <- lists:seq(1,10)].
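As an aside (the task's Python section falls outside this excerpt), the canonical form of this pitfall in Python looks like the Elixir and Common Lisp examples above: sequence repetition copies references, while a list comprehension evaluates the element expression once per slot.

```python
n = 3

wrong = [[]] * n                 # n references to the SAME empty list
wrong[0].append(1)               # mutating "one" element shows up in all of them

right = [[] for _ in range(n)]   # the expression [] is evaluated n times
right[0].append(1)               # only the first element changes

print(wrong)   # [[1], [1], [1]]
print(right)   # [[1], [], []]
```

The `is` operator makes the difference visible: `wrong[0] is wrong[1]` is True, while `right[0] is right[1]` is False.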
Factor[edit] clone is the important word here to have distinct objects. This creates an array of arrays. 1000 [ { 1 } clone ] replicate Forth[edit] Works with any ANS Forth Needs the FMS-SI (single inheritance) library code located here: include FMS-SI.f include FMS-SILib.f \ create a list of VAR objects the right way \ each: returns a unique object reference o{ 0 0 0 } dup p: o{ 0 0 0 } dup each: drop . 10774016 dup each: drop . 10786896 dup each: drop . 10786912 \ create a list of VAR objects the wrong way \ each: returns the same object reference var x object-list2 list x list add: x list add: x list add: list p: o{ 0 0 0 } list each: drop . 1301600 list each: drop . 1301600 list each: drop . 1301600 Fortran[edit] program multiple ! Define a simple type type T integer :: a = 3 end type T ! Define a type containing a pointer type S integer, pointer :: a end type S type(T), allocatable :: T_array(:) type(S), allocatable :: S_same(:) integer :: i integer, target :: v integer, parameter :: N = 10 ! Create 10 allocate(T_array(N)) ! Set the fifth one to b something different T_array(5)%a = 1 ! Print them out to show they are distinct write(*,'(10i2)') (T_array(i),i=1,N) ! Create 10 references to the same object allocate(S_same(N)) v = 5 do i=1, N allocate(S_same(i)%a) S_same(i)%a => v end do ! Print them out - should all be 5 write(*,'(10i2)') (S_same(i)%a,i=1,N) ! 
Change the referenced object and reprint - should all be 3 v = 3 write(*,'(10i2)') (S_same(i)%a,i=1,N) end program multiple F#[edit] The wrong way: >List.replicate 3 (System.Guid.NewGuid());; val it : Guid list = [485632d7-1fd6-4d9e-8910-7949d7b2b485; 485632d7-1fd6-4d9e-8910-7949d7b2b485; 485632d7-1fd6-4d9e-8910-7949d7b2b485] The right way: > List.init 3 (fun _ -> System.Guid.NewGuid());; val it : Guid list = [447acb0c-092e-4f85-9c3a-d369e4539dae; 5f41c04d-9bc0-4e96-8165-76b41fe8cd93; 1086400c-72ff-4763-9bb9-27e17bd4c7d2] Go[edit] Useful: func nxm(n, m int) [][]int { d2 := make([][]int, n) for i := range d2 { d2[i] = make([]int, m) } return d2 } Probably not what the programmer wanted: func nxm(n, m int) [][]int { d1 := make([]int, m) d2 := make([][]int, n) for i := range d2 { d2[i] = d1 } return d2 } Groovy[edit] Correct Solution: def createFoos1 = { n -> (0..<n).collect { new Foo() } } Incorrect Solution: // Following fails, creates n references to same object def createFoos2 = {n -> [new Foo()] * n } Test: [createFoos1, createFoos2].each { createFoos -> print "Objects distinct for n = " (2..<20).each { n -> def foos = createFoos(n) foos.eachWithIndex { here, i -> foos.eachWithIndex { there, j -> assert (here == there) == (i == j) } } print "${n} " } println() } Output: Objects distinct for n = 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 Objects distinct for n = Caught: Assertion failed: assert (here == there) == (i == j) | | | | | | | | | | | 0 | 1 | | | false false | | [email protected] | true [email protected] Haskell[edit] Below, we are assuming that makeTheDistinctThing is a monadic expression (i.e. it has type m a where m is some monad, like IO or ST), and we are talking about distinctness in the context of the monad. Otherwise, this task is pretty meaningless in Haskell, because Haskell is referentially transparent (so two values that are equal to the same expression are necessarily not distinct) and all values are immutable. 
replicateM n makeTheDistinctThing in an appropriate do block. If it is distinguished by, say, a numeric label, one could write mapM makeTheDistinctThing [1..n] An incorrect version: do x <- makeTheDistinctThing return (replicate n x) Icon and Unicon[edit] An incorrect approach uses, e.g., the list constructor procedure with an initial value: items_wrong := list (10, []) # prints '0' for size of each item every item := !items_wrong do write (*item) # after trying to add an item to one of the lists push (items_wrong[1], 2) # now prints '1' for size of each item every item := !items_wrong do write (*item) A correct approach initialises each element separately: items := list(10) every i := 1 to 10 do items[i] := [] J[edit] i. Example use: i. 4 0 1 2 3 J almost always uses pass-by-value, so this topic is not very relevant to J. Note also that J offers a variety of other ways of generating multiple distinct objects. This just happens to be one of the simplest of them. In essence, though: generating multiple distinct objects is what J *does* - this is an elemental feature of most of the primitives. Java[edit] simple array: Foo[] foos = new Foo[n]; // all elements initialized to null for (int i = 0; i < foos.length; i++) foos[i] = new Foo(); // incorrect version: Foo[] foos_WRONG = new Foo[n]; Arrays.fill(foos, new Foo()); // new Foo() only evaluated once simple list: List<Foo> foos = new ArrayList<Foo>(); for (int i = 0; i < n; i++) foos.add(new Foo()); // incorrect: List<Foo> foos_WRONG = Collections.nCopies(n, new Foo()); // new Foo() only evaluated once Generic version for class given at runtime: It's not pretty but it gets the job done. The first method here is the one that does the work. The second method is a convenience method so that you can pass in a String of the class name. When using the second method, be sure to use the full class name (ex: "java.lang.String" for "String"). 
InstantiationExceptions will be thrown when instantiating classes that you would not normally be able to call new on (abstract classes, interfaces, etc.). Also, this only works on classes that have a no-argument constructor, since we are using newInstance(). public static <E> List<E> getNNewObjects(int n, Class<? extends E> c){ List<E> ans = new LinkedList<E>(); try { for(int i=0;i<n;i++) ans.add(c.newInstance());//can't call new on a class object } catch (InstantiationException e) { e.printStackTrace(); } catch (IllegalAccessException e) { e.printStackTrace(); } return ans; } public static List<Object> getNNewObjects(int n, String className) throws ClassNotFoundException{ return getNNewObjects(n, Class.forName(className)); } JavaScript[edit] ES5[edit] var a = new Array(n); for (var i = 0; i < n; i++) a[i] = new Foo(); ES6[edit] (n => { let nObjects = n => Array.from({ length: n + 1 }, (_, i) => { // optionally indexed object constructor return { index: i }; }); return nObjects(6); })(6); - Output: [{"index":0}, {"index":1}, {"index":2}, {"index":3}, {"index":4}, {"index":5}, {"index":6}] Julia[edit] A potential mistake would be writing: foo() = rand() # repeated calls change the result with each call repeat([foo()], outer=5) # but this only calls foo() once, clones that first value If the effect of calling foo() with every iteration is desired, better to use: [foo() for i in 1:5] # Code this to call the function within each iteration jq[edit]jq does not have mutable data types, and therefore in the context of jq, the given task is probably of little interest. However, it is possible to fulfill the task requirements for jq types other than "null" and "boolean": def Array(atype; n): if atype == "number" then [ range(0;n) ] elif atype == "object" then [ range(0;n)| {"value": . } ] elif atype == "array" then [ range(0;n)| [.] 
] elif atype == "string" then [ range(0;n)| tostring ] elif atype == "boolean" then if n == 0 then [] elif n == 1 then [false] elif n==2 then [false, true] else error("there are only two boolean values") end elif atype == "null" then if n == 0 then [] elif n == 1 then [null] else error("there is only one null value") end else error("\(atype) is not a jq type") end; # Example: Array("object"; 4) Kotlin[edit] // version 1.1.2 class Foo { val id: Int init { id = ++numCreated // creates a distict id for each object } companion object { private var numCreated = 0 } } fun main(args: Array<String>) { val n = 3 // say /* correct approach - creates references to distinct objects */ val fooList = List(n) { Foo() } for (foo in fooList) println(foo.id) /* incorrect approach - creates references to same object */ val f = Foo() val fooList2 = List(n) { f } for (foo in fooList2) println(foo.id) } - Output: 1 2 3 4 4 4 Logtalk[edit] Using prototypes, we first dynamically create a protocol to declare a predicate and then create ten prototypes implementing that protocol, which one with a different definition for the predicate: | ?- create_protocol(statep, [], [public(state/1)]), findall( Id, (integer::between(1, 10, N), create_object(Id, [implements(statep)], [], [state(N)])), Ids ). Ids = [o1, o2, o3, o4, o5, o6, o7, o8, o9, o10]. Using classes, we first dynamically create a class (that is its own metaclass) to declare a predicate (and define a default value for it) and then create ten instances of the class, which one with a different definition for the predicate: | ?- create_object(state, [instantiates(state)], [public(state/1)], [state(0)]), findall( Id, (integer::between(1, 10, N), create_object(Id, [instantiates(state)], [], [state(N)])), Ids ). Ids = [o1, o2, o3, o4, o5, o6, o7, o8, o9, o10]. 
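The reference-versus-copy distinction at the heart of this task can also be probed directly with Python's standard copy module (an illustrative aside, not part of the original wiki listing):

```python
import copy

template = {"a": [1, 2]}

shared = [template] * 3                                   # three references to ONE dict
distinct = [copy.deepcopy(template) for _ in range(3)]    # three independent deep copies

shared[0]["a"].append(3)     # visible through every element of `shared`
distinct[0]["a"].append(3)   # affects only distinct[0]

print(shared[1]["a"])        # [1, 2, 3] - the mutation leaked
print(distinct[1]["a"])      # [1, 2]    - the copies stayed independent
```

Note that `copy.copy` (a shallow copy) would still share the inner list, so for nested structures `copy.deepcopy` is what actually yields distinct objects all the way down.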
Lua[edit] -- This concept is relevant to tables in Lua local table1 = {1,2,3} -- The following will create a table of references to table1 local refTab = {} for i = 1, 10 do refTab[i] = table1 end -- Instead, tables should be copied using a function like this function copy (t) local new = {} for k, v in pairs(t) do new[k] = v end return new end -- Now we can create a table of independent copies of table1 local copyTab = {} for i = 1, 10 do copyTab[i] = copy(table1) end Mathematica[edit] The mistake is often written as: {x, x, x, x} /. x -> Random[] Here Random[] can be any expression that returns a new value which is incorrect since Random[] is only evaluated once. e.g. {0.175125, 0.175125, 0.175125, 0.175125} A correct version is: {x, x, x, x} /. x :> Random[] which evaluates Random[] each time e.g. ->{0.514617, 0.0682395, 0.609602, 0.00177382} Maxima[edit] a: [1, 2]$ b: makelist(copy(a), 3); [[1,2],[1,2],[1,2]] b[1][2]: 1000$ b; [[1,1000],[1,2],[1,2]] Modula-3[edit] Similar to the Ada version above: VAR a: ARRAY OF T This creates an open array (an array who's size is not known until runtime) of distinct elements of type T. Modula-3 does not define what values the elements of A have, but it does guarantee that they will be of type T. NGS[edit] Incorrect, same object n times: { [foo()] * n } Correct: { foo * n } Nim[edit] The simplest form of initialization works, but is a bit cumbersome to write: proc foo(): string = echo "Foo()" "mystring" let n = 100 var ws = newSeq[string](n) for i in 0 .. <n: ws[i] = foo() If actual values instead of references are stored in the sequence, then objects can be initialized like this. Objects are distinct, but the initializer foo() is called only once, then copies of the resulting object are made: proc newSeqWith[T](len: int, init: T): seq[T] = result = newSeq[T] len for i in 0 .. 
<len: result[i] = init var xs = newSeqWith(n, foo()) To get the initial behaviour, where foo() is called to create each object, a template can be used: template newSeqWith2(len: int, init: expr): expr = var result {.gensym.} = newSeq[type(init)](len) for i in 0 .. <len: result[i] = init result var ys = newSeqWith2(n, foo()) OCaml[edit] For arrays: Incorrect: Array.make n (new foo);; (* here (new foo) can be any expression that returns a new object, record, array, or string *) which is incorrect since new foo is only evaluated once. A correct version is: Array.init n (fun _ -> new foo);; Oforth[edit] The right way : the block sent as parameter is performed n times : ListBuffer init(10, #[ Float rand ]) println - Output: [0.281516067014556, 0.865269004241814, 0.101437334065733, 0.924166132625347, 0.88135127712 167, 0.176233635448137, 0.963837773505447, 0.570264579328023, 0.385577832707742, 0.9086026 42741616] The "wrong" way : the same value is stored n times into the list buffer ListBuffer initValue(10, Float rand) println - Output: [0.314870762000671, 0.314870762000671, 0.314870762000671, 0.314870762000671, 0.31487076200 0671, 0.314870762000671, 0.314870762000671, 0.314870762000671, 0.314870762000671, 0.314870 762000671] ooRexx[edit] -- get an array of directory objects array = fillArrayWith(3, .directory) say "each object will have a different identityHash" say loop d over array say d~identityHash end ::routine fillArrayWith use arg size, class array = .array~new(size) loop i = 1 to size -- Note, this assumes this object class can be created with -- no arguments array[i] = class~new end return array Oz[edit] With lists, it is difficult to do wrong. 
declare Xs = {MakeList 5} %% a list of 5 unbound variables in {ForAll Xs OS.rand} %% fill it with random numbers (CORRECT) {Show Xs} With arrays on the other hand, it is easy to get wrong: declare Arr = {Array.new 0 10 {OS.rand}} %% WRONG: contains ten times the same number in %% CORRECT: fill it with ten (probably) different numbers for I in {Array.low Arr}..{Array.high Arr} do Arr.I := {OS.rand} end Pascal[edit] Perl[edit] incorrect: (Foo->new) x $n # here Foo->new can be any expression that returns a reference representing # a new object which is incorrect since Foo->new is only evaluated once. A correct version is: map { Foo->new } 1 .. $n; which evaluates Foo->new $n times and collects each result in a list. Perl 6[edit] Unlike in Perl 5, the list repetition operator evaluates the left argument thunk each time, so my @a = Foo.new xx $n; produces $n distinct objects. Phix[edit] Phix uses shared reference counts with copy-on-write semantics. Creating n references to the same mutable object is in fact the norm, but does not cause any of the issues implicitly feared in the task description. In fact, it is not possible to create shared references such that when one is updated they all are, instead store an index to another table that stores the object, rather than the object itself. Also, apart from low-level trickery and interfacing to shared libraries, there are no pointers to normal hll objects. Sequences need not be homogeneous, they can contain any type-mix of elements. sequence s = repeat("x",3*rand(3)) ?s s[rand(length(s))] = 5 ?s s[rand(length(s))] &= 'y' ?s s[rand(length(s))] = s ?s - Output: {"x","x","x","x","x","x"} {"x","x","x","x","x",5} {"xy","x","x","x","x",5} {"xy",{"xy","x","x","x","x",5},"x","x","x",5} Note that the last statement did not create a circular structure, something that is not possible in Phix, except via index-emulation. 
I suppose it is possible that someone could write sequence s = repeat(my_func(),5) and expect my_func() to be invoked 5 times, but for that you need a loop sequence s = repeat(0,5) for i=1 to length(s) do s[i] = my_func() end for PicoLisp[edit] Create 5 distinct (empty) objects: : (make (do 5 (link (new)))) -> ($384717187 $384717189 $384717191 $384717193 $384717195) Create 5 anonymous symbols with the values 1 .. 5: : (mapcar box (range 1 5)) -> ($384721107 $384721109 $384721111 $384721113 $384721115) : (val (car @)) -> 1 : (val (cadr @@)) -> 2 PowerShell[edit] Do some randomization that could easily return three equal values (but each value is a separate value in the array): 1..3 | ForEach-Object {((Get-Date -Hour ($_ + (1..4 | Get-Random))).AddDays($_ + (1..4 | Get-Random)))} | Select-Object -Unique | ForEach-Object {$_.ToString()} - Output: 11/18/2016 3:32:16 AM 11/21/2016 3:32:16 AM 11/22/2016 7:32:16 AM Run the same commands a few times and the Select-Object -Unique command filters equal (but separate values): 1..3 | ForEach-Object {((Get-Date -Hour ($_ + (1..4 | Get-Random))).AddDays($_ + (1..4 | Get-Random)))} | Select-Object -Unique | ForEach-Object {$_.ToString()} - Output: 11/18/2016 4:32:17 AM 11/21/2016 5:32:17 AM PureBasic[edit] n=Random(50)+25 Dim A.i(n) ; Creates a Array of n [25-75] elements depending on the outcome of Random(). ; Each element will be initiated to zero. For i=0 To ArraySize(A()) A(i)=2*i Next i ; Set each individual element at a wanted (here 2*i) value and ; automatically adjust accordingly to the unknown length of the Array. NewList *PointersToA() For i=0 To ArraySize(A()) AddElement(*PointersToA()) *PointersToA()=@A(i) ; Create a linked list of the same length as A() above. ; Each element is then set to point to the Array element ; of the same order. ForEach *PointersToA() Debug PeekI(*PointersToA()) ; Verify by sending each value of A() via *PointersToA() ; to the debugger's output. 
Python[edit] The mistake is often written as: [Foo()] * n # here Foo() can be any expression that returns a new object which is incorrect since Foo() is only evaluated once. A common correct version is: [Foo() for i in range(n)] which evaluates Foo() n times and collects each result in a list. This last form is also discussed here, on the correct construction of a two dimensional array. R[edit] The mistake is often written as: rep(foo(), n) # foo() is any code returning a value A common correct version is: replicate(n, foo()) which evaluates foo() n times and collects each result in a list. (Using simplify=TRUE lets the function return an array, where possible.) Racket[edit] #lang racket ;; a list of 10 references to the same vector (make-list 10 (make-vector 10 0)) ;; a list of 10 distinct vectors (build-list 10 (λ (n) (make-vector 10 0))) Ruby[edit] The mistake is often written as one of these: [Foo.new] * n # here Foo.new can be any expression that returns a new object Array.new(n, Foo.new) which are incorrect since Foo.new is only evaluated once, and thus you now have n references to the same object. A common correct version is: Array.new(n) { Foo.new } which evaluates Foo.new n times and collects each result in an Array. This last form is also discussed here, on the correct construction of a two dimensional array. Scala[edit] Yielding a normal class instance here (rather than a case class instance), as case objects are identical if created with the same constructor arguments. for (i <- (0 until n)) yield new Foo() Scheme[edit] There is a standard function make-list which makes a list of size n, but repeats its given value. sash[r7rs]> (define-record-type <a> (make-a x) a? (x get-x)) #<unspecified> sash[r7rs]> (define l1 (make-list 5 (make-a 3))) #<unspecified> sash[r7rs]> (eq? (list-ref l1 0) (list-ref l1 1)) #t In SRFI 1, a function list-tabulate is provided which instead calls a function to create a fresh value each time. 
sash[r7rs]> (define l2 (list-tabulate 5 (lambda (i) (make-a i)))) #<unspecified> sash[r7rs]> (eq? (list-ref l2 0) (list-ref l2 1)) #f sash[r7rs]> (map get-x l2) (0 1 2 3 4) Seed7[edit] The example below defines the local array variable fileArray. The times operator creates a new array value with a specified size. Finally multiple distinct objects are assigned to the array elements. $ include "seed7_05.s7i"; const func array file: openFiles (in array string: fileNames) is func result var array file: fileArray is 0 times STD_NULL; # Define array variable local var integer: i is 0; begin fileArray := length(fileNames) times STD_NULL; # Array size computed at run-time for key i range fileArray do fileArray[i] := open(fileNames[i], "r"); # Assign multiple distinct objects end for; end func; const proc: main is func local var array file: files is 0 times STD_NULL; begin files := openFiles([] ("abc.txt", "def.txt", "ghi.txt", "jkl.txt")); end func; Sidef[edit] [Foo.new] * n; # incorrect (only one distinct object is created) n.of {Foo.new}; # correct Smalltalk[edit] |c| "Create an ordered collection that will grow while we add elements" c := OrderedCollection new. "fill the collection with 9 arrays of 10 elements; elements (objects) are initialized to the nil object, which is a well-defined 'state'" 1 to: 9 do: [ :i | c add: (Array new: 10) ]. "However, let us show a way of filling the arrays with object number 0" c := OrderedCollection new. 1 to: 9 do: [ :i | c add: ((Array new: 10) copyReplacing: nil withObject: 0) ]. "demonstrate that the arrays are distinct: modify the fourth of each" 1 to: 9 do: [ :i | (c at: i) at: 4 put: i ]. "show it" c do: [ :e | e printNl ]. 
Swift[edit] class Foo { } var foos = [Foo]() for i in 0..<n { foos.append(Foo()) } // incorrect version: var foos_WRONG = [Foo](count: n, repeatedValue: Foo()) // Foo() only evaluated once Tcl[edit] Tcl values are implemented using copy-on-write reference semantics with no (exposed) mechanism for determining whether two values are really references to the same value, which makes this task relatively moot. However, in the case where there is a collection of objects it becomes important to perform the construction correctly (i.e., repeatedly) otherwise it is just the name of the object that will be copied when it is written to. package require TclOO # The class that we want to make unique instances of set theClass Foo # Wrong version; only a single object created set theList [lrepeat $n [$theClass new]] # Right version; objects distinct set theList {} for {set i 0} {$i<$n} {incr i} { lappend theList [$theClass new] } XPL0[edit] code Reserve=3, IntIn=10; char A; int N, I; [N:= IntIn(8); \get number of items from command line A:= Reserve(N); \create array of N bytes for I:= 0 to N-1 do A(I):= I*3; \initialize items with different values for I:= 0 to N-1 do A:= I*3; \error: "references to the same mutable object" ] zkl[edit] The pump and partial application methods are useful tools for creating initialized lists.
n:=3; n.pump(List) //-->L(0,1,2) n.pump(List,List) //-->L(0,1,2), not expected because the second list can be used to describe a calculation n.pump(List,List(Void,List)) //--> L(L(),L(),L()) all same List(Void,List) means returns List, which is a "known" value n.pump(List,List.fpM("-")) //--> L(L(),L(),L()) all distinct fpM is partial application: call List.create() n.pump(List,(0.0).random.fp(1)) //--> 3 [0,1) randoms L(0.902645,0.799657,0.0753809) n.pump(String) //-->"012", default action is id function class C{ var n; fcn init(x){n=x} } n.pump(List,C) //--> L(C,C,C) n.pump(List,C).apply("n") //-->L(0,1,2) ie all classes distinct
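As a closing illustration, the pitfall most of the entries above warn about is easiest to see in Python when the repeated element is mutable. This is a minimal sketch; the variable names are purely illustrative:

```python
# Wrong: the inner list is built once, then the same object is
# referenced three times, so a change through one "row" shows up in all.
shared = [[0, 0, 0]] * 3
shared[0][0] = 99
print(shared)    # [[99, 0, 0], [99, 0, 0], [99, 0, 0]]

# Right: the comprehension evaluates [0, 0, 0] once per iteration,
# so the three lists are distinct objects.
distinct = [[0, 0, 0] for _ in range(3)]
distinct[0][0] = 99
print(distinct)  # [[99, 0, 0], [0, 0, 0], [0, 0, 0]]
```

The `is` operator makes the difference explicit: `shared[0] is shared[1]` is true, while `distinct[0] is distinct[1]` is false.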
https://rosettacode.org/wiki/Multiple_distinct_objects
For experienced users:

1. conda create -n DAVE DAVE -c conda-forge
2. Optional: install Blender from blender.org
3. python -m DAVE.run_gui

Index

Prefer a video together with these written instructions? Then see: youtube.

Install python

DAVE runs on python. So the first step is to install python. The easiest way of doing this is by downloading and installing the Miniconda distribution. Miniconda can be downloaded from. Pick python version 3.7 or higher. During the installation you can select the “just for me” option. This means that admin rights are not needed. It is not needed to register or add anaconda to the path, although registering doesn’t hurt if this is the only python installation on your machine. The installation should create a menu entry called “Anaconda Prompt (miniconda3)”. This can be used to start a command prompt with the just-installed python registered as default.

Install required packages

DAVE needs a number of common python packages to work. All of these are available in the conda and conda-forge channels and will be installed automatically together with DAVE. Start the command prompt via the “Anaconda Prompt (miniconda3)” as above. The first thing to do is to make everything up to date. This is done by running the following command:

conda update conda

Press y [enter] to proceed. This will now display a list of packages that will be downloaded and installed. Press y [enter] to continue. Then install DAVE. Miniconda supports the creation of multiple “environments” which can exist next to each other. A benefit of this is that it avoids conflicting packages (info: <link>).

Own environment

To create an environment “DAVE” and install DAVE in it (recommended):

conda create -n DAVE DAVE -c conda-forge
activate DAVE

Next to other python packages

To install DAVE in the active python environment:

conda install -c conda-forge DAVE

This takes enough time to go for a cup of coffee. Enjoy.
Note: many of these packages can also be installed using pip, however the pip version of vtk (8.1.2) is not compatible with pyside2 and will not work. So USE CONDA.

Start the GUI

The GUI is not the main part of DAVE, but it is the easiest place to start. To start the GUI directly from the anaconda prompt use:

activate DAVE (<-- only needed if you installed DAVE in its own environment)
python -m DAVE.run_gui

Alternatively the GUI can be started from python as well. In that case use:

from DAVE.run_gui import run
run()

You may also want to create a batch-file to start DAVE. In that case the contents of that file should be:

CALL "C:\Users\DAVE\Miniconda3\Scripts\activate.bat" DAVE
python -m DAVE.run_gui
pause

where "C:\Users\DAVE\Miniconda3\Scripts\activate.bat" should be modified to reflect the location where you installed miniconda.

Use DAVE via Jupyter Lab

JupyterLab is a great way to document your model and results. See for motivation. To make jupyter ready for DAVE we need to introduce DAVE to jupyter.

Register kernel for Jupyter

If you wish to use DAVE through Jupyter Lab then the DAVE environment needs to be registered with ipykernel:

activate DAVE
conda install ipykernel
python -m ipykernel install --user --name DAVE --display-name "Python (DAVE)"

Optional: Install Blender

Blender is an excellent and free tool for 3d modeling, the artist way. It can be used for creating visuals, checking meshes and for rendering DAVE scenes. Blender can be obtained from blender.org. After installing blender, make sure that windows is configured to open .blend files using blender, for example by downloading this blender file, double-clicking it, and, if it doesn't automatically open with blender, selecting blender as the program to open .blend files with.

Configuration

See this section in the documentation

Troubleshooting

If you are having trouble installing DAVE then let us know, either by leaving a comment on this page or by opening an issue on the github page.
https://www.open-ocean.org/dave-installation/
How to manually stop video recording?

Hey guys, I am new to the world of programming; I am not from a computer science background. Currently, I am working on a project in which I am using a Raspberry Pi 3 and a USB webcam (I am not using the Pi camera). My objective is to record a video using the webcam. I have interfaced a button switch with one of the GPIO pins, so if I press the push button once it should start recording the video, and on the next press it must stop recording. I used the example code below to check whether I am able to record the video. I ran the code from the terminal and yes, it works, but whenever I want to stop recording I have to press Ctrl+C, which terminates the entire process. I also want to execute a different operation once the video stops, but I have no idea how to do that. Please, I need your guidance.

import numpy as np
import cv2

cap = cv2.VideoCapture(0)  # defining the webcam
fourcc = cv2.cv.CV_FOURCC(*'XVID')
out = cv2.VideoWriter('webcamOut.avi', fourcc, 30.0, (640, 480))
while True:
    ret, frame = cap.read()
    out.write(frame)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
out.release()
cv2.destroyAllWindows()

Also, one more thing: if I set the resolution higher than 480 (i.e. 720 or 1080), the output video is saved with a 5 kB file size. What should I do in that case? Thanks.

Your code clearly shows that it will stop both capturing and recording once you press the "q" key (in the GUI window); you probably did not notice it yet. You could easily insert some code just before cap.release(). If you want to do the same using GPIO pins, you'll have to insert some logic for that into the while loop. Unfortunately, that part is off-topic here, so we can't help you with it.
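To stop without Ctrl+C and then run some follow-up work, the loop can simply fall through to code placed after the writer is released. The sketch below is illustrative, not a drop-in answer: it assumes the OpenCV 3+ spelling cv2.VideoWriter_fourcc (the original post uses the OpenCV 2.x cv2.cv.CV_FOURCC), and the toggle helper and 'r'/'q' key bindings are stand-ins — a GPIO button callback could flip the same flag instead of a keypress.

```python
def toggle(recording):
    """Flip the recording state; a GPIO button handler could call this too."""
    return not recording

def record(path="webcamOut.avi", width=640, height=480, fps=30.0):
    import cv2  # imported here so the small toggle() helper has no OpenCV dependency

    cap = cv2.VideoCapture(0)
    fourcc = cv2.VideoWriter_fourcc(*"XVID")  # cv2.cv.CV_FOURCC on OpenCV 2.x
    out = cv2.VideoWriter(path, fourcc, fps, (width, height))
    recording = True
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        if recording:
            out.write(frame)
        cv2.imshow("frame", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord("r"):      # pause/resume -- stand-in for a button press
            recording = toggle(recording)
        elif key == ord("q"):    # leave the loop instead of killing the process
            break
    cap.release()
    out.release()
    cv2.destroyAllWindows()
    # Anything placed here runs after the file is closed, e.g. post-processing:
    print("finished recording:", path)
```

Calling record() behaves like the original script, except that 'r' pauses/resumes writing and 'q' exits the loop cleanly, so the statements after the release calls get a chance to run.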
https://answers.opencv.org/question/146991/how-to-manually-stop-video-recording/
1. Identify the problem.
2. Try to reproduce the problem.
3. Identify what component has the problem.
4. Locate the source code for the component.
5. Figure out how to build the source code you’ve found.
6. Ensure that the source code you’ve found compiles into a component that has the problem you have identified.
7. Locate where in the source code the problem lies.
8. Read and understand how the source code produces the problem.
9. Build a plan for how to change the source code so it doesn’t have the problem anymore.
10. Make sure you’ve made a backup of the source code somewhere.
11. Make the necessary changes ensuring you don’t break anything else.
12. Re-compile the component.
13. Try to reproduce the problem with the new component. If you can, go back to step 8.
14. Make sure you fixed the problem in the best way possible.
15. Try to tell someone what you fixed and give them your changes.

About the Author

Trent Waddington is a professional software developer and maintenance programmer associated with the Centre for Software Maintenance at the University of Queensland, Australia. His interest in prototype based programming languages began in 1997 when he reviewed the work of researchers at Sun Labs working on the Self project. He has written compiler backends for the java virtual machine and is an active contributor to open source projects including the Boomerang open source decompiler. If you would like to see your thoughts or experiences with technology published, please consider writing an article for OSNews.

“As prototype based programming languages become fashionable again we may yet see some very interesting integration of revision control systems with exploratory development environments.”

Not just that, also how well it will be able to fit into the rest of the software development process. Software development is still wrestling with the issues between “I have an idea” and “here, code this up”.
The benefits of fixing the program while its running are not limited to prototype-based languages. Class-based languages like Lisp and Smalltalk have been doing these things for years. What’s really needed for this sort of thing is not a prototype-based language, but a language runtime that is tightly integrated with both the source code and the compiler. That way, when the source code is edited, the compiler is invoked to generate new machine code, and the runtime makes sure that the appropriate references are updated. 4,5,6,10 and 12 can be solved on debian thusly: apt-get source <packagename> dpkg-buildpackage dpkg -i <.deb file> I’m sure you can use srpms to do the same thing. -Mark Not if you have Woody in you source list. You’ll be waiting a looong time As much as I hate those “our goober solves that problem”-type posts, our goober — DTrace — does solve this particular problem, or at least Step 7 of it. DTrace can quickly point you to the errant problem in a running app (or collection of apps) without recompiling, relinking or restarting it — usually so quickly and concisely that fixing it becomes the more difficult part. We have used DTrace with great success to find problems in gigantic software systems. More details on DTrace are here: Alternatively, you can look at the OSNews story on DTrace: – Bryan ———————————————————————- —— Bryan Cantrill, Solaris Kernel Development. bmc@eng.sun.com (650) 786-3652 Umm… Lisp Machine anyone? I would have to disagree here. What you propose would be handy, but it still wouldn’t fix the heart of the problem – The old fashioned try, test, debug, rewrite loop wouldn’t be broken. The importance of prototype based languages over what you suggest would be the Dijkstra inspired top-down, post-condition, to specification, & {via recursive step-wise refinement} to eventual code generation. Even this though wouldn’t suffice. Instead, it would lay the ground work for a tight coupling between prototyping & formal methods. 
What we need is a language in which we can write a specification & via stepwise refinement simultaneously generate the code & a proof of correctness. As long as we are stuck in the test & debug loop we can never guarantee that our code will always work. We need a mathematical approach that can prove the code is correct. Formal methods formal methods formal methods! I really don’t see how prototype-based languages address your concerns about top-down design. My comment was in response to the fact that the author attributed runtime modfication of code to prototype-based programming languages, while in reality it is attributable to any of the languages that have more highly-evolved runtimes. As for your comments about formal methods, I (along with many others) am not a big believer in them. Alan Kay once said: “Until real software engineering is developed, the next best practice is to develop with a dynamic system that has extreme late binding in all aspects.” I don’t believe we have gotten to a point where we can reliably and easily treat complex software with a mathematical level of rigour. The tools we do have today often sacrifice expressiveness for precision, which I consider an unacceptable trade-off. In the end, it comes down to the fact that the science of software development (as opposed to computational theory) is underdeveloped. Engineers in other fields have all sorts of formal methods for treating large, complex systems with a mathematical level of precision, and good tools to automate that process, but software developers do not. Until such tools are invented, I agree with Alan Kay that the next best thing is an extremely dynamic environment that allows for iterative, incremental devlepment. but a language runtime that is tightly integrated with both the source code and the compiler Maybe some ultra-JIT system could do that, but don’t we really want some sort of uber-repository that can do a lot more that just slow down our execution? 
Maybe IBMs integration of Eclipse with UML will be a step in the right direction… ReactoGraph is an example of another visual prototype-based language that also uses message-passing and visual data flow programming… Enjoy! This is why open source will never work. No one wants to spend time to make a serious, high performance program for free. Eh? You don’t need a JIT. Most such systems have native-code compilers. You do pay a 10-20MB memory overhead, for having the compiler integrated into the runtime, but a good implementation will stuff that into a shared library so you only have to pay it once. To get a taste of what such development environments can do, there are a number of products you can check out. There is: IBM’s Visual Age Smalltalk: Dolphin Smalltalk: Cincom’s VisualWorks: All of these have free demos that you can play with. Lately, I’ve been playing with Functional Developer, an interactive IDE for Dylan. They’ve got a free basic edition IDE for Windows. It does some really nifty stuff. You can do things like catch an exception, edit the offending methods or classes, and restart from where the exception was thrown. You can also run more than one program at the same time under the debugger, to make it easier to debug synchronization issues between client/server programs. The window of innovation only opens every few years. Java crept in just as it was clear C++ was too complex. The window only stayed open for a year or so. Python was too late. Try again in five years. Linux waltzed through the door to innovation just as the old unixes died and Windows looked around for any competition. SkyOS was too late. Maybe in a decade the door for a new OS will open again. I’m not trying to be a troll – I’m serious in stating that you cannot hope to re-seed the marketplace with a new tool or method when that market has already picked the next model and has not found fault with it (yet). The window for a new methodology of programming languages is not open. 
The Java/C# model, as repugnant as it might be, will need to run its course for another few years before people seriously start looking for alternatives. Frankly trying to push an alternative at this point is a waste of time, as is trying to push a linux alternative. Rayiner Hashem quoted Alan Kay: “Until real software engineering is developed, the next best practice is to develop with a dynamic system that has extreme late binding in all aspects.” I find this strange. In my experience, the earlier the binding happens, the fewer chances for errors. Furthermore, you can get a long way towards more maintainability and fewer bugs by adopting a good coding style. I’m not talking about rules about where to put your braces and spaces, but start by coding what you want done, and defer coding how to as late as possible. Generic programming helps here, and you can start working in this way today with C++. I’ve explored a couple of packages using netbeans for java and the debugging tools address some of the problems, notably about understanding the what code does what behaviour…. The packages had visibly been coded using netbeans so that did help.. When you’re using a prototype based language you can edit objects as they’re running. mh.. you mean something like: if var def object.method .. end end else def obj2.meth() .. end end or you mean Object.define_method(code)? Or similar mechanism in ruby, smalltalk, python, CLOS.. There is no separate compilation step, Oh so you mean an interpreted language.. and your program is always running. ..or an *image based* language. SmallTalk had this for looong time, this does not relates anyhow with prototypes. As prototype based programming languages become fashionable again we may yet see some very interesting integration of revision control systems with exploratory development environments. You never took a look at Squeak, right? never heard of monticello? Guess what? Squeak has open/always running/editable objects. 
And it has a versioning system built on its architecture. Like any other SmallTalk system. I can’t understand how prototypes have a role in this article. “””but start by coding what you want done, and defer coding how to as late as possible. Generic programming helps here, and you can start working in this way today with C++. “”” I’ve found that only works when you can design the whole system top to bottom and you have relatively static requirements. In most cases, an overall design, a data structure design, and a bottom-up design seem to give a better success rate (anecdotally). As for generic programming in C++, yuck, ML derivatives offer a much better alternative. As a hobbyist hacker, I shudder to even consider looking at C/C++ or even Java source code of a large project to find a particular bug. What would significantly help me overcome this hurdle would be if there were some good, documented UML for the software in question, in particular class diagrams and maybe some sequence diagrams and use case diagrams. People are good at ‘talking’ the benefits of modelling, best-practice development processes and design patterns. It’s time people start ‘walking’ what they’re ‘talking’. Let’s see some UML for Apache, Mozilla Firebird, MySQL, and even parts of the Linux kernel, XFree86, bash, and such, with an emphasis on those areas most likely to be contributed to and/or bug-fixed by the community. There is a saying in the Lisp community — languages don’t just determine how you express solutions to your problems, but how you think about your problems. The whole idea of late-bound languages, as Alan Kay refers to, is that they make it easy to think about your problem with code. Solving “hard” problems in static languages is usually a chore. You have to deal with all sorts of requirements that have nothing to do with solving the problem. You have to specify types, for example, even though they just get in the way of refactoring your code.
Thus, you have to think independently of the language, and once you think you’ve got a solution, implement it in the language. However, many problems are much easier to think about when you have the computer to help. You can ‘think’ by prototyping in a late-bound language. You can toy with your ideas immediately, and quickly build working prototypes. Because of the ease with which dynamic code is refactored, that prototype can be easily turned into production code, so that effort is not wasted, and no bugs are introduced in rewriting the prototype. Its a matter of top-down vs bottom-up development. In the former model, the computer doesn’t help you solve your problem, it just offers a way to express the solution. In the latter model, the computer is an active partner in the problem-solving process. In the former model, you often have to write a significant amount of code to the spec before you can get any of it to work. In the latter, you start with at tiny core program, and gradually refactor and build it up, keeping it working the whole time. Its the whole idea of iterative development, which is taking off in extreme programming circles these days. There is a nice article about the history of ID here: I’ve seen a few comments thus far that have basically amounted to “Smalltalk can do that!!” Clearly Smalltalk (and LISP) is the grandfather of runtime editable objects, but in the same breath that I mentioned runtime editability I also mentioned program understanding. In particular:. That applies to Smalltalk also. The level of indirection from programming a class and then instantiating that class is simply too great for runtime editing.
https://www.osnews.com/story/6523/software-maintenance-and-prototype-based-languages/
CC-MAIN-2021-17
refinedweb
2,317
62.88
12 August 2008 14:31 [Source: ICIS news] TORONTO (ICIS news)--Valspar’s fiscal third-quarter results released on Monday were marked by lower than expected operating profits and margins even though sales were higher than expected, JP Morgan said on Tuesday. Higher costs for the US paint producer's key raw materials - pigments, binders, solvents and additives - led to a year-on-year gross margin reduction from 31.2% to 29% for the three months ended 25 July, the bank said. The quarter’s operating profits fell 14.3% to $88.4m (€59.3m) as operating margin slipped from 11.5% to 9.2%, reflecting the gross margin decrease. JP Morgan expected Valspar to raise its average annual selling prices by 2.0%-2.5% in 2008, which would only partially offset raw material cost inflation, it said. Meanwhile, Valspar’s consolidated fiscal third-quarter sales grew 7.2% to $958m, mainly on higher prices and a better sales mix, acquisition benefits and favourable currency effects, JP Morgan said. Overall reported third-quarter profit from operations was 50 cents on an earnings per share (EPS) basis, down from 57 cents in the year-ago period and below JP Morgan’s estimate of 52 cents, the analysts said.
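The reported figures are internally consistent; a quick check (a sketch using only numbers quoted in the story):

```python
# Operating margin implied by the quarter's reported numbers.
operating_profit_m = 88.4    # $m, down 14.3% year on year
sales_m = 958.0              # $m, up 7.2% year on year

operating_margin_pct = operating_profit_m / sales_m * 100
# ~9.2%, matching the margin the bank cited (down from 11.5%)
```

Dividing the reported operating profit by reported sales reproduces the 9.2% margin in the article.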
http://www.icis.com/Articles/2008/08/12/9148155/valspar-sees-profit-margins-shrink-jp-morgan.html
CC-MAIN-2013-20
refinedweb
209
57.06
Hi Nick, > When i run my stylesheet I have been getting a namespace error for > <ref:SECTION. > It says the namespace ref is not declared. in my attemps to fix > this, the only way i could get it to work was by adding > xmlns: in the xml file. XSLT cannot operate on any old XML files. As well as being well formed, the XML files have to adhere to the XML Namespaces Recommendation. What that means is that you can't use colons in element and attribute names willy-nilly -- you can only use them if you're using them to separate a prefix from a local part of an element or attribute name. And the prefix has to be declared with a namespace declaration. If the XML files that you're working with *don't* adhere to the XML Namespaces Recommendation then you can't use XSLT with them. The best thing that you can do is either add the namespace declaration to the files (preferably with a meaningful namespace name) or stop using colons in the element names (perhaps replace ref:SECTION with ref.SECTION instead). You could get round editing the existing files by generating XML documents that use entities to pull in the existing files, something like:

<!DOCTYPE wrapper [ <!ENTITY file SYSTEM 'file.xml'> ]>
<wrapper xmlns:ref="...">
&file;
</wrapper>

but creating one of these for each of your files is likely to be just as much trouble as editing the files you have. Cheers, Jeni --- Jeni Tennison XSL-List info and archive:
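The failure Jeni describes is easy to reproduce with any namespace-aware parser; here is a small sketch (not from the original thread, with a made-up namespace URI) using Python's standard library:

```python
import xml.etree.ElementTree as ET

# An undeclared prefix violates the XML Namespaces rules, so parsing fails.
try:
    ET.fromstring("<doc><ref:SECTION/></doc>")
    undeclared_ok = True
except ET.ParseError:   # reported as an "unbound prefix" error
    undeclared_ok = False

# Declaring the prefix (any URI serves for illustration) makes the same
# document well formed; internally the element name becomes {uri}SECTION.
doc = ET.fromstring('<doc xmlns:ref="http://example.com/ref"><ref:SECTION/></doc>')
```

This mirrors the advice in the reply: the colon is only legal when the prefix before it has a matching namespace declaration in scope.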
http://www.oxygenxml.com/archives/xsl-list/200206/msg01143.html
CC-MAIN-2013-20
refinedweb
263
68.7
Cancel order example? Hi all, what I'm trying to do in english is this: If price closes under the lowest of last 50 candles: create 2 layers of limit buy orders at 0.9 * lowest and 0.8 * lowest. if one layer is filled, limit sell at buyprice + 5% if both layers are filled, cancel the sell order above and set a new limit sell order at averagebuyprice + 5% #------------------------------------------ The canceling part is needed because the next round my signal enters, the unfilled sell limit will be there waiting for me. I haven't found any working examples of cancelling a specific order, and can't make it just by reading the docs. Strat is working as intended, but I can't get to cancel the sell order. What I'm trying rn is: when one layer is filled:

self.order = self.sell(exectype=Order.Limit, price=self.buy1 * 1.05, tradeid=self.something)

if the second layer gets filled:

self.cancel(ref=self.something)
self.order = self.sell(exectype=Order.Limit, price=self.averagebuy * 1.05)

any help or live example is appreciated. I know I could probably ease my life using oco or brackets but I really would like to understand how to cancel orders just to have it in my toolbox. @firebee You are going to have to track your orders. You can do this either with a list or a dictionary. If just trading one symbol, a list will suffice, otherwise a dict of lists with symbols as keys. So, based upon what you said above, when you place your orders, you will have

# two limit buy orders.
self.ords = [o1, o2]

When these orders fill, you will get a notification in notify_order where you can filter for completed trades. As your trades complete, you can take action.

def notify_order(self, order):
    # Check if an order has been completed
    if order.status in [order.Completed]:
        ## Here you have a just completed order. Compare this to your
        ## self.ords list and take whatever actions you need.
        ## Cancel/create, whatever works.
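Here is a minimal, framework-free sketch of the bookkeeping the answer describes (the names below are hypothetical stand-ins, not backtrader API): keep a reference to the live take-profit order, and when the second buy layer fills, cancel the stale sell and re-issue it at the average entry price plus 5%.

```python
class OrderBook:
    """Toy stand-in for a broker: tracks live orders by reference id."""
    def __init__(self):
        self._next_ref = 0
        self.live = {}

    def submit(self, side, price):
        self._next_ref += 1
        order = {"ref": self._next_ref, "side": side, "price": price}
        self.live[order["ref"]] = order
        return order

    def cancel(self, order):
        # Remove a previously submitted order from the live set.
        self.live.pop(order["ref"], None)

book = OrderBook()
fills = []           # entry prices of completed buy layers
take_profit = None   # the current take-profit sell order

def on_buy_filled(price):
    """Cancel-and-replace the take-profit each time a buy layer completes."""
    global take_profit
    fills.append(price)
    if take_profit is not None:
        book.cancel(take_profit)            # drop the stale sell order
    avg_entry = sum(fills) / len(fills)
    take_profit = book.submit("sell", avg_entry * 1.05)

on_buy_filled(90.0)  # first layer: sell placed at 90 * 1.05
on_buy_filled(80.0)  # second layer: old sell cancelled, new one at avg * 1.05
```

In backtrader terms, the same logic lives inside notify_order: filter for order.Completed, and pass the order object you saved when you placed it to self.cancel(...) rather than a ref attribute.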
https://community.backtrader.com/topic/4074/cancel-order-example/2
CC-MAIN-2022-05
refinedweb
343
65.83
Hi ~ As you can see from the code below I am in school for programming. I have come across this homework assignment a few times on the site already however the requirements for mine are different. I cannot use arrays. Three functions - I only have two coded. The void calcAverage prototype is not causing an error and the program compiles and runs, asking the user for the five test scores with no problem. I cannot find the error in passing the 'lowest' score value. Upon the user (me) entering the five scores, the lowest score value is always 0. I have been working on this darn thing for a week and a half, think I have it figured out but this is stumping me. Why the heck isn't my lowest score of five entered showing on output? I have not added the third function because I wanted to test this first. Any help will be very appreciated!! Thanks!

#include <iostream>
using namespace std;

void getScore (int &);
int findLowest (int);
void calcAverage(int);

int main ()
{
    static int refscore, lowest;
    for(int count = 1; count <= 5; count++)
    {
        getScore(refscore);
        int findLowest(lowest);
    }
    cout << "The lowest score is: " << lowest << endl;
    system("pause");
    return 0;
}

void getScore(int &refscore)
{
    if(refscore >= 0 && refscore <= 100)
    {
        cout << "Please enter a test score: ";
        cin >> refscore;
    }
    if(refscore < 0 || refscore > 100)
    {
        cout << "Invalid Entry. Please enter a test score ";
        cout << "between 0 and 100. ";
        cin >> refscore;
    }
}

int findLowest(int lowest)
{
    static int refscore;
    getScore(refscore);
    lowest = refscore;
    if(refscore > lowest)
        lowest = lowest;
    return lowest;
}
https://www.daniweb.com/programming/software-development/threads/347751/cannot-find-value-error
CC-MAIN-2016-50
refinedweb
265
61.56
PART TWO: RADAR SYSTEMS. PRINCIPLES OF OPERATION OF THE RADAR SYSTEM. I. GENERAL. RADAR is a coined word. It means RA-dio Detection A-nd R-anging. Air-borne radars perform three main functions: 1. They detect the presence of targets. 2. They indicate relative bearing of the target from the aircraft. 3. They indicate in nautical miles the range of the target from the aircraft. Certain basic radio phenomena contribute to the function of radar: 1. Radio waves generated at ultra-high frequencies have practically the same transmission properties as light. They travel from the transmitting antenna out to the horizon, and beyond, in straight lines on a line-of-sight principle. See figure 2-1. 2. The employment of ultra-high frequencies permits the use of relatively small, compact antennas which can be designed to concentrate the transmitted radio energy in a narrow path or beam: thus directional transmission and reception is obtained. See figure 2-2. 3. Objects which are in the path of beamed ultra-high-frequency transmissions reflect or reradiate a small portion of the transmitted energy back to the source. See figures 2-3 and 2-5. Together, these three phenomena form the fundamental concept on which the operation of all radars is based. To achieve its purpose of denoting the presence, bearing and range of objects (targets), a radar system employs certain basic units: 1. The transmitter.--Unlike conventional continuous wave radio transmitters, the radar transmitter generates ultra-high-frequency radio energy which is transmitted in short bursts or pulses. The pulsing cycle of a radar transmitter would look something like that shown in figure 2-4. Each pulse is on the order of one or two microseconds in duration, with relatively long rest or quiescent periods in between pulses during which the reflected echoes are received.
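The pulse-timing arithmetic in this section is easy to verify; the sketch below is a modern illustration (not part of the manual), using the manual's own round figure of 162,000 nautical miles per second for radio-wave speed:

```python
# Pulse timing for a radar like the one described: 500 pulses per
# second, each pulse 2 microseconds long, with echoes received
# during the rest period between pulses.
PULSES_PER_SECOND = 500
PULSE_WIDTH_US = 2.0

period_us = 1_000_000 / PULSES_PER_SECOND   # 2,000 us from pulse to pulse
rest_us = period_us - PULSE_WIDTH_US        # 1,998 us of listening time

# 162,000 nautical miles per second = 0.162 nautical miles per microsecond.
NMI_PER_US = 162_000 / 1_000_000

def echo_range_nmi(round_trip_us):
    """Target range from an echo's round-trip time (out and back, so halved)."""
    return round_trip_us * NMI_PER_US / 2.0
```

At 500 pulses per second, the rest period works out to 1,998 microseconds, matching the text; an echo returning 200 microseconds after transmission corresponds to a target about 16.2 nautical miles away.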
For instance, a transmitter which pulsed 500 times a second with 2-microsecond pulses would have a rest period in between pulses of 1,998 Figure 2-2.--Depending on the design of the antenna, the transmitter radio energy can be directed in a narrow lobe or beam. Figure 2-1.--Radio waves generated at ultra-high frequencies have practically the same transmission properties as light waves. They travel from the source out to the horizon and beyond on a line-of-sight principle. --11-- Figure 2-3.--In this diagram a single pulse is shown, the shaded arrows in the beam denoting the progressive positions of the outward-going pulse as it travels from the directive antenna. microseconds. Because radio waves travel at a constant velocity of 162,000 nautical miles per second, the time (during the rest period) required for a pulse to travel out to the target and return as a reflected echo would depend upon the distance of the target from the transmitter. 2. Directional antennas.--The pulses generated by the transmitter are fed to a directional antenna which transmits them into space in a narrow beam. An object in the path of the beam will reflect back a small portion of the pulse energy. Early types of radar employed fixed directional antennas which transmitted the beam of radio energy out to either side of the aircraft, or straight ahead. Later equipments employed a movable type of antenna which could be rotated manually from abeam to ahead. The newer types employ an electrically driven antenna which either oscillates through an arc of 150° ahead of the aircraft or rotates through 360° of azimuth. 3. Receiver-indicator system.--Back on the aircraft, the reflected pulse energy is received by the antenna, amplified in the receiver and passed along to an indicator unit containing a special timing device-a cathode-ray tube- which translates into terms of range the elapsed time that it took for the transmitted pulse to go out to an object and return to the radar as a reflected echo. 
One the cathode-ray tube screen (scope), the range of the target is indicated by the position of the target echo on a calibrated time trace. The bearing from which an echo returns is determined by utilizing the directional characteristic of the antenna system, and synchronizing it with the scope display. II. TYPES OF SCOPE PRESENTATION The diagrams, figures 2-3 and 2-5, show the relationship of the indicator to the other basic Figure 2-4.--Radar transmitters send out pulses of radio-frequency energy many times per second. These pulses vary in duration from-about ½ to 2 microseconds (millionths of a second). Depending upon the range for which the equipment is designed to operate, the quiescent or inoperative period between pulses is long enough. --12-- Figure 2-5.--When the outward-going pulse strikes an object in its path, a small amount of radio energy is reflected back toward the source where it will be displayed on the indicator as a visual signal. radar system units. Each time that the trans-milter furnishes a pulse to the antenna, a minute quantity of pulse energy is fed to the indicator where it is used as a timing pulse to cause a spot of light, visible on the cathode-ray tube screen, to form a time trace. There are two methods in use by which a target echo is made to register its presence on the time trace. One method is called the "blip type" and the other is called the "spot type" indication. A. BLIP TYPE. 1. L-scan.--Early type airborne radar equipments, such as the ASB, employ what is termed an L-type scope presentation where the spot of light, in synchronization with (he pulsing of the transmitter, appears lirst at the bottom center of the screen and moves upward vertically as the pulse travels outward through space. The recurrence and movement of the spot is so rapid that, to the eye, a vert ical bar of light, called the time trace, is formed. It is calibrated in terms of range. See figure 2-6. 
Received radar echoes cause the rapidly moving spot of light to detour momentarily in its upward path to form jagged pointers of light out from the side of the trace. This is called, tlie blip type of indication, the blip appearing at a distance upward from the bottom of the trace proportional to the time that it took for the transmitted pulse to go out from the antenna, strike a target, and return to the radar in the form of a week, reflected target echo. The bearing of the indicated target is determined by noting the position of the antenna when maximum lateral length of blip is obtained. 2. A-scan.--Some equipments (ASG, AN/APS-2, AN/APS-15, etc.) employ a monitoring A-scope which is similar in operation to the L-type scope excepting that a horizontal trace is employed and signal voltages produce blips which appear up from and along the trace. B. SPOT TYPE. Later air-borne radar equipments utilize spot-type scope displays which Figure 2-6.--The L-type scope is the ASB indicator. A rapidly recurring spot of light forms a vertical trace. Signal voltages applied to deflecting plates within the cathode-ray tube cause the spot to momentarily detour literally, producing the target blips. --13-- Figure 2-7.--This is the PPI-type scan. Rotating in step with the antenna a radial trace has portions of its length intensified by the reception of reflected pulses causing a pattern to be "wiped' on the face of the PPI tube. For purposes of illustration the brilliance of the trace has been increased in the scope presentation above. In actual practice the brilliance of the trace is brought up to a point to where it is barely visible. include the PPI (circular map), the B-scan (square map), the H-scan and O-scan (double dot), and the G-scan (growing wing) types of scope presentation. Some radars use combinations of these scope presentations. (AN/APS-4 uses B-scan and H-scan: AN/APS-6 uses B-scan and G-scan.) 1. 
PPI-scan.--In radar equipments employing the PPI type of presentation (fig. 2-7), the dish-type antenna revolves through 360° of azimuth. On the scope screen of the indicator, the rapidly recurring spots of light form a radial time trace which rotates in synchronism with the rotation of the antenna. Target echoes are displayed on the screen as a momentary intensification in the brilliance of a portion of the revolving trace. They appear at points outward from the center of the screen, along the trace, to indicate azimuth and range of the target. Because of the persistence of the screen's fluorescent coating, the target images remain in view, diminishing in brilliance after the trace has swept past. (This type of spot presentation is found in such airborne radars as the AN/APS-2F and AN/APS-15.) 2. B-scan.--In the B-scan or square map type (fig. 2-8), a dish antenna mechanically sweeps from side to side through a 150° arc to scan a fan-shaped area ahead of the aircraft. On the scope screen of the indicator, the rapidly recurring spots of light form a vertical time trace which moves from side to side across a rectangular screen in synchronism with the antenna movement. The echo from a ship or other relatively small target will cause a momentary intensification in the brilliance of a small portion of the trace to produce a spot of light. The distance at some point upward from the bottom of the trace at which this spot appears is proportional to the target's range. Echoes from land masses or large targets will cause a correspondingly greater portion of the trace to increase in brilliance during the period of scan. As the trace moves from side to side across the screen, its intensified portion "wipes" the target images on the scope in the form of a brilliant light pattern which, because of the relatively high order of persistence of the fluorescent screen coating, remains visible after the tract' has swept past. 
(This type of spot presentation is found in such air-borne radars as the AN/APS-3 and AN/APS-4.) Figure 2-8.--In the B-type scan a rectangular area is covered by the vertical trace which moves from side to side in step with the scanning movement of the antenna. Target areas produce reflected echoes which intensify a portion of the trace, forming a light pattern on the scope face. Because of the relatively high persistence of the fluorescent coating on the tube face, the pattern is retained momentarily after the trace moves by. --14-- B-scan, distortion.--In the B-scan, or square-map type of scope presentation, the fan-shaped area being scanned is represented on the scope screen as an illuminated rectangle. A fan-shaped scope screen could be provided but it would have the disadvantage of crowding targets at the apex of the triangle. Moreover, the crowding would become decidedly pronounced as the range to the target was closed, making definition of separate targets difficult if not impossible. By virtually spreading out the apex area and producing a rectangular display, the crowding effect is overcome; but it introduces two separate distortion effects for which mental correction must be made in interpreting scope patterns on the rectangular screen. The first distortion effect is particularly noticeable with large land-mass patterns or ships in column. See figure 2-9. Suppose that the range from the aircraft to the target dead ahead, "D," is 6 miles; the range to the other targets in column of ships would vary. To the "A" and "G," the range would be 10 miles; to "B" and "F," 8 miles; to "C" and "B," 6½ miles. On the scope, instead of being located across the screen in a straight line, "A" and "G" would appear at the 10-mile range, "B" and "F" at Figure 2-9.--At the top is shown a column of ships as they would appear under conditions of good visibility to an observer in an aircraft. Note the range and bearing to each of the ships. 
The lower half of the figure shows how these same ship targets would appear on scopes employing the B-scan display. Mental correction must be made for the apparent distortion introduced by the rectangular scan. Figure 2-10.--Because the fan-shaped area being scanned ahead of the aircraft is represented as a rectangular pattern, the radar images are distorted. Here a concave shore line whose various points are approximately equidistant from the aircraft is represented on the scope as a land-mass pattern whose shore line is practically a straight line. --15--. Figure 2-11.--When a shore line being scanned is irregularly convex as shown here, the scope pattern appears with an exaggerated curvature. Upon inspection, however, points of reference on the land area appear correctly with respect to range and azimuth on the scope pattern. Figure 2-12.--As shown in the sketch at the top, the bearing of the ship target alters as the range is closed. On the scope, because the apex of the fan-shaped area being scanned is in effect stretched out to form a rectangle, targets to either side of dead ahead as they are approached appear to rapidly drift off to the side of the scope. In figure 2-10, because the coast line being approached is concave, the radial distance to all points along the coast is about the same. Therefore, on the scope the coast line appears approximately as a straight line. A convex coast line, on the other hand, looks like that shown in figure 2-11. The second distortion effect is particularly noticeable with respect to the bearing of nearby targets as the range is being closed. Targets slightly off to either side of dead ahead appear to move rapidly outward to the edges of the rectangular screen during the approach. Figure 2-12 illustrates this point. 3. H-scan.--The AN/APS-4 is one of the types of air-borne radar equipments employing the H-scan (in addition to the B-scan). 
When this equipment is set for Intercept, a 6° beam from the antenna scanner is made to sweep from side to side over a rectangular area ahead of the longitudinal axis of the aircraft, 150° in azimuth and 24° in vertical plane. This back- --16-- Figure 2-13.--When set for Intercept the AN/APS-4 equipment produces a scope pattern consisting of two dots (an echo and shadow pip), formed by the lateral sweep of a double trace moving from side to side across the screen. and-forth motion takes place about 60 times a minute. On the rectangular screen of the scope, a single trace moves from side to side in synchronism with the azimuth sweep of the scanner. With respect to the scope presentation, the position of the intercepting aircraft is considered to be at the lower center of the screen. Another aircraft within the field of the beam will reflect echoes which are displayed on the scope screen as two spots of light, termed "pips." The pip at t he left is called the echo pip, its location on the scope screen denoting the target's range and azimuth. The pip to the right of the echo pip is called a shadow pip. Its angular position with respect to the echo pip denotes the relative elevation of the target. The two pips move about on the screen together. When the interceptor aircraft is maneuvered so that the echo pip lies on the central scribed azimuth line, the target is dead ahead. When the interceptor aircraft is maneuvered so that the shadow pip is aligned horizontally with the echo pip, the target is in line with the longitudinal axis of the interceptor aircraft. See figure 2-13. O. O-scan.--The O-type scan, whose scope presentation is somewhat similar to the H-scan, is employed in air-borne radar equipments especially designed for interception (fighter aircraft). In the O-type presentation, the scan or sweep of the antenna is produced by the combination of two movements of the spinner. One movement rapidly rotates the antenna dish 1,200 r. p. 
m.; the other movement sweeps the antenna assembly through 60° from dead ahead to the outer limits of the 60° angle and returns to center. This cycle occurs once every 2 seconds. As a result of these two movements, an outward-inward spiral motion is imparted to the antenna to sweep the 9° antenna beam spirally through a 120° conelike area ahead of the longitudinal axis of the aircraft. On the O-type scope screen this antenna action is translated into the movement of two vertical traces which start at the vertical center of the screen and move outward, then inward, in synchronism with the outward and inward spiralling motion of the antenna. On the screen the opening and closing movement of the two traces is referred to as the "barn door effect." Target echoes are of the "double-dot" presentation, similar to that obtained in the H-type scan. The azimuth position of the target is Figure 2-14.--In the O-type scan a spiralling antenna motion is translated on the scope screen into the movement of two vertical traces which start at the vertical center of the screen and move outward and inward to produce the pattern shown here. --17-- Figure 2-15.--The G-type or gun aim scan produces a spot of light when a target is within range of the conical antenna beam. The radial position of the spot of light denotes bearing in elevation and azimuth. Maneuvering the aircraft to put the spot in the center of the scope places the target dead ahead. As the range is closed the spot grows wings. When the wings touch the scribed markings the target is at firing range. indicated by the position of an imaginary line between the two dots. In addition to the double-dot presentation, a sea-return and altitude echo is displayed. The sea-return pattern, curtainlike in effect, represents the echo reflection from the area where the spiralling outer edge of the cone of energy strikes the surface of the sea ahead of the aircraft. 
The altitude mark is a straight brilliant line of light appearing upward from the bottom of the rectangular scope screen at a distance proportional to the aircraft's altitude. The O-type scope display is illustrated in figure 2-14. 5. G-scan.--The AN/APS-6 series of airborne radars employ three separate types of scope display: (1) the B-type scan for search, (2) the O-type scan for intercept, and (3) the G-type scan for gun aiming. After a target is brought to close range by the use of the intercept scope presentation, (O-scan) the AN/APS-6 series of radars may be set to produce still another type of scope presentation-the G-scan for gun sighting. When set for Sight, the antenna spinner movement is altered to eccentrically rotate the 9° beam, producing a 10° cone bore sighted with respect to the forward-firing guns. On the scope, the target is identified as a bright spot whose radial position with respect to the center of the screen denotes the relative azimuth and elevation of the target aircraft. When the interceptor aircraft is maneuvered to cause the spot of Light to rest in the center of the scope, the target aircraft is bore sighted dead ahead. As the range is closed, the spot grows wings whose tips, when they touch vertical lines scribed on the scale face, indicate that the target aircraft is within firing range. See figure 2-15. Figure 2-16.--A graduated plastic scale placed over the face of the L-type scope (left, above) provides a ready reference for estimating target range. In the B-scope (center) bright spots on the vertical trace produce horizontal range marks for judging range. On the PPI-scope (right, above) electronically generated bright spots on the trace leave range circles on the scope face. III. 
RANGE MEASUREMENT In all types of air-borne scope presentation (with the exception of the G-scan), it should --18-- be borne in mind that the time trace, whether it be stationary as in the L-type, revolving as in the PPI-scan, or laterally oscillating as in the B- and H-scan, represents a time-measuring device which is calibrated to indicate range in nautical miles. The L-scan uses a calibrated scale etched on a transparent plastic material which is placed over the face of the scope to assist in determining the range of a blip. In the other presentations (PPI. B- and H-scans), electronically generated bright spots of light appear along the length of the trace at accurately spaced intervals. In the B-scan, these bright spots on the trace activate the fluorescent coating of the scope so that as the trace moves hack and forth, it produces lines of light called range marks. In the PPI-scan, circular range marks are similarly formed by the spaced bright spots on the trace as it revolves in step with the antenna. See figure 2-16. (In the H-scan, range mark spots appear only at the extreme right vertical edge of the rectangular scope screen.) IV. FACTORS WHICH LIMIT ALL RADAR PERFORMANCE A. Variable factors. Maximum ranges claimed for particular radar equipments are not always obtained. Different conditions of flight, weather involving electrical disturbances, and other phenomena, will cause variations from day to day and from hour to hour in maximum ranges of detection. The following "variables'" affect the performance of all radar equipments: 1. Altitude.--Under normal conditions, the beam from the radar antenna travels along a line of sight to the horizon. The higher the altitude, the greater is the distance to the horizon: therefore, range is limited by the altitude at which the plane flies. The chart, figure 17, shows, in nautical miles, the approximate radar line-of-sight range to the horizon for the altitudes listed. 
The distances given apply to targets at sea levels; an increase in target height extends the line-of-sight range to the target. Figure 2-17.--Because radio waves are bent down slightly by the ionosphere their path is larger than the line-of-sight distance to the horizon. This graph illustrates the approximate line of sight and radar distances obtained at altitudes up to 30,000 feet. 2. Sea and ground return.--Although most of the ultra-high-frequency radio energy is consent rated by directive antennas into a narrow beam or lobe, some of this energy is projected downward from the aircraft where it strikes the land or sea below. Radar reflections from the land or sea directly below the aircraft return and show upon the scope screen as a diffused, irregular pattern close to the start of the trace. The area on the scope covered by such land or sea return depends largely on the altitude of the aircraft. At low altitudes, a small area is covered. Consequently, a small area about the start of the trace is covered by the returning echoes. At high altitudes, a correspondingly larger area of the scope is covered by the return because the stray radiation covers a larger area below the aircraft. Land areas below the aircraft usually cause stronger, larger returns because the ground offers more and varied reflecting surfaces. Areas of sea below the aircraft return weaker, fewer returns because of the relatively flat surface intercepted by the stray radiations. In a rough sea, however, the advancing front of waves offers good reflecting surfaces and. in --19-- addition to increasing the area of sea return on the scope, indicates generally the direction of the wind causing the high seas. 3. Target size, shape, and substance.--In general, large targets return echoes at greater ranges than small targets. A target that is broadside to the radar beam will return echoes from greater distances than a long but narrow target which is bow-on to the radar beam. 
The material of which the target is made has a decided effect upon the strength of the echo which it returns. Metal targets are detected at greater ranges than nonmetallic targets. A steel ship, for example, is detected at greater ranges than a wooden ship. 4. Fading.--Distant targets often fade in and out much like the fading of radio signals from a distant broadcasting station. Aircraft targets, in particular, usually fade rapidly on the radar scope, and reappear just as rapidly. One cause of this fading is the changing attitude of the reflecting surfaces of the target. 5. Nonstandard propagation of radio waves, trapping, ducts--a. General Features.--As a general rule, and in accordance with the accepted theory of wave propagation, it has been assumed that waves generated at frequencies above 30 megacycles travel from a transmitter outward along a long of sight and, with respect to radar, that targets in the line of sight will reflect a portion of these waves back to the source also along a line of sight. This has been and still remains the basic premise on which the explanation of the operation of radar is based. Reports on the variability of radar coverage show, however, that certain weather and atmospheric conditions prevailing along the transmission path may greatly modify the day-to-day and hour-to-hour normal radar range characteristics. In order to establish a reference for the purpose of investigating the effect of atmosphere on radio wave propagation, a so-called standard atmosphere has been postulated. The standard atmosphere is likely to be found in nature when the air is well mixed so that no unusual temperature or humidity gradients can exist. A standard atmosphere may be defined as that condition of the atmosphere in which the temperature and moisture content of the air decreases uniformly with height. In the standard atmosphere, the air temperature decreases with increasing altitude at the rate of 6.5° C. per kilometer from 15° C. at sea level. 
Although this condition is postulated as "standard," it is not necessarily normal at any particular location. The atmosphere is likely to be of standard composition when strong, gusty winds are blowing, because the turbulence created prevents both stratification of the air and establishment of nonstandard temperature and humidity gradients. Standard propagation is likely also when the water is colder than the air, or over land when cooling of the earth by radiation is limited by overcast skies. If rain is falling, the lower atmosphere in general will be saturated with water vapor, so that the existence of abnormal humidity gradients is prevented and propagation should be standard. In the standard atmosphere both the air temperature and the moisture content decrease uniformly with height above the surface of the --20-- earth, so that the index of refraction of the atmosphere also decreases uniformly. However, the atmosphere is subject to many changes. The temperature may, for example, first increase with height and then begin to decrease. Such a situation is called a "temperature inversion." More important, the moisture content may decrease markedly with height just above the sea. This latter effect, which is called a "moisture lapse," either alone or in combination with a temperature inversion, may produce a great change in the index of refraction of the lowest few hundred feet of the atmosphere. The altered characteristics of the atmosphere may result in an excessive bending of the radar waves that are passing through the lower atmosphere. In certain regions, notably in warm climates, excessive bending is observed as high as 5,000 feet. The amount of bending in regions above this height is almost always that of normal atmosphere. The atmosphere must be relatively calm in order to permit the existence of the conditions that produce this excessive bending. 
After a period of calm or of light breezes, the lower air may be stratified so that nonstandard propagation is likely throughout a wide area. There will be formed within this area a sort of duct or wave guide in which the radar waves can be trapped. The most remarkable effect of such trapping is the extreme ranges that have been obtained as a result of it. Numerous instances have been reported where the normal radar horizon range has been increased four or five times, while a few hours later, the same radar has failed entirely to detect targets clearly visible to the eye. The duct in the atmosphere may be formed along the surface of the sea or elevated above the earth. In either case, the coverage of the radar will be changed by the great extension of detection range for targets within the duct. In figure 2-18, one possible type of deformation of a radar coverage pattern is illustrated. The duct acts very much like a wave guide that is used as the transmission line in the radar set. The humidity gradient is such that the rays from the radar antenna are bent an excessive amount, and they are trapped in a zone near the surface of the water. Since the effect of the duct is to guide the radiated energy around the curvature of the earth and back again to the antenna, the radar is able to detect the ship at a range much greater than normal. If the moisture content of the air increases with height, the radar waves may be bent up instead of down. Nonstandard propagation of this sort reduces the radar coverage instead of increasing it. b. Effect on radar performance.--The position of the radar antenna relative to the duct in the atmosphere is one of the most important factors in controlling the effect of trapping. The radar pulse can be trapped and carried out to very distant targets and back to the radar if the antenna lies within the duct.
If the antenna lies below the duct, trapping may take place, but only if the radiated rays from the antenna enter at a very small angle. For this reason, elevated ducts that are very high will not have any direct effect on radar performance. On the other hand, if the antenna is a small distance above the top of the duct, blind zones may be present in the low-level portion of the coverage pattern that do not exist during standard propagation conditions. However, if the antenna is high above the top of the duct, as an airplane flying over a ground-based duct, no effect on the air-borne radar coverage will be apparent. Since the ducts that are most important to the operation of shipboard radar are ground based, radars that have relatively low antenna heights are the most likely sets to experience trapping. Experiments and observations of shore radar installations in both the United States and England have indicated that low sites are rather generally affected by trapping while high sites experience the phenomenon relatively infrequently. For some time after the discovery of trapping it was thought that the long ranges observed resulted from the diversion of a large amount of the radiated energy into the duct. Consequently, it was feared that the existence of trapping would cause a great decrease in coverage in the area above the duct. However, the conditions that exist in the most common type of trapping do not support this presumption.
--21--
Figure 2-19.--Variation of echo amplitude with constant radar performance.
In spite of the great increase in ranges in the duct, the amount of energy trapped is small compared to the total energy radiated in the pulse. The increased range apparently is caused by the increased transmission efficiency that exists with the duct. In many cases, coverage will be reduced for targets that are just above the duct. This reduction is caused by the wave-guide action of the trapping layer.
The radiated pulse is contained within the duct because the refractive power of the atmosphere within this region is such that the radiated rays are bent downward. Directly above the duct there may then be little or no energy, and targets flying there may not be detected. However, energy may reach targets that are well above the duct by traveling along paths that enter the trapping region at angles too great to be trapped. The wavelength that can be trapped in the duct formed by nonstandard propagation conditions varies with the height of the duct. If the duct is approximately 400 feet high, all radar frequencies can be trapped within it. However, if the duct is only 100 feet high, only microwave energy will be affected. A duct almost always exists within perhaps the first 20 feet above the water, but this is not yet of any practical use since only K-band energy could be trapped within such a narrow region. There is fairly conclusive evidence of the rather general existence of a surface duct 50 to 65 feet high throughout the trade wind area of the Pacific Ocean. This duct is capable of trapping S-band and shorter wavelengths provided the antenna lies within the duct. Because of the characteristics of the ducts that are usually formed, radar frequencies below approximately 250 megacycles are the least likely to be trapped. The amount of refraction, that is, the amount of angular deflection of the rays, is very small and very rarely will exceed 1°. Such bending does not affect radar operation except in regions where the angle between the radiated ray and the horizontal is very small. Only low-angle search is affected by meteorological conditions; performance of the radar is rarely affected by weather for targets above approximately 1° position angle. A very serious operational consequence of trapping is the misleading of radar operators as to the over-all performance of the equipment. 
Long-range echoes caused by trapping have frequently been assumed to indicate good condition of the equipment when precisely the --22-- opposite may be the case. For example, figure 2-19 shows a part of a record of echo amplitude taken during an investigation of radar propagation. The performance of the radar was maintained at a constant level by the use of an echo box. The target was 30 miles away from the radar, with the path of the pulses passing over sea. Aside from the rise and fall of the tide, the variations in echo strength are due to changing characteristics of the targets or of the propagation pathway. The tremendous variation shown in figure 2-19 would certainly preclude the use of this target as a standard to check the performance level of a radar. At one point in the curve, the echo changed two-hundred-thousand-fold (53 db) within 2½ hours. However, the phenomenon of trapping does not invalidate the measurement of echo height on nearby targets as a criterion of over-all set performance, since the echoes from objects well within the optical horizon are not affected appreciably by propagation variations. Under conditions where the security of radar transmissions is involved, the possibility of trapping should be constantly kept in mind. When a duct is present, the enemy can intercept both radar pulses and VHF communications at far greater ranges than normal. When trapping is known to be present, and a choice of frequencies is available, it may be better to choose the lowest band since the high frequencies are the more likely to be trapped. Thus, decisions as to when to employ radar silence must be modified by consideration of propagation conditions. Trapping may be the cause of apparently inexplicable failure in communications. Elevated ducts do not have much effect on ship-to-ship or ground-to-ship communications unless the duct is at low level.
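The two-hundred-thousand-fold figure quoted above can be checked with the usual decibel relation for power ratios. A minimal sketch (the function name is ours, not the manual's):

```python
import math

# Decibel check of the fluctuation quoted above: a power ratio R
# expressed in decibels is 10 * log10(R).
def power_ratio_to_db(ratio):
    """Convert a power ratio to decibels."""
    return 10.0 * math.log10(ratio)

print(round(power_ratio_to_db(200000)))  # 53 -- the "two-hundred-thousand-fold (53 db)" change
print(round(power_ratio_to_db(1000)))    # 30 -- a one-thousand-fold change in echo power
```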
However, a plane flying within an elevated duct may have difficulty communicating with a plane outside the duct or with a distant ship because the duct may act as a barrier. When the temperature inversion and moisture lapse are very steep over a relatively thin layer of the atmosphere, radio waves may be reflected directly from the bottom of the duct. Under this condition, which probably will be encountered only rarely, the lower frequency radar and communications waves may be reflected repeatedly between the duct and the sea, and so carried long distances, even without trapping taking place. There will be skip distances associated with this type of propagation because the nature of the reflection is such that the waves are returned to earth only at intervals. If a plane is flying above a thin, sharply bounded duct of this type, it may be impossible for communications to be established between the plane and the ship. In rare cases, a phenomenon of this sort may also be responsible for failure of IFF. When communications from a plane fail, or the plane cannot make contact on a navigational aid, the pilot may be able to reestablish contact by changing altitude to get either above or below the duct. When trapping causes echoes to be returned from targets at long ranges, the pips are likely to fluctuate far more than those from targets at short range. This fading is caused by changing propagation conditions within the duct. Such fading is of large amplitude, perhaps involving a change in echo power of as much as one-thousand-fold, and the echoes vary over a period of approximately 15 minutes. Violent fading of echoes appears to be a characteristic feature of trapping conditions that may aid the operator in realizing when such conditions exist. If the water vapor content of the air increases with elevation, the radar waves are bent up instead of down. This condition, which is referred to as substandard, can exist in some kinds of fog or when fog is about to form.
Although not enough is yet known about radar propagation in fogs to state its full effect, it appears likely that nearly all fogs produce substandard conditions. The obvious danger of this condition is that the fog prevents ordinary optical search, and at the same time reduces the radar detection range on surface vessels and low-flying aircraft. Unfortunately, there is very little that the operator can do to combat the effect of fog except to run his set at the greatest possible sensitivity. Airplanes often fly very low to avoid detection by radar. However, if a duct is formed near the surface of the water, such evasive action will probably be unsuccessful. If the --23-- presence and height of the duct were known, better protection against radar detection would be furnished if the planes were to fly just above the top of the duct. The changing propagation conditions that radar waves can encounter, then, can have considerable effect on radar performance. To summarize, the effect of trapping on a radar may be to:
Increase the range of detection for airplanes flying within a duct.
Reduce the range of detection for airplanes flying just above a duct.
Reduce the range of detection for low-flying planes in some types of foggy weather.
Modify the height at which airplanes should fly to avoid detection by enemy radar.
Cause errors in height finding for targets below position angles of 1°.
Increase the extent of clutter from sea return, and thus reduce the operational efficiency.
Increase the range at which a navigational aid can be effective.
Increase the range at which radar signals can be heard.
Modify the degree of jamming either suffered or inflicted.
Weather effects cannot be blamed indiscriminately for variable performance of radar. In general, inadequate adjustment or incipient electrical failure will cause a far greater decrease in range than any variation in the propagation conditions.
For example, failure to detect a high-flying airplane at short range cannot be attributed to nonstandard propagation; trapping affects the detection of targets only at low angles above the horizon. c. Meteorological factors that cause trapping.--The atmospheric conditions which allow standard propagation to take place are common. The peculiar structure of the atmosphere that produces trapping, especially for frequencies of 3,000 megacycles and higher, also occurs fairly often in many parts of the world. Several types of meteorological conditions can produce the temperature and humidity gradients necessary for trapping to take place. Warm continental air blowing over a cooler sea leads to the formation of a duct by causing a temperature inversion as well as by evaporation of water from the cooler sea into the lower levels of the warm dry air. The base of such a duct is usually the sea surface with the trapping region extending several hundred feet upward. At higher latitudes--above 25° or 30°--this type of trapping is most prevalent along the eastern shores of continents. If it is present in the lower latitudes, the duct will be formed on the western coasts. This distribution with latitude results from the normal direction of the wind, which must be off-shore to produce the effect. Ducts of this sort will form only when there is a distinct temperature difference between the sea and the air blowing from the land. Hence, conditions are most favorable for trapping in the summer months. The duct is usually from 500 to 600 feet high, and it tends to remain level for a distance of 100 to 200 miles out to sea. Near the coast, trapping will be found to be strongest just after noon, and weakest just before dawn. Fog usually is not associated with the meteorological conditions that produce this form of trapping, but at times a surface fog will be observed 5 to 10 miles off shore.
Over the open ocean, a surface duct such as that in A or B, figure 2-20, may be formed by cool air blowing over a warmer sea. There is no temperature inversion associated with this phenomenon, and the entire effect is caused, apparently, by the evaporation of water into the lower levels of the atmosphere. Ducts of this sort are often created by easterly winds, such as the trade winds, that have blown for a long distance over the open sea. The height of the duct increases with the wind speed, and at the higher wind speeds typical of the Pacific trade belt--10 to 20 knots--ducts 50 to 60 feet high occur quite generally. These ducts seldom attain the height and intensity necessary to entrap the lower frequencies, but S- and X-band radars that have antennas within the duct show better range performance for the detection of surface vessels and low-flying aircraft than can be predicted from consideration of the characteristics of the radar alone. These ducts are important because they extend for long distances. An elevated duct as in C or D, figure 2-20, may form in an area of high barometric pressure --24-- because of the sinking and lateral spreading of the air, which is termed "subsidence." When the air is warm and dry and the subsidence takes place over the sea, water is evaporated into the air, forming a moisture gradient that leads to the formation of a duct. Such ducts are always formed above the sea, with the base of the trapping layer ranging in elevation from a few thousand to 20,000 feet. Subsidence trapping can nearly always be found in the tropics; the ducts so produced usually are low and strong off the western coasts of continents in the trade winds latitudes, and high and weak near the equator. In areas where a monsoonal climate is found, subsidence trapping is weak or nonexistent in the moist, onshore, summer monsoon, and the duct is low and trapping strong during the dry, off-shore, winter monsoon.
When a duct is formed as the result of subsidence, the strength of the trapping varies throughout the day, being weak in the mid forenoon, and strong after sunset. Other meteorological conditions that may produce trapping are cooling of land at night by radiation and the mixing of two masses of air, as at a warm or cold front. The ducts formed by these effects are likely to be of such limited extent that they are unable to modify radar propagation by any appreciable amount. d. Prediction of nonstandard propagation.--In spite of the fact that the relation of weather to the production of trapping conditions is not fully understood, enough is known to permit a meteorologist to make a fairly reliable prediction. Considerable equipment is required to measure the several variables that must be known to determine the structure of the atmosphere, and rather specialized meteorological skills are needed to interpret the data. Since neither the skills nor the equipment is generally available to the fleet, reliable predictions are difficult to make on board ship. However, even in the absence of specialized equipment and personnel, it may sometimes be possible to predict the formation of ducts from observation of weather conditions, coupled with simple measurements that can be made on board any ship. Such an estimate cannot hope to be highly reliable at present, but continuing research on the problem may ultimately reveal a fairly simple relationship between weather and trapping that will allow propagation conditions to be evaluated routinely. For a number of reasons the meteorological conditions in a region of high barometric pressure are favorable for the formation of ducts.
Among the favorable factors are: subsidence, which creates temperature inversions, and which occurs in areas where the air is very dry so that evaporation can take place from the surface of the sea; calm conditions that prevent mixing of the lower layers of the atmosphere by turbulence, allowing thermal stratification to persist; and clear skies which permit nocturnal cooling over land.
Figure 2-20.--Coverage diagram showing the existence of ground-based and elevated ducts.
--25--
The conditions in a barometric low, on the other hand, generally favor standard propagation. A lifting of the air, the opposite of subsidence, usually occurs in such regions and it is accompanied by strong winds. The combined effect of these factors is to destroy any local stratification of the atmosphere by a thorough mixing of the air. Moreover, the sky is usually overcast in a low-pressure area, and nocturnal cooling is therefore negligible. Very often rain falls in a low-pressure area, and the falling drops of water have the effect of destroying any nonstandard humidity or temperature gradients that may have been established. In all of the weather conditions that produce trapping, the atmosphere must be sufficiently stable to allow the necessary stratification of the atmosphere to be established and to persist. Thus, continued calm weather or moderate breezes are necessary. It must be emphasized, however, that even if weather conditions may favor the formation of ducts, they do not always produce them. In terms of the readily observable phenomena, the weather conditions that may favor trapping are:
A moderate breeze that is warmer than the water, blowing from a continental land mass.
Clear skies, little wind, and high barometric pressure.
A cool breeze blowing over the open ocean far from large land masses, especially in the tropical trade wind belt.
Smoke, haze, or dust that fails to rise, but spreads out horizontally, indicating quiet air in which a temperature inversion may exist.
When the air temperature at bridge level on a ship definitely exceeds that of the sea, or when the moisture content of the air at bridge level is considerably less than that just above the water, and the air is relatively calm. Although trapping conditions can occur at any place in the world, the climate and weather in some areas make their occurrence more likely. In some parts of the world, particularly those possessing a monsoonal type of climate, variation in the degree of trapping is mainly seasonal, and enormous fluctuations from day to day may not occur. In other parts of the world, especially those in which low barometric pressure areas recur often, the extent of nonstandard propagation conditions varies considerably from day to day, even during the season of greatest prevalence. Even though the geographical and seasonal aspects of trapping are not yet so firmly established that a map of the world can be drawn with trapping areas reliably delineated, it is possible to make a general summary. (1) Atlantic coast of the United States.--Along the northern part of this coast, trapping is common in summer, while in the Florida region the seasonal trend is the reverse, with a maximum in the winter season. (2) Western Europe.--On the eastern side of the Atlantic, around the British Isles and in the North Sea, there is a pronounced maximum of trapping conditions in the summer months. (3) Mediterranean region.--Available reports indicate that the seasonal variation is very marked, with trapping more or less the rule in summer, while conditions are approximately standard in winter. Trapping in the central Mediterranean area is caused by flow of warm dry air from the south (sirocco) which moves across the sea and thus provides an excellent opportunity for the formation of ducts. In the wintertime, however, the climate in the central Mediterranean is more or less a reflection of Atlantic conditions and hence it is not favorable for duct formation.
(4) The Arabian Sea.--The dominating meteorological factor in this region is the southwest monsoon that blows from early June to mid-September and covers the whole Arabian Sea with moist equatorial air up to considerable heights. When this meteorological situation is fully developed, no occurrence of trapping is to be expected. During the dry season, on the other hand, conditions are very different. Trapping is then the rule rather than the exception, and on some occasions extremely long --26-- ranges, up to 1,500 miles, have been observed on P-band radar on fixed echoes. When the southwest monsoon sets in early in June, trapping disappears on the Indian side of the Arabian Sea. However, along the western coasts, conditions favoring trapping may still linger. The Strait of Hormuz is particularly interesting as the monsoon there has to contend against the shamal from the north. The strait itself lies at the boundary between the two wind systems, and a front is formed with the warm, dry shamal on top and the colder, humid monsoon underneath. As a consequence, conditions are favorable to the formation of an extensive duct which is of great importance to radar operation in the Strait of Hormuz. (5) The Bay of Bengal.--The seasonal trend of trapping conditions is the same as in the Arabian Sea, with standard conditions occurring during the summer southwest monsoon, while trapping is found during the dry season. (6) The Pacific Ocean.--This region appears to be one where, up to the present, least precise knowledge is available. However, there seems to be definite evidence of the frequent occurrence of trapping around Guadalcanal, the east coast of Australia, and around New Guinea. Observations along the Pacific coast of the United States indicate frequent occurrence of trapping, but no clear indication of its seasonal trend is available.
The meteorological conditions in the Yellow Sea and the Sea of Japan, including the island of Honshu, are approximately like those of the northeastern coast of the United States. Therefore, trapping in this area should be common in the summer. Conditions in the South China Sea approximate those off the southeastern coast of the United States only during the winter months, when trapping can be expected. During the rest of the year, the Asiatic monsoon modifies the climate in this area and no data are available as to the prevalence of trapping during this time. The trade winds in the Pacific lead to the formation of rather low ducts quite generally over the open ocean. 6. Weather.--Since moisture has substance, and hence reflects radio-frequency energy, the radio-frequency pulses of a radar beam are capable of detecting the presence of rain, or dense, electrically charged clouds. The image as presented on a scope appears as a strongly illuminated area of irregular shape. The intensity of cloud echoes as indicated by a radar set depends upon:
The total amount of visible water per unit volume of the atmosphere.
The size of the water droplets.
The frequency of the radar set.
a. General.--(1) The relationship of the intensity of a cloud echo to the intensity or development of the meteorological condition is at present known only qualitatively. (2) When cloud formation is used simultaneously with the current synoptic information of the same area, an estimate may be made of the conditions aloft. For tactical purposes, aircraft may be directed to avoid weather conditions hazardous to flight operations within the range of the set. In many cases a check is available on the development of storm conditions.
Figure 2-21.--Radar echo from thunderstorm of the convective type shown on the PPI. Radar is adjusted for 40-mile range and concentric lines are 5-mile markers. Irregular bright area at azimuth 190 and 20-mile range is thunderstorm. Note bright, solid appearance of echo and distinct boundary. Thunderstorms may often be identified as such by tipping parabola upward and scanning in elevation to disclose vertical structure of area from which echo is coming. This is readily computed from range and elevation angle data. In this case scanning in elevation disclosed a vertical extent of this storm of 40,000 feet, probably exaggerated to some extent by the beam width. Summer thunderstorms, however, give radar echoes to great heights in the atmosphere, thus readily distinguishing them from echoes from cumulus congestus clouds and other phenomena.
--27--
Figure 2-22.--PPI presentation of radar echoes from cold-front thunderstorms. Radar is adjusted to 100-mile range. Range markers are at 20-mile intervals; 80-mile marker is farthest visible. Cold-front thunderstorms appear as line of bright echoes in northwest quadrant. Note ready distinction from irregular distribution of convective type of thunderstorm shown in figure 2-21. Radar was located at Cambridge, Mass., and photo was taken July 22, 1943, at 5 p. m. Because storm echoes may be seen so much farther than normal land targets (due to their great vertical development) it is important to search from time to time with the long-range adjustment of the system in order to pick up the echo at the earliest possible moment.
b. Identification of thunderstorms.--(1) The echo of a thunderstorm is one of the most easily identified signals detected. When examined on the PPI, it appears as a bright, dense, central area with indistinct boundaries. Because of the vertical extent of thunderstorms, the cloud echo will be indicated at higher elevation angles than targets at the same range. The maximum angle of elevation at which the cloud echo is received and the distance is a rough measure of the height or vertical structure of the thunderstorm.
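The height estimate described above reduces to simple trigonometry on slant range and the maximum elevation angle of the echo. A minimal sketch: the function name, the use of nautical miles, the 18° angle in the example, and the flat-earth approximation (refraction and earth curvature ignored) are our assumptions, not the manual's:

```python
import math

# Flat-earth sketch of the echo-top height computation: height from the
# range of the echo and the maximum elevation angle at which it is still
# received. Refraction and earth curvature are ignored (assumption).
FEET_PER_NAUTICAL_MILE = 6076.0

def echo_top_height_ft(range_nmi, elevation_deg):
    """Approximate echo-top height in feet for a given range and elevation angle."""
    return range_nmi * FEET_PER_NAUTICAL_MILE * math.tan(math.radians(elevation_deg))

# An echo topping out at roughly 18 degrees elevation at 20 miles works out
# near the 40,000-foot vertical extent cited for the storm in figure 2-21.
print(echo_top_height_ft(20.0, 18.0))
```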
(2) Air mass or convective thunderstorms are scattered in a random manner; the area covered by an individual thunderstorm is usually of the order of several square miles. This type of storm moves with the velocity and direction of the general circulation. (3) Orographic thunderstorms are generally characteristic of the terrain for certain atmospheric conditions. This type of thunderstorm will usually show little movement. (4) Cold-front thunderstorms are usually arranged in a line and bear a marked resemblance to the frontal system. Warm-front thunderstorms may be scattered or they may form along a line. c. Identification of frontal conditions.--(1) General.--Frontal conditions are generally accompanied by thunderstorms, rain showers, rain, and various cloud systems. The greatest concentration of activity is usually located in the frontal zone. (2) Cold fronts.--An active cold front is easily located as a line of cloud echoes. The structure and activity of cold fronts can be estimated qualitatively by consideration of the intensity of the cloud echoes, spacing between areas of brightness, the area covered by each cloud, the vertical extent and the velocity of the front. Weak cold fronts are often poorly defined and may be missed entirely. (3) Warm fronts.--Echoes from warm fronts are extremely variable. Atmospheric conditions (thunderstorms, showers, general precipitation, etc.) associated with warm fronts usually cover a wide area. Occasionally these conditions are concentrated in the active frontal zone; however, the majority of cloud echoes from warm-frontal conditions will be hazy and will cover extensive areas of the order of hundreds of square miles. (4) Equatorial front.--Echoes reported from the equatorial front may have the characteristics of cold fronts or warm fronts or a combination of both. d. Identification of other atmospheric echoes--(1) Line squalls.--The cloud echo of a line squall will appear as a long, narrow, rapidly moving thunderstorm (fig.
2-23). (2) Showers.--Echoes received from showers are generally less intense than thunderstorm echoes and have a hazy structure. The central portion is not always of the greatest intensity. --28-- Cloud echoes have been reported when no precipitation was reaching the ground in the area of the cloud echoes. It is still uncertain as to the reasons why cloud echoes received from certain atmospheric conditions are not received again under an apparently similar set of atmospheric conditions at another time.
Figure 2-23.--Radar echo from precipitation in cumulus congestus clouds. System is adjusted for 20-mile range and markers show 4-mile intervals. It is noted that appearance on PPI is similar to thunderstorm echo. Distinction is made by scanning in elevation. In this case it was disclosed that vertical development extended to only 15,000 feet. The meteorologist can thus get a valuable hint as to lapse rate aloft and vertical extent of instability by scanning in cross section through cloud areas. While no precipitation may reach the ground in some of these cases the area would undoubtedly be a poor one to fly through with aircraft. Few land targets are seen on the scope because the beam is elevated above the horizon.
B. CONSTANT FACTORS. Regardless of the type of radar, there are certain "constant" factors which limit performance or impair to some extent the correct interpretation of screen patterns: 1. Power.--Power output from airplane generators, limitations of space, and the weight that can be carried by different types of aircraft affect the radiation power of air-borne radar and, in consequence, determine to a large extent the maximum ranges which can be attained. 2. Beam width versus trace width.--The width of the radar beam varies from about one-half of one degree in some equipments to as much as 30 degrees in others. The time trace on the map-type radars is a fine sweep line which denotes the orientation of the antenna in azimuth with respect to dead ahead.
Also, it represents a center line through the radar beam. However, echoes from any target lying within the beam width of the radar will register on the time trace. Thus, if a beam is, for example, 10° wide, the leading edge of the 10° beam will cause an intensification 5° too soon on the rotating time trace; likewise, the trailing edge of the beam will cause retention of the intensification 5° too long. See figure 2-24. The result is an elongated indication on the scope, giving the impression that the target is of greater length than is actually the case. This effect varies with range.
Figure 2-24.--The size of the target on the scope is no reliable indication of relative target size. This diagram shows how the beam width tends to distort target size (length).
3. Range resolution and diffusion of light.--Depending on the angle at which a radar beam strikes a target, the apparent size of an object --29-- on map-type presentations is increased by a diffused fringe of light which surrounds the scope presentation of all targets. On the 20-mile scale, this extra illumination will cause an error of about one-tenth nautical mile. The cumulative effect of these errors results in a slight apparent increase in the dimensions of land and other targets, and thus partially obscures certain identifying characteristics of size, shape and depth. In some cases, this diffusion is sufficient to mask or fill in scope images of channels and rivers. The condition, however, can be cleared up considerably by the intelligent manipulation of the tilt control of the antenna spinner. None of the above factors seriously affects the accurate computation of range and does not attain major importance unless the radar is to be used to bomb cities and islands where sharp definition is necessary for the selection of the proper targets. Greater definition has been attained in the newer equipments by the use of a narrow beam width, a short pulse length, and operation within the microwave frequencies.
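The beam-width distortion described under paragraph 2 above can be sketched numerically. The function names and the small-angle approximation for the linear smear are ours, for illustration; they are not formulas from the manual:

```python
import math

# Sketch of the beam-width effect: an echo paints the scope for the target's
# own angular width plus the full beam width, because the leading edge of the
# beam strikes the target half a beam width early and the trailing edge
# lingers half a beam width late.
def apparent_azimuth_extent_deg(target_extent_deg, beam_width_deg):
    """Angular extent painted on the scope by a target of given angular size."""
    return target_extent_deg + beam_width_deg

def elongation_nmi(range_nmi, beam_width_deg):
    """Approximate linear elongation (nautical miles) of the image at a given range."""
    return range_nmi * math.radians(beam_width_deg)

print(apparent_azimuth_extent_deg(0.0, 10.0))  # 10.0 -- a point target through a 10-degree beam
print(round(elongation_nmi(20.0, 10.0), 2))    # roughly 3.5 miles of smear at 20 miles
```

This makes concrete why the effect varies with range: the angular smear is constant, so the linear smear grows in direct proportion to the range.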
4. Maintenance.--Probably the most important factor which limits the ability of any piece of electronic gear to function at top efficiency is the level of systematic maintenance which the system receives. The regularity with which preflight and other periodic performance checks are made will do much to insure that not only are the maximum number of available airborne electronic installations functioning but that each one is operating at a degree of efficiency which will allow tactical commanders to expect the maximum in performance. --30-- [BLANK PAGE] --31-- ASB --32-- EQUIPMENTS ASB SERIES NOTE:--These equipments were designed and manufactured prior to the adoption of the "AN" nomenclature system and continue to bear the older, Navy ASB designation. ASB-1, -2, -3, -4, and Bendix receivers for ASB-5 (Serials 1 through 207) are obsolete. It is expected that ASB installations ultimately will be replaced by AN/APS-4. I. FUNCTION A. PRIMARY PURPOSE. ASB is a low-powered type of L-band airborne radar intended primarily for search use. B. SECONDARY USES. Although designed specifically to operate as a search radar, ASB through its long period of use has lent itself admirably to such applications as navigation, beacon homing, low-altitude bombing, and IFF (target identification). II. DESCRIPTION A. MAIN COMPONENTS. The units comprising the ASB system are: - Movable, directive antennas. - Hydraulic antenna controls. - Low-powered transmitter. - Receiver. - Antenna switching system. - Indicator. - Control unit. - Power supply, cables, junction boxes. B. ANTENNA SCAN AND SCOPE PRESENTATION. 1. Beam coverage and antenna control.--Each of the antennas emits radio energy in a beam or lobe approximately 40° wide in the horizontal plane. In the vertical plane the beam is approximately 45° in depth when the antennas are in the homing position and about half that depth when in the search position.
By means of separate hydraulic controls, one for each antenna, the antennas can be moved together or independently, from ahead outboard through an arc of about 90° (see fig. 2-26). A notched arrangement of the control mechanism allows the antennas to be moved through their train in 8° steps. When the hydraulic controls are fully retarded, the antennas point outboard to port and starboard, at right angles to the line of flight (search position). When moved to their full forward position the antennas point dead ahead (homing position). Intermediate positions of the antennas may be judged by observing the positions of the antenna control handles. 2. L-scan.--The scope in the ASB indicator is the L-type (see fig. 2-27), where range is indicated by the position of blips upward from the bottom of the vertical time trace. Bearing is Figure 2-25.--The ASB antennas are mounted under the wing tips on the TBF aircraft. --33-- Figure 2-26.--Manually operated hydraulic controls permit the wing tip antennas to be rotated in steps from the homing position dead ahead, outboard to the search position. The antennas may be moved singly or together. indicated by the lateral disposition of the blips about the trace. The width of the beam is such that in the full forward position an overlap of both antenna beams is produced; from targets dead ahead, then, each antenna receives an equal amount of reflected energy and so produces a blip equally spaced laterally on the trace. Targets slightly off to either side of dead ahead will produce a blip whose sides are of unequal length and so indicates their bearing. Echoes from port or starboard targets are indicated on the scope as blips on the port or starboard sides of the trace only when one or the other of the antennas has been moved sufficiently to place its beam on the target. 
A single target such as a ship, aircraft, or buoy will produce a single blip on the trace, its Figure 2-27.--When the "gain" or sensitivity of the receiver is increased, electrical and atmospheric noise causes an irregular indication (grass) along the length of the trace. A large target is indicated at about 4½ miles and a small target to starboard at 3 miles. The height of the lower edge of the sea return furnishes a rough indication of the aircraft's altitude. --34-- lateral length affording some indication of its relative size. Land targets, ship convoys, or closely grouped islands, because they return many echoes, produce many blips closely bunched together. Stray radiation of radio energy from the antennas to the sea or terrain directly below the aircraft returns echoes of considerable strength. They appear on the scope screen as irregular blips near the bottom of the trace. This irregular scope pattern is called the sea return (or ground return). The distance upward from the bottom of the trace to the bottom of the sea return is a measure of the aircraft's absolute altitude. 3. Range settings.--Four operating ranges are provided--2, 7, 28, and 70 nautical miles. A graduated plastic scale placed over the face of the scope enables the operator to judge target ranges with a fair degree of accuracy for the 7-, 28-, and 70-mile ranges. On the 2-mile setting, ranges must be approximated. C. IMPROVEMENTS INCORPORATED IN LATER ASB EQUIPMENTS. Long standard for carrier-based aircraft, the ASB radar was continued in use for some aircraft types because of a series of improvements which enabled it to meet the demands of the fleet. Greater security, longer ranges, increased accuracy, and ability to carry out varied tactical assignments were made possible with the later ASB equipments. 1. Frequency changes.--Because of ASB's comparatively low frequency, enemy jamming attempts have been feared more with ASB than with more modern microwave equipments.
To assist in avoiding such action by the enemy, improved transmitters have been designed to operate at any of three different frequencies instead of only one as was formerly the case. This group of three frequencies is referred to as "A1," "A2," and "A3." A change from one frequency to another cannot be made in flight, chiefly because antenna adjustments are necessary; however, a change between flights is easily accomplished and different aircraft in a squadron may operate on different frequencies. 2. Main components.--The ASB radars equipped for "A"-band operation include two new units: The ASB-7A indicator, which is a complete redesign, and the ASB-7A (or ASB-7B) transmitter. The ASB-7A transmitter is simply the ASB-7 transmitter modified to be tuneable over the "A" band. The ASB-7B transmitter is the redesigned, high-power, plate-pulsed unit. In addition, the complete radar set includes the ASB-7 or ASB-Series receiver of the type already in use, which is capable of being tuned to any of the "A"-band frequencies, the ASB-7 control unit, rectifier power unit and antenna switching unit. 3. Sensitivity switch.--In addition to "A"-band operation, a second safeguard against enemy jamming is the sensitivity switch, mentioned above, for the receiver. The switch is designed to prevent saturation of the receiver by jamming signals. 4. Special features and advantages of the new ASB.--Installations using the new ASB-7B transmitter and the ASB-7A indicator have, in addition to greater security, several other advantages. The ASB-7B transmitter more than doubles the power output of the ASB-7 or ASB-7A transmitter--approximately 8 to 10 kilowatts (peak power) as compared to 3 to 5 kilowatts. This power increase accounts for a substantial increase in range results. Also, the ASB-7B transmitter has been designed to Figure 2-28.--On the 28-mile range two single-ship targets appear--one at 15 miles, the other at about 11 miles.
With the antennas in the homing position the near target is almost dead ahead while the farther one is slightly off to the starboard side of dead ahead. --35-- Figure 2-29.--A beam-on approach to ships in convoy might produce the blip shown above. It shows several target blips closely grouped at about 24 miles. give low "case radiation" so that interference affecting radio reception is reduced. Still other advantages in performance, operation and maintenance are provided by the new ASB-7A indicator. Among them are: (a) More accurate indication of target ranges. (b) New range scales fitting the ASB for the tactical demands of present-day carrier-based aircraft; a 150-mile range with 30-mile calibration intervals; a 50-mile range with 10-mile calibration intervals; a 15-mile range with 3-mile calibration intervals; a 5,000-yard range with 1,000-yard calibration intervals, and additional 200-yard intervals between zero and 2,000 yards (provided for torpedo runs and bombing). (c) Provisions for operating ASB with AN/APA-16 (low altitude bombing equipment). (d) Circuits which maintain substantially optimum focus and stable operation on all ranges. (e) Addition of front-panel controls to compensate for effects of generator voltage and wave-form variations. (f) Triggering from leading edge of pulse, eliminating the "inherent error" of former indicators. III. TACTICAL EMPLOYMENT A. SEARCH. 1. Types of targets.--In the PPI-type (and to some extent, the B-type) scope presentations, targets are displayed in their true azimuth and range relation to the searching aircraft. Coastal areas, islands, harbors, rivers and large lakes are shown in maplike form. But in equipments such as the ASB which employ the L-type scope presentation, all targets are represented by blips--or the absence of blips. Reading the L-type scope pattern requires the mental facility of converting blips of all shapes and sizes and numbers into a mind picture of the types of target they represent.
With the exception of the sea return caused by the return of echoes from the sea directly below the plane, sea areas generally return no echoes. So, when flying over water areas there is an absence of blips on the trace. (The exception to this is when a relatively rough sea surface causes waves high enough to return echoes.) Conversely, land masses return many echoes, their strength and number varying with relation to the topography of the terrain, the range at which they are received, and the sensitivity (or "gain" setting) of the receiver. The depth of a land mass can be ascertained by the amount of trace covered by the blips. Single ship targets, figure 2-28, produce single blips, but their size and depth on the trace cannot be relied upon to indicate the relative Figure 2-30.--On the 7-mile range an aircraft target lies dead ahead at about 3½ miles. --36-- Figure 2-31.--This irregular long blip is caused by reflections from an island dead ahead. size of the target, for the area and composition of the reflecting surface offered by the target to the beamed energy determines the strength and size of the blip. A small ship broadside may produce a blip as large and as strong as a large ship bow-on. Ship convoys, figure 2-29, produce many blips in a small area on the trace but each ship blip is usually individually distinct because of the sea area separating the ships. Small islands and projecting strips of land, however, return one large blip (consisting of many small blips) having a jagged and irregular outline. Single aircraft target blips, figure 2-30, have the same general characteristics as a single ship blip excepting that, because of the changing attitude presented by an aircraft in flight, its blip will "beat" or pulsate. Aircraft blips can be recognized by their relatively faster movement up or down the trace, or their complete lack of movement. An aircraft on a reciprocal course to the search aircraft will produce a blip which moves down the trace rapidly.
The range to an aircraft on the same course, ahead, will be closed less rapidly than for a ship target. If the speed of the two aircraft is identical, the target blip will remain in one position for a long period of time. 2. Wind (or drift) pattern.--High surface winds cause otherwise calm seas to become rough, resulting in an increase in the reflecting area of the sea surface in certain directions and causing an increase in the sea return. In general, the sea return rises highest on the trace when the antenna is pointing toward the wind and is less when the antenna is pointing downwind. Winds of 15 knots or better can be checked in this manner, from altitudes of 1,500 feet and below. 3. Altitude ranges.--ASB is not designed for high-altitude operation, works less efficiently above 10,000 feet, and is subject to electrical breakdown at higher altitudes. The best altitudes for search with ASB are approximately 6,000 feet and below. The table below presents a general performance indication of the equipment for various kinds of targets. Figure 2-32.--With the starboard antenna pointed 90° to starboard an island blip appears at about 13 miles. The height of the blip on the trace generally denotes the depth of the island. --37-- Figure 2-33.--With the ASB receiver tuned to the YJ beacon frequency a strong coded beacon signal shows at about 18 miles. B. NAVIGATION. Radar navigation with ASB means, primarily, identifying with respect to range and azimuth position the characteristic blips of coast lines, strings of islands, harbors or other recognizable terrain contour features and comparing this information with charts of the area for the purpose of making landfalls, guiding a flight through channels, returning to a base or homing on a specific target. 1. Homing: Radar targets.--Movements of the antennas for search purposes through their 90° arc make it possible to pick up coast-line promontories, harbors or islands (see figs. 2-31 and 2-32).
After a target has been identified by reference to its known geographical, charted location, the aircraft's heading is altered to home on it and the antennas are swung to the dead-ahead, or homing, position. Drift can be approximated to a fairly accurate degree by observing the direction of the antenna when a high sea return on one side of the trace is produced (See "Wind (or Drift) Pattern," p. 37). Actual ground speed can be estimated by observing over a measured period of time the movement of the target blip down the trace. Knowing these factors, a collision course can be developed for homing on the selected target. 2. Homing: Beacon targets.--Radar beacon homing, although requiring that all of the factors as in the case of target homing be known, is made easier of accomplishment for the following reasons: (a) Greater homing accuracy results because, instead of homing on an indefinite and sometimes only generally identifiable radar target, the radar receiver, tuned to a radar beacon frequency, receives a coded, definitely identifiable signal from a beacon transmitter whose actual location is a matter of previous knowledge. (b) Instead of being a weak, reflected radar echo, the radar beacon signal is a relatively powerful transmitted signal and can be observed at much greater ranges. (c) Retuning a radar receiver to a radar beacon transmitter frequency clears the scope screen of echo blips. Thus, radar beacon signals, if the beacon is within range, appear sharply and distinctly on the scope screen, correct in range, and, when the antennas are in the forward or homing position, correct in relative bearing. (See figs. 2-33 and 2-34.) Identity of radar beacon stations is established by observing the coded sequence of the signal blips as they appear on the scope screen. Figure 2-34.--Same target as in figure 2-33. Range is closed slightly.
The displacement of the blip toward the left of the trace indicates that the search aircraft has changed bearing slightly to starboard. --38-- C. BOMBING OR TORPEDO DROPPING. 1. Factors to be determined.--Bombing with ASB presupposes that a definite target has been selected and homed on and that the range has been closed sufficiently to permit use of the 2-mile range scale. As explained previously for the homing techniques, drift and speed must be determined and taken into account when making a bombing run. In making a bombing run, observation of the target blip's lateral position on the trace can be improved immeasurably if, as the range is closed, the two antennas are moved simultaneously a few notches outboard from the homing or full forward position. This procedure narrows the overlap area of the two antenna lobes ahead of the aircraft. Then, even a slight deviation of the aircraft from the true homing course will be apparent from the rapid movement of the blip to either side of the trace. 2. Release point.--In general, the greater the altitude, the more difficult it is to home directly over a target. Therefore, in order to reach the release point, bombing with ASB calls for low-altitude homing on a specified target with a high degree of accuracy right down to the last few hundred yards. The bomb release point is determined from a reference point called the "mergence point" which on the ASB indicator is a noticeable change in the appearance of the target blip at a range of about 500 yards. The actual release point may be at the mergence of the target echo with the top of the sea return or it may be at a point obtained by a timing count from its first appearance. On the 2-mile scale, certain features of sea return display must be kept in mind when determining the release point: - Sea return is a function of the aircraft's altitude. - Sea return tends to mask the blip of a target at close range.
Therefore, the sensitivity or "gain" of the receiver must be reduced in order to keep the sea return at a minimum (and to prevent the target blip, at close range, from reaching the saturation point). --39-- AN/APS-15 --40-- AN/APS-15 I. FUNCTION A. PRIMARY PURPOSE. AN/APS-15 is an X-band radar designed primarily for general search operations in heavy patrol craft. B. SECONDARY USES. AN/APS-15 may be employed for navigation, beacon homing, and pattern bombing. In conjunction with other electronic equipment, it may be used for target identification. Figure 2-35.--Looking down from the top the beam emitted from the antenna spinner is 3° wide. Viewed in a vertical plane, the radiation pattern of the beam, unlike in other types of PPI equipment, is made purposely wide to cover a large area forward to the horizon and downward. II. DESCRIPTION A. MAIN COMPONENTS. The main units of AN/APS-15 are: - Transmitter-converter unit. - Rotating antenna-spinner assembly. - Receiver-indicator, housing the operator's viewing (PPI) and monitoring (A) scopes. - Repeater (PRI) scope, for pilot or navigator use. - Control unit. The computer box and control unit are mounted adjacent to the receiver-indicator, convenient to the operator. The antenna-spinner assembly, together with the transmitter-converter, is housed in a plastic radome to protect it in flight in its rotation through 360° of azimuth. In some installations this entire assembly is mounted on a platform within the aircraft and must be lowered through the fuselage prior to operation of the equipment. B. ANTENNA SCAN AND SCOPE PRESENTATION. 1. Beam coverage.--The rotating antenna projects a beam or lobe of radio energy, outward and downward, 3° wide in azimuth and extending in a vertical plane from a few degrees below the horizontal to about 10° forward of the vertical axis of the aircraft. See figure 2-35.
The extent of the area covered by the rotating beam depends mainly upon the altitude of the aircraft in which AN/APS-15 is installed. (See Tilt, p. 43, par. 2.) 2. PPI-scan.--On both the PPI and PRI scopes, a radial time trace revolves in step with the rotating antenna. Targets in the scanned area return echoes which cause bright spots to appear on the time trace, leaving their mark on the scope screen, correct in azimuth and range. Surface ships and aircraft produce relatively small light spots on the screen. Islands, coast lines, and large land masses produce relatively large patches of light having a maplike quality, their outlines being generally similar to the actual outlines of the terrain scanned by the antenna beam. Because of the peculiarly shaped antenna radiation pattern, large targets, Figure 2-36.--The relief map quality of a land mass pattern such as that shown above is characteristic of the scope presentation of APS-15. This view shows the Tokyo Bay area. Note the heavy concentration of bright spots indicating city areas. --41-- Figure 2-37.--Range marks on the APS-15 scope appear as circles of light. This view shows the circles produced when the equipment is set to the 50-mile range. Note the convoy just inside the 40-mile circle at a bearing of 10°. particularly land masses, produce scope images with a fidelity which compares favorably with a relief map of the region. See figure 2-36. 3. Range settings.--At calibrated intervals along the trace, bright spots can be made to appear, which, as the trace rotates, produce evenly spaced circles on the scope screen. These circles or range marks assist in judging approximate target ranges. See figure 2-37. Four operating ranges are provided--5, 20, 50, and 100 miles. C. SPECIAL FEATURES. AN/APS-15 incorporates in its design certain special features which increase the effectiveness of its use for search, navigation, and pattern bombing. They are: - Open center control. - Antenna tilt.
- Automatic tilt stabilization. - Automatic azimuth stabilization. - Manual sector scan. - Automatic sector scan. - Electronic lubber line. - Altitude determination. - Drift determination. - Precision ranging. - Sweep delay selector. - Beacon homing. 1. Open center control.--At close ranges (5 to 20 miles), targets around the center of the screen may be somewhat crowded. In order to Figure 2-38.--APS-15 incorporates in its design the feature of automatic azimuth stabilization. Above, left, is shown the scope presentation with automatic azimuth stabilization switched off. Here zero degrees indicates the heading of the aircraft and all targets appear relative to the aircraft's heading. At the right the same pattern is shown when automatic azimuth stabilization is turned on. The land pattern has shifted to its true geographic location with respect to the top (zero degrees) of the scope, which is north. The electronic lubber line indicates this heading of the aircraft. --42-- separate these targets, an "Open Center" control has been provided. Operation of this control makes it possible to move the start of the trace outward from the center of the screen which, in effect, very noticeably expands the presentation of these scope images originating from targets close in. 2. Antenna tilt.--By manual control of a motor-driven mechanism the antenna assembly can be tilted from 20° above to 20° below the horizontal axis of the aircraft. Control of antenna tilt enables the operator to vary the angle at which the antenna beam travels outward around the aircraft. Angles of tilt are indicated on a tilt meter located on the panel of the receiver-indicator unit. Tilt affects range. With low angles of tilt, the area traversed by the beam is small; with high angles of tilt, the area traversed can be extended to the limits of the horizon.
With a fixed tilt setting it is possible that changes in the flight attitude of the aircraft will cause the transmitted antenna beam to shift from a target area during successive sweeps of the antenna. To counteract this effect, automatic tilt stabilization may be employed. 3. Automatic tilt stabilization.--Automatic tilt stabilization employs a gyro system connected to the tilt mechanism in such a way that when the antenna is manually adjusted to any preset angle by the tilt switch, this angle of tilt will be maintained regardless of limited (20°) changes in the aircraft attitude of flight, within a range of 20° above or below the horizontal axis of the plane. 4. Automatic azimuth stabilization.--When the automatic azimuth stabilization feature of AN/APS-15 is used, information from the aircraft's flux-gate compass is related to the motor-driven mechanism of the antenna spinner so as to cause the indicator sweep rotation to be oriented with respect to true north. As a result, target patterns are displayed on the scope screen oriented with respect to true north, and they remain so oriented regardless of changes in the heading of the aircraft. See figure 2-38. 5. Manual sector scan.--By means of switches mounted on the control unit, the normal 360° rotation of the antenna spinner can be interrupted to provide manual control of antenna movement. Operation of the "L-R" switch causes the antenna to rotate left or right, at will. This is particularly helpful when a target indication in a specific sector is to be inspected more closely and more quickly than is possible with the 360° rotation. See figure 2-39. 6. Automatic sector scan.--Sector scanning a specific area can be accomplished automatically. Suitable controls on the control unit permit any sector of the full 360° to be selected, while other controls regulate the width of the sector scan.
Both automatic sector scan and automatic azimuth stabilization can be employed simultaneously to display the scanned sector on the scope in its true relationship to north. 7. Electronic lubber line.--Employment of this feature produces intermittently a line of light on the scope whose azimuth position indicates the heading of the aircraft. A micro-switch aligned with the fore-aft axis of the aircraft controls electronic circuits in the equipment which produce a line of light on the scope screen each time the spinner sweeps past the dead-ahead position. See figure 2-40. 8. Altitude determination.--Two features of design assist the AN/APS-15 operator to determine the absolute altitude of his aircraft between the limits of 10,000 to 36,000 feet, with an accuracy of ±100 yards. They are the A-scope on the receiver-indicator and the altitude Figure 2-39.--Where a specific area is to be searched the APS-15 can be set to sector scanning (either manual or automatic) causing the antenna to scan the transmitted beam back and forth over the desired area. On the scope only the area being scanned appears as a radar image. --43-- Figure 2-40.--The use of the electronic lubber line on the APS-15 provides a quick and accurate indication of the aircraft's heading. When automatic azimuth stabilization is used the heading indicated by the electronic lubber line is true as shown above. When automatic azimuth stabilization is not used the electronic lubber line when switched on will appear at zero degrees. control on the computer unit. When the range unit is switched on, a downward projecting pip, called the altitude pip, is developed near the left end of the A-scope's horizontal trace. Rotating the altitude control on the computer unit shifts the position of the sea return on the A-scope trace.
When the sea return is moved so that its left edge is in coincidence with the altitude pip, the absolute altitude of the aircraft is indicated by the position of a sliding index hairline along a vertical scale on the computer box. (With the AN/APS-15A or AN/APS-15B, the minimum altitude that can be determined accurately is 3,000 feet.) 9. Drift determination.--An estimation of drift is possible when the automatic azimuth stabilization and electronic lubber line features are used. The scribed movable azimuth index line located over the face of the PPI is rotated so as to align it parallel to the movement of the pattern on the scope screen. The angle this line makes with the electronic lubber line is an indication of the aircraft's angle of drift. 10. Precision ranging.--Use of the range unit, computer box and control unit produces a range circle on the PPI which can be used to indicate the slant range to targets up to a distance of about 45 miles with an accuracy of ±150 yards. See figure 2-41. When the factors of absolute altitude, ground speed, drift, and type of bomb are known and the equipment is adjusted accordingly, the time for bomb release is indicated when the selected target moves in along the scribed azimuth index line (set to represent course) to coincide with the slant range circle. As a reverse function, the slant range circle can be coincided with a target or signal to denote its slant range. 11. Sweep delay selector.--Sweep delay is a design feature incorporated in AN/APS-15 which allows the start of the trace on the PPI and PRI to be delayed in 10-mile steps, up to a total of 200 miles. When this feature is used in conjunction with the equipment's normal ranges of 5, 20, or 50 miles, the trace then represents the outer portion of the extended area; e. g., if the sweep delay is set to introduce 150 miles of delay and the equipment is set to the 50-mile range, the trace will represent coverage of the last 50 miles of a 200-mile area.
12. Beacon homing.--To enable the AN/APS-15 to trigger X-band radar beacons and receive their signals, the AN/APS-15 must be reset from the search function to beacon Figure 2-41.--Operation of the ranging feature of APS-15 produces a slant range circle on the PPI scope which can be employed for precision pattern bombing of a selected target. The bomb release point is indicated when the selected target moves down to the slant range circle as produced by setting the equipment for correct altitude, drift, and ground speed. --44-- Figure 2-42.--On PPI type scope presentations a beacon signal appears as a series of vertically spaced dashed lines. Their number and separation denote the code employed. Range to the beacon is measured from the center of the PPI to the lower edge of the innermost dash. operation. When so adjusted, the receiver-indicator may be retuned, not to the radar pulse-and-echo frequency, but to the transmission frequency of the X-band radar beacon transmitters, such as AN/CPN-6. As a result, radar echo signals do not appear on the scope screen. The radar beacon signals appear correct in azimuth and range. Identity is indicated by the coding of the beacon signals. See figure 2-42. The range circles for the 5-, 20-, 50-, and 100-mile ranges can be employed to denote approximate range to a beacon station. By employing the sweep delay feature, beacon signals up to 250 miles away (50-mile range setting plus 200 miles delay) can be detected. The maximum distance that can be displayed on the scope is 250 miles, because the sweep delay selector settings operate only in conjunction with the basic 5-, 20-, and 50-mile range settings on the indicator. It must be remembered that maximum ranges are dependent upon the aircraft's altitude and the line-of-sight distance to the beacon transmitter (see Range, Part III, p. 18). When more accurate ranging is required, the range unit may be switched on.
The slant range circle may then be employed to indicate the exact range to the beacon station. Precision ranging is possible only to 215 miles. Since the maximum radial distance which can be represented by the slant range circle is 15 miles, this figure added to the maximum of 200 miles delay obtainable from the sweep delay selector setting will produce a circle which represents a range of 200 plus 15, or 215 miles. III. TACTICAL EMPLOYMENT The tactical employment of AN/APS-15 and the effectiveness with which it contributes to the success of a mission depends in large measure upon the skill of the operator in employing any or all of the features incorporated in its design and in interpreting the scope patterns produced as a result of the use of those features. A. SEARCH. 1. Types of targets.--In search operation, targets may be classified either as single spots or as relatively larger patterns which are produced by land masses. At long ranges (50-, 100-mile range settings) the size of a target spot on the scope cannot be interpreted to indicate with any degree of accuracy whether the target spot has been produced by one target such as a single ship, or many ships in convoy. It is only after range has been closed that closer inspection of such target spots will reveal the number of targets. Similarly, the size of the target spot is no definite indication of the actual relative size of Figure 2-43.--The above view of the APS-15 scope gives an excellent indication of land mass, island, and surface ship targets. The circular pattern around the center of the scope is caused by sea-return reflections. --45-- the target. Large ships bow-on or stern-to may cause target indications on the scope of the same size that smaller ships beam-to will produce. Images of land masses, coastal areas and islands are faithfully reproduced on the AN/APS-15 scope screen with a degree of fidelity almost approaching that of a relief map. See figure 2-43.
Mountainous or rugged terrain areas will stand out more sharply than flat areas. Coast lines are well defined. At long ranges, cities appear as concentrations of light on the scope pattern at a greater intensity than the surrounding areas. Rivers and harbors can be identified. Specific buildings amongst many cannot be identified. Thunderheads, because of their great moisture content, reflect radar echoes and cause distinctive patterns to appear on the scope screen. These patterns resemble those of land somewhat in that they are fairly solid, but have fuzzy, indistinct edges. The upward tilt of the antenna required to pick up "fronts," plus the fact that reference to charts of the area will determine whether land echoes can be expected, will assist in identifying fronts as such. Because echoes from weather fronts are as bright as echoes from other types of targets, heavily clouded areas clutter the scope, tending to mask target areas. This and heavy precipitation may mask more distant radar echoes, tending to reduce the effective range of the equipment. For the study of echoes due to weather, the A-scope of the AN/APS-15 should be used in conjunction with the PPI for the following reasons:

(a) The A-scope permits a study of small variations or changes in amplitude of a returned echo which ordinarily would be masked out on the PPI. The A-scope accurately shows the relative signal strength of returning echoes. The relative density of clouds can be determined from this information.

(b) Due to the high persistence of the PPI's fluorescent screen, a sudden change is not as apparent on the PPI as on the A-scope.

(c) The A-scope measures contours of clouds better than the PPI scope. On the PPI, two or more clouds relatively close together would appear as one large cloud. On the A-scope, however, there would be a noticeable separation between the echoes returning from the two clouds.

2.
Altitude/range.--The range at which targets may be displayed on the scope depends almost entirely on the aircraft's altitude. At an altitude of 2,000 feet, the line-of-sight distance to the horizon is about 50 miles. At this altitude, even though the 100-mile range is used, surface targets at distances greater than 50 miles will not be seen. (See altitude chart, fig. 2-17.) Altitude plays an important part not only in determining the area covered by the sweep of the beam but also in producing adequate resolution and definition of a target. Land mass targets observed at low altitudes generally are not as well defined on the scope as the same targets observed at greater altitudes. A limiting factor, however, is the production of a larger sea return as the altitude is increased. Large sea returns tend to mask targets close at hand. From a practical standpoint, some balance must be struck between large sea return at high altitudes as weighed against better resolution of targets at the higher altitudes.

3. Conversion of slant to horizontal range.--Any radar equipment deals in slant ranges. From the computer drum can be read the distance which represents the hypotenuse of a right triangle of which the vertical distance from the airplane to the ground is one side and the horizontal distance from a point directly beneath the airplane to the target is the other side. The difference between slant range and ground range is negligible whenever the range is more than five times the altitude, but for targets close by, the difference of slant range versus actual range must be taken into consideration if an accurate range indication is required.

4. Tilt control and flight attitude.--Tilt control, tilt stabilization, and automatic azimuth stabilization all contribute to clarity of target definition and to the ability to maintain a target indication on the scope, regardless of limited maneuvers of the aircraft.
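The right-triangle geometry described under item 3 above can be made concrete with a short modern sketch (not part of the original manual; the function name and the feet-per-nautical-mile constant are my own assumptions):

```python
import math

FEET_PER_NAUTICAL_MILE = 6076.12  # assumed modern value of the conversion

def ground_range_nm(slant_range_nm, altitude_ft):
    """Convert radar slant range to horizontal (ground) range.

    Slant range is the hypotenuse of a right triangle whose vertical side
    is the aircraft's altitude; ground range is the horizontal side.
    """
    altitude_nm = altitude_ft / FEET_PER_NAUTICAL_MILE
    if slant_range_nm < altitude_nm:
        raise ValueError("slant range cannot be shorter than the altitude")
    return math.sqrt(slant_range_nm**2 - altitude_nm**2)

# As the manual notes, the difference is negligible once range exceeds
# about five times the altitude, but matters for close-in targets:
print(round(ground_range_nm(10.0, 6076), 3))  # ~1 nm altitude, 10 nm slant
print(round(ground_range_nm(2.0, 6076), 3))   # close-in: difference is large
```

At ten times the altitude the correction is only about half a percent; at twice the altitude it exceeds ten percent, which is the manual's point about close-by targets.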
Figure 2-44.--When searching at lower altitude the antenna tilt must be raised in order to increase the radar horizon range.

Tilt is not critical for targets which are already within the beam; but in order to pick up targets which are beyond the beam coverage, the antenna must be tilted upward. Particularly with respect to aircraft targets, the upper edge of the beam must be raised by means of the tilt control in order to include such targets in the beam's coverage. See figure 2-44. The tilt control also functions to insure that with decrease in altitude the beam may be progressively raised to keep the target in the area covered by the beam.

5. Ranges of detection.--The actual ranges at which targets may be observed depend upon such factors as:
- Varying performance of individual equipments.
- Operator skill.
- Altitude.
- Type and composition of target.
- Intelligent use of tilt control.

The following table of average ranges to be expected for various types of targets is indicative only in a general way of what can be expected in the use of AN/APS-15:

B. NAVIGATION. 1. Aids to precision navigation.--Because of the high-fidelity, maplike scope presentation of prominent land masses which are produced when AN/APS-15 is set for search, and the display of code-identified beacon signals when set for beacon reception, precision navigation is one of the main uses to which AN/APS-15 equipment can be put. A number of design features contribute to precision navigation:

When set for Search:
(a) The electronic lubber line shows the heading of the aircraft. (See figure 2-45.)
(b) The scribed azimuth index line, when aligned parallel to target movement across the scope screen, can be used with the electronic lubber line to indicate angle of drift.
(c) With the slant-range circle feature, precision ranging on a given target can be conducted over a measured period of time to calculate absolute ground speed.
(d) Azimuth stabilization allows viewing of a truly oriented maplike land mass area for comparison with charts of the same area.
(e) Knowing the above, a collision course with a selected target can be established.

When set to Beacon:
(f) Relative bearing of beacon stations can be ascertained by lining up the scribed index line with the center of the coded signal and noting its azimuth position with respect to the azimuth indicator scale around the periphery of the scope.
(g) Use of azimuth stabilization will orient the beacon signals on the scope to their true geographic location.
(h) By the use of the sweep delay selector feature, beacon station signals up to a range of 250 miles can be received, provided the aircraft is flying at an adequate altitude. (See line-of-sight chart, figure 2-17.)
(i) Precision ranging on beacon station signals permits accurate interpretation of slant range to the beacon up to approximately 215 miles.

Figure 2-45.--When it is desired to home on a target such as a point of land as indicated in the view above the use of the electronic lubber line will assist in maintaining the heading of the aircraft toward the selected target.

2. Homing: Radar targets.--All of the factors affecting target resolution and definition, such as altitude, range, antenna tilt, terrain contour, and weather, enter into a consideration of effective homing. Heading and drift determine the track which must be maintained if it is desired to set a collision course for homing on a specified target. See figure 2-45. When a target in a specific area has been selected for homing purposes, the job can be simplified through the use of the electronic lubber line, azimuth stabilization and sector scanning. Use of the range marker circles will assist in approximating target ranges. Use of the slant range circle (within a range of 45 miles) will denote accurate target range.

3.
Homing: Beacon targets.--The factors outlined in the preceding paragraph apply to a limited degree in homing on the signal from a radar beacon station. The exception is that with radar beacon signals the reception ranges are greater than for target echo ranges. This is because a radar beacon signal is a directly transmitted powerful signal while the target echo is a weak reflection of the original radar transmission. Also, a radar beacon station, if it is in operation in the area being patrolled and within range, produces a coded signal on the scope screen, to furnish a positive indication of the beacon transmitter's identity, range and bearing. See figure 2-46.

Figure 2-46.--After a beacon signal is picked up the aircraft is turned until the signal appears dead ahead. Range to the beacon may be judged with the aid of the range circle.

C. BOMBING. 1. Pattern bombing.--At the altitudes at which AN/APS-15 may be employed for bombing, 10,000 to 36,000 feet, the target areas must necessarily be general in nature. The equipment is suitable for pattern bombing, not for bombing of specific objects. Through overcast, or under conditions of poor or zero visibility, AN/APS-15 can furnish information on:
- The aircraft's ground speed.
- The aircraft's angle of drift.
- The aircraft's true heading.
- The track to be flown.
- The closing range of the target area.

From a knowledge of these factors and the type of bomb being used, an operator can determine, through the use of the slant range circle, when the bomb-release point has been reached.

2. Absolute ground speed and drift angle.--Although AN/APS-15 can be employed to determine absolute ground speed and drift angle, it should not be relied upon solely to furnish this information when other means are available. Rather, AN/APS-15 ground speed and drift information should be used to supplement and cross-check the data provided by other proven means.

3.
Low-altitude bombing.--Although the equipment is designed for bombing from altitudes above 10,000 feet, it can be used for low-altitude bombing when fitted with a low-altitude antenna reflector which is supplied with the equipments. However, because it is difficult to follow the target on the PPI at ranges of less than 1 mile, AN/APS-15 is suited to low-altitude bombing only when the proper release point can be determined visually.

NOTE.--A new series of air-borne search radars, designated as the AN/APS-30 series, is under development to replace generally AN/APS-3 and the AN/APS-15 series. The AN/APS-30 series consists of two basic equipments and two alteration kits, as follows:

AN/APS-31 is an X-band search radar containing many of the special features of design and operation contained in AN/APS-15. Its antenna scans through a forward-looking field of view producing a scope presentation of a 160° V on a PPI-type scope.

AN/APS-32 consists of a kit of antenna and other essential replacements which when used with the AN/APS-31 changes it from X-band operation to K-band operation.

AN/APS-33 is an X-band search radar similar to AN/APS-15 and AN/APS-31, but furnishing either a full 360° PPI-type presentation or a 60° V-type presentation, the apex of the latter appearing above bottom center of the scope face.

AN/APS-34 is a change-over kit for use with AN/APS-33 to change it from X-band operation to K-band operation.

AN/APS-15A AND AN/APS-15B

NOTE.--The function, uses, description and tactical employment of AN/APS-15A and AN/APS-15B are, in general, the same as those described for AN/APS-15. Therefore, only the changes and improvements incorporated in the AN/APS-15A and AN/APS-15B are described herein.

A. SPECIAL FEATURES OF AN/APS-15A AND AN/APS-15B

1. Veeder counter.--The computer unit of the AN/APS-15A and AN/APS-15B models uses a Veeder counter in place of the computer drum used in the AN/APS-15, for reading absolute altitude and slant range.
The Veeder counter is more accurate, does not have the errors of the computer drum and is easier to read. The counter is calibrated from 3,000 feet up to 90,000 feet (approximately 15 miles).

2. Ranges and range markers.--The AN/APS-15A and AN/APS-15B models provide two fixed operating ranges: zero to 50 and zero to 100 miles. A third range is continuously variable between the limits of 5 and 30 miles. To assist in judging approximate ranges to a target, a range selector switch enables the operator to select any one of three sets of range circles for any one of the three operating ranges. The circle separation is as shown in the following table:

3. Sweep timing.--In the AN/APS-15, the open center control was employed where it was necessary to expand the display of targets crowded close to the center of the PPI screen. In the AN/APS-15A and AN/APS-15B, the sweep timing control, in addition to being used as a means for expanding the pattern (see Open Center Control, p. 42), also operates to contract the screen pattern. Operation of the sweep timing control makes possible the elimination of the blank area in the center of the PPI which occurs when flying at the higher altitudes, thus allowing the entire scope face to be utilized for viewing targets. Also, in those cases where a portion of a target image appears at the extreme outer edge of the scope screen, manipulation of the sweep timing control shrinks the scope pattern, bringing the target echo within the boundaries of the scope screen.

4. Precision ranging.--With AN/APS-15A and AN/APS-15B, precision ranging on objects up to approximately 100 miles is possible. When the bomb release marker is adjusted to coincide with the target image on the PPI screen, the distance to the target is that shown on the computer counter (in feet) plus the delay set in by the sweep delay selector (in nautical miles).
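Because the counter reads in feet while the sweep delay is set in nautical miles, combining the two readings requires a unit conversion. A modern illustrative sketch (the function name and conversion constant are my own, not the manual's):

```python
FEET_PER_NAUTICAL_MILE = 6076.12  # assumed modern value of the conversion

def total_range_nm(counter_feet, sweep_delay_nm):
    """Total slant range: counter reading (feet) plus sweep delay (nautical miles)."""
    return counter_feet / FEET_PER_NAUTICAL_MILE + sweep_delay_nm

# e.g. a counter reading of 60,761.2 feet (10 nautical miles) with 90 miles
# of delay set in gives a total slant range of 100 nautical miles:
print(round(total_range_nm(60761.2, 90), 1))
```

The same arithmetic underlies the 215-mile figure quoted for the AN/APS-15: the 15-mile circle limit plus the 200-mile maximum delay.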
For targets within a range of 15 miles (maximum limit of the range circle), it is not necessary to employ sweep delay.

5. Bomb release point data.--In pattern bombing, 12 separate counter dials are supplied with the computer unit, any one of which may be selected for insertion in the unit, depending upon the type of bomb employed, and different indicated air speeds.

Figure 2-47.--The AN/APS-2 (or George) series of PPI type radars display scope presentations similar to the APS-15 but not with as high a degree of target resolution. A, above, shows a convoy while B shows the same convoy at a lower range setting. C and D are characteristic land mass presentations.

ASG-AN/APS-2 SERIES

NOTE.--The ASG-AN/APS-2 series of air-borne radars--ASG, ASG-1, AN/APS-2 (ASG-3), AN/APS-2A (ASG-2), AN/APS-2B (ASG-2), AN/APS-2C (ASG-3), AN/APS-2D, AN/APS-2E--are somewhat similar in design and identical in function, the design differences arising chiefly from changes in mounting dimensions and mounting methods to adapt the equipment to the type of aircraft in which installation is being made. The AN/APS-2D and AN/APS-2E, in addition, substitute an "open center control" in place of the PRI focus control on the receiver-indicator, the latter control being removed to the PRI itself. The models in this series are the forerunners of the AN/APS-2F and AN/APS-15 radar equipments. The section on AN/APS-2F which follows is applicable in general to all of the ASG-AN/APS-2 models. Those items marked with an asterisk (*) represent features incorporated in the AN/APS-2F which are not contained in the earlier equipments.

I. FUNCTION

A. PRIMARY PURPOSE. AN/APS-2F is an S-band radar designed primarily for general air-borne search operations in heavy patrol aircraft.

B. SECONDARY USES. AN/APS-2F may be employed for navigation, beacon homing, and, when used in conjunction with other electronic equipment, it may be employed for target identification.

II. DESCRIPTION

A. MAIN COMPONENTS.
The main units of AN/APS-2F are:
- Transmitter-converter unit.
- Rotating antenna-spinner assembly.
- Receiver-indicator, housing the operator's viewing (PPI) and monitoring (A) scopes.
- Repeater (PRI) scope, for pilot or navigator use.
- Control unit.

B. ANTENNA SCAN AND SCOPE PRESENTATION. In general, the operating characteristics of AN/APS-2F and its antenna scan and scope presentation are identical with AN/APS-15 (see p. 39), with the following exceptions:
- AN/APS-2F operates in the S-band of frequencies.
- Its antenna beam is a 9° cone.
- Its operating ranges are identical with AN/APS-15 (5, 20, 50, 100 miles).

Fig. 2-47 C and D.

C. SPECIAL FEATURES. AN/APS-2F incorporates in its design certain special features which enhance its value for search and navigation use. They are:
- Antenna tilt.
- Automatic tilt stabilization.*
- Automatic azimuth stabilization.*
- Manual sector scan.
- Automatic sector scan.*
- Automatic frequency control.*
- Electronic lubber line.*
- Beacon operation.
- Open center control.

III. TACTICAL EMPLOYMENT

A. SEARCH. 1. Types of targets.--Target definition and resolution are not as good as that obtained by the AN/APS-15 (see p. 39), due to the relatively lower operating frequency (S-band) employed by the AN/APS-2F, and to the 9° beam as compared to the 3° beam width of the AN/APS-15.

2. Tilt.--The AN/APS-2F antenna produces a 9° conical beam which must be controlled by the tilt switch to keep it on a target. Because of the limited coverage obtained by the beam, antenna tilt is critical. The ability to keep targets in the beam is dependent upon altitude and the intelligent use of the tilt control, particularly as the range to the target is closed. Also the 9° conical beam (as compared to the beam spread in the vertical plane produced by the special antenna design of the AN/APS-15) results in limited coverage, in depth, of land masses as displayed on the scope. This, too, is a function of altitude and correct antenna tilt.

3.
Range settings.--Operating ranges are the same as for AN/APS-15 (5, 20, 50, 100 miles).

B. NAVIGATION. 1. Homing.--Homing on targets and beacons is the same as for the AN/APS-15. Precision ranging (as provided in the AN/APS-15) is not possible, since a range unit is not included in the AN/APS-2F equipment. Full reliance must be placed on the use of the range marker circles for judging target and beacon signal ranges. When set for Beacon, the AN/APS-2F operates in conjunction with the AN/CPN-3 (YK) radar beacons. Range coding, as displayed on the PPI, denotes identity, and the position of the indication on the scope screen gives the range and bearing of the radar beacon. Dependent upon altitude, radar beacon signals are receivable up to the maximum range (100 miles) of the equipment.

C. BOMBING. Bombing with AN/APS-2F is essentially a matter of employing any or all of the special features of this equipment in order to home on a target to within visual contact. (For low-altitude precision bombing with AN/APS-2F see AN/APQ-5B, Part II, p. 83.)

AN/APS-3

NOTE.--The AN/APS-3 is a later model of the ASD, ASD-1 series.

I. FUNCTION

A. PRIMARY PURPOSE. AN/APS-3 is an X-band air-borne radar employed primarily for search in medium patrol aircraft.

B. SECONDARY USES. This equipment can be used for radar navigation, homing on radar beacons (AN/CPN-6) and for radar bombing. When employed in conjunction with other suitable identification equipment it can be used to display IFF signals. (See Part IV, page 113.)

II. DESCRIPTION

A. MAIN COMPONENTS. AN/APS-3 consists of the following major units:
- Antenna unit.
- Transmitter-converter (r-f head).
- Modulator unit.
- Receiver amplifier.
- Rectifier unit.
- Control unit.
- Azimuth calibrator.
- Repeater indicator.
- Connector cables, viewing hoods, and dummy indicator.

R-F head.--The inclusion of the radio-frequency head in the AN/APS-3 design overcomes one of the main drawbacks to efficient operation so pronounced in the earlier ASD version of the equipment. In the ASD (for purposes of comparison) the generated pulses were fed from the modulator to the antenna through a long, complicated wave guide which both from a mechanical and electrical standpoint was inefficient. As a result, the effective power was attenuated before reaching the antenna. In the AN/APS-3, part of the transmitter and receiver circuits are moved up to the r-f head, which is located close to the antenna. The transmitted pulses pass through a short wave guide direct to the antenna with no appreciable attenuation. Received echoes likewise pass through the short wave guide to the preamplifier located in the r-f head, where they are demodulated before being passed to the remainder of the equipment through flexible coaxial cables; thus, the long, inefficient wave guide is eliminated.

B. ANTENNA SCAN AND SCOPE PRESENTATION. 1. Beam coverage.--The antenna unit consists of a parabolic reflector and antenna feed assembly connected to a motor-driven gear box so that the antenna and parabola are mechanically and electrically caused to sweep an arc of 160° ahead of the aircraft at a repetition rate of about 35 cycles per minute. The parabola nods over an arc of 2° from the horizontal during the sweep. The antenna-parabola assembly projects the r-f pulses in a conical beam of approximately 5°. As the antenna sweeps the r-f beam through its 160° of arc (80° either side of center), a nodding action is imparted to the parabola at the completion of each sweep, resulting in a nod downward of 2°, a sweep of 160°, a nod upward of 2°, the return sweep, etc. Although the azimuth sweep angle is 160°, only 150° of this angle is calibrated, as 5° are used at each end of the sweep for the nodding action.

2.
Type of scan.--Two scopes are provided; one is the main scope for the operator's use, the other, the auxiliary scope for pilot or navigator use. Both scopes are of the B-scan type. (When either scope is removed from the installation, it must be replaced by the dummy indicator so as not to disturb circuit characteristics.)

Figure 2-48.--In APS-3 the side-to-side motion of the trace produces a rectangular scope display on the circular scope face.

Figure 2-49.--Variations in the brilliance of the laterally moving trace activate the fluorescent coating of the cathode-ray tube indicator producing the target patterns. Above is shown a typical land-mass pattern as it appears when the equipment is set to the 10-mile range.

The effective 150° of lateral scan of the antenna assembly is translated on the scope into a lateral motion of a vertical trace which moves from side to side in step with the antenna motion, to cover a rectangular area on the scope face. See figure 2-48. The conversion of an angular antenna scan into a rectangular scope display introduces certain disadvantages due to distortion effects (see B-scan distortion, figs. 2-9, 2-10, 2-11, and 2-12), but the advantages of spreading the display of targets close in far outweigh the inherent distortion disadvantages. Reception of target echoes causes the moving trace to brighten momentarily in its lateral sweep. The bright spots on the trace appear on the scope screen at a range and azimuth corresponding to the target's range and bearing from the searching aircraft. Because of the persistence of the fluorescent coating of the scope screen, target echoes displayed on the scope remain visible for a short period of time after the trace has swept by. Each succeeding trace sweep renews the target echo brilliance. Single ship or aircraft targets are displayed as small spots of light. Land masses, islands, coast lines, etc., show up as relatively large patches of light. See figure 2-49.

3.
Range settings.--At the control unit, when the beacon-search switch is set for search operation, the equipment can be used on four ranges of 4, 10, 40, and 80 nautical miles. When set for beacon operation, the ranges are 4, 10, 40, and 120 nautical miles. As the trace moves from side to side on the rectangular screen, bright spots appearing at graduated intervals along the trace leave a trail behind them to form range markers. These range markers assist in judging the approximate range of target echoes. See figure 2-50.

Figure 2-50.--The horizontal lines on the scope patterns shown here are brilliant lines of light which enable the operator to judge target range.

Figure 2-51.--A typical beacon signal appearing on the 40-mile range of the APS-3.

4. Bearing.--To indicate bearing, the rectangular screen is covered with a plastic scale containing seven scribed vertical lines. These lines divide the 150° screen into three 25° sections either side of zero degrees at the vertical center. The zero-degree line represents the aircraft's heading.

C. SPECIAL FEATURES. A number of special design features contribute to the flexibility of use of AN/APS-3. With the exception of one--the azimuth calibrator--all these special features are controlled from the panel of the control unit. They are:
- Antenna tilt.
- Beacon reception.
- Expanded sweep (with electronic lubber line and azimuth calibrator).
- Phantom target.

1. Antenna tilt.--By means of a toggle switch control located on the control unit panel, the beam from the antenna can be tilted through a vertical range 8° above and 8° below the longitudinal axis of the aircraft. Degree of tilt is indicated on the control unit tilt meter. Tilt control enables the beam, with its 2° nod, to cover an area 24° in the vertical plane.

2. Beacon reception.--Controls are provided (beacon-search switch and manual timing) which permit the equipment to be retuned to the X-band radar beacons for reception of radar beacon signals.
When the equipment is switched from search to beacon operation, the manual timing control must be employed to retune the receiver away from the radar's pulse-and-echo frequency to the frequency of the radar beacon (AN/CPN-6). When this is done, radar target echoes do not appear on the scope screen. Only radar beacon signals from beacon stations within range are displayed. See figure 2-51.

3. Expanded sweep.--When the expanded sweep is employed, the scope presentation appears over the entire circular area of the scope screen instead of in the form of a rectangle. This operation "magnifies" or expands the center area of the scope so that only the central 60° portion of the rectangular pattern is displayed. With the "expand" feature being used, an electronic lubber line flashes on the screen each time the antenna scanner passes the dead-ahead position. See figure 2-52. An azimuth calibrator control can be employed to shift the position of the electronic lubber line. When the electronic lubber line is aligned with a target echo, the dial of the azimuth calibrator will indicate the bearing of the target with respect to the heading of the aircraft.

Figure 2-52.--Here the range-coded radar beacon signals appear on the expanded sweep scope display. Positioning of the electronic lubber line allows the relative bearing of the beacon to be accurately determined.

Figure 2-53.--At the left a convoy is shown at about 16 miles on the 40-mile range setting of the AN/APS-3. At the right, the same convoy is shown after the range has closed and the 10-mile range setting is used.

III. TACTICAL EMPLOYMENT

A. SEARCH. When employing AN/APS-3, the prime consideration to be borne in mind is that the narrow 5° beam, like a searchlight, "illuminates" only those targets that fall within the path of the beam. Whether or not the radar beam will strike a target depends upon:
- The altitude of the search aircraft.
- The range of the target.
- The antenna tilt.

1.
Types of targets.--Depending upon the character of the radar echo spots, the screen display can be interpreted to indicate land masses, islands, coast line, harbors, rivers, airports, and surface ships. Land is indicated on the screen as a relatively large bright patch of light. Coast lines and harbors appear distinctly outlined, but, because of the distortion effect inherent in the B-scan, the scope display of such targets is not a true replica of the actual coastal outline. Rivers and lakes show up as dark areas within the bright land mass patches. Ships in convoy, at long ranges, produce target spots which appear on the scope as one large spot or as a group of indistinct spots closely bunched together. As the range is closed, however, and the shorter ranges of the equipment used, the group tends to break up into separate, distinct target spots. See figure 2-53. At extremely close ranges, a single ship target echo will appear on the screen as a long, horizontal line of light, ragged in appearance.

2. Altitude/range.--With the aircraft in level flight, the antenna of AN/APS-3 points to the horizon when the tilt meter indicates zero tilt. Depending upon the altitude, then, intermediate distances to the horizon can be covered only when the antenna is tilted downward, or when the altitude is reduced.

3. Tilt control.--Because of the comparative narrowness of the beam (5°), tilt control of the AN/APS-3 is critical. As a general rule, operation of AN/APS-3 requires that tilt control be employed almost continuously in order to insure adequate coverage of the area being searched. This is particularly true as the range settings are altered.
For instance, for any given altitude, low angles of tilt will be required to search the area immediately ahead of the aircraft when the 4-mile range setting is employed; but if the equipment is set to any one of the higher range settings, the tilt must be moved upward in order that the radar beam may encompass the more distant areas. Target definition can be improved in many cases if, after the target appears on the screen, the tilt control is employed to "aim" the beam directly on the target area. Then, as the range is closed, successive adjustments of downward tilt are necessary in order to keep the target in the beam.

B. NAVIGATION. Because the AN/APS-3 scope screen presentation is maplike in character, radar navigation is relatively simple of accomplishment. Again, the only disturbing factor present is the distortion inherent in the rectangular presentation of a fan-shaped area.

1. Drift determination and homing on targets.--When either a radar target or beacon signal is homed on, drift will be apparent if the signal shifts from the central lubber line. The degree of drift can be determined by adjusting the flashing lubber line to bisect the target echo. Degree of drift is read from the dial of the azimuth calibrator. To home on a selected target, the aircraft is maneuvered until the target appears on the centrally scribed (0°) lubber line. After a period of time, the aircraft's drift will be indicated by the movement of the target away from the zero-degree lubber line, and the course is corrected to compensate for the drift. This course correction will put the target echo on the opposite side of the lubber line from that at which it was observed at the time of drift determination. With the new heading maintained, the electronic lubber line can be set to bisect the new target position. As the aircraft is homed on the target, the echo will move down the electronic lubber line (parallel to the zero-degree line) as the range is closed.
Homing on one of many closely positioned targets is simplified by the use of the "expand" function of AN/APS-3. When this function is employed, the central 60° of the full 150° sweep is displayed over the entire circular area of the scope screen and, in effect, spreads apart or magnifies the target display.

2. Homing: Beacon.--When set for beacon operation, AN/APS-3 will display radar beacon signals from beacons located as far away as 120 miles. To receive radar beacon signals, however, the AN/APS-3 must be retuned to the beacon frequency and the aircraft pointed in the general direction of the known beacon stations. Also, to pick up beacon signals at 120 miles the altitude must be such that the beacon station will be within the aircraft's radar horizon. Received beacon signals appear on the scope screen as a vertical series of short horizontal lines which are range coded. See figure 2-51. At the extreme ranges, the lines will tend to merge together but, as the range is closed, they will separate to show the range coding by which the radar beacon station is identified. Homing on a radar beacon signal is similar to other radar homing techniques.

C. BOMBING. Low-altitude bombing of radar targets presupposes the employment of a homing procedure such as that previously described. In this connection, deviations from the collision course must be watched for, particularly within the last mile. When the expanded scan is used, the echo will broaden considerably and its center is the only reference point of value. Tilt and gain controls must be used constantly to get best target definition. With normal gain, the target will leave a slight "tail," in appearance much like a comet's tail, as it moves down the screen. This can be used in determining a collision course, since the tail must be parallel to the zero line if target bearing remains unchanged.

AN/APS-4

Figure 2-54.--With the APS-4 set for search its antenna executes a two-line scan.
When the equipment is set for intercept the scanned area is broadened vertically, the antenna executing a four-line scanning pattern. AN/APS-4 I. FUNCTION A. PRIMARY PURPOSE. AN/APS-4 is an airborne X-band radar employed mainly for search. Its simplified construction allows it to be used in almost any type of aircraft. B. SECONDARY USES. AN/APS-4 contains control and design features which permit its use for aircraft interception. In addition, it can be used for radar navigation, radar beacon homing and radar bombing. When used with appropriate IFF equipment, it will furnish display of identification signals. II. DESCRIPTION A. MAIN COMPONENTS. The AN/APS-4 consists of the following pieces of equipment, plus interconnecting cables: - Transmitter-receiver. - Control box. - Indicator (two indicators for multiplace aircraft). - Indicator-amplifier (two indicator-amplifiers for multiplace aircraft). - Cable junction box. 1. The antenna scanner, transmitter, receiver, and rectifier power supply are mounted in a faired, pressurized bombshell similar to the Mark 17 bomb. This entire unit is supported in a bomb rack for either wing or under-the-fuselage mounting. In an emergency situation during flight where the security of the radar equipment is endangered by the possibility of its falling into enemy possession, the entire bomblike container may be jettisoned. 2. Only the control unit, indicator scopes, indicator amplifiers, and junction box are mounted within the aircraft. B. ANTENNA SCAN AND SCOPE PRESENTATION. 1. Beam coverage.--a. The antenna beam is a 6° cone and may be tilted by manual control from 10° above, to 30° below the longitudinal axis of the aircraft in which it is installed. b. On search, the 6° conical antenna beam scans through 150° in azimuth and executes a two-line scan, with a 4° nod, to cause the beam to cover 10° in a vertical plane. c. On intercept, the beam executes a four-line scan, with 6° between lines, to cover a vertical plane of 24°.
See figure 2-54. Figure 2-55.--A scope presentation of the APS-4 when set for the 20-mile range. The dark area represents a bay while the light area represents the land on either side. Note the surface ship targets in the convoy proceeding up the bay. --61-- Figure 2-56.--The APS-4, when set for intercept, produces a double, laterally moving trace. An aircraft target is shown as a double dot. 2. Type of scan.--a. B-scan.--When set for search, the B-type scan presents target images on a rectangular (3-inch) screen. Single targets are displayed as single spots of light. Land masses, coast lines, islands, etc., are displayed as relatively large patches of light having the general contour of the actual area being scanned. The distortion typical of B-type scans is present in the AN/APS-4 scope display. See figure 2-55. A plexiglass filter located over the face of the indicator cathode-ray tube contains scribed vertical lines at intervals of 25° either side of the zero-degree or lubber line, to assist in judging target bearing. The position of the search aircraft is considered to be at the bottom of the central (zero-degree) scribed line. When set for beacon reception the AN/APS-4 indicators display the range-coded beacon signals to indicate the identity, range, and relative bearing of the beacon transmitter. b. H-scan.--On intercept, the scope display, figure 2-56, is of the H-type scan. A target within the field of the antenna beam (150° in azimuth and 24° in the vertical plane) will produce an echo which appears on the indicator screen as two dots of light. The left dot is termed the echo pip, the right dot is termed the shadow pip. The position of the echo pip on the screen indicates the target's range and relative bearing. The position of the shadow pip relative to the echo pip shows the target's elevation. The intercept function of AN/APS-4 permits the interceptor aircraft to be maneuvered so as to put the target aircraft dead ahead and in the line of fire. 3.
Range settings.--AN/APS-4 has four operating ranges which are identical for both search and beacon operation: 4, 20, 50, and 100 miles. Range marks, instead of being horizontal lines of light as in other B-type scans, are a series of brilliant light spots which appear only at the extreme right vertical edge of the scope screen each time the antenna scanner reaches the limit of its sweep to the right. See figure 2-57. Figure 2-57.--APS-4 has four operating ranges of 4, 20, 50, and 100 miles. The range marks produced by each of the ranges are shown above. --62-- Figure 2-58.--The tilt of the APS-4 transmitted beam can be controlled manually from 10° above the horizontal to 20° below the horizontal. C. SPECIAL FEATURES. 1. Antenna tilt.--By means of a control located on the panel of the control box, the antenna reflector may be tilted so as to cause the 6° beam to tilt from 10° above to 20° below the aircraft's line of flight. See figure 2-58. The nodding action of the antenna is independent of any setting of tilt. In other words, for any setting of antenna tilt, the antenna will execute a two-line, 4° nod on search, and a four-line, 6° nod on intercept. Figure 2-59.--Another view of the convoy proceeding up the bay as shown on the APS-4. Here the antenna has been tilted downward so as to focus the beam on the ship targets. As a result the land mass pattern loses some of its definition. 2. Warning light.--A warning light located on the control box may be set so as to flash when a target echo appears on the screen. During those periods when the operator may be unable to devote his entire attention to the indicator screen, the warning light may be set to indicate the presence of targets. Figure 2-60.--Ships in convoy as seen on the 4-mile range setting of the APS-4. III. TACTICAL EMPLOYMENT A. SEARCH; B. NAVIGATION; C. BOMBING. Tactically, the AN/APS-4 may be employed in the same way and for the same purpose as the AN/APS-3 (see p.
55) with the exception that the additional feature of intercept is provided. Figure 2-61.--Coast line as it appears on the 20-mile range setting of the APS-4. --63-- D. INTERCEPT. On intercept, the interceptor aircraft is maneuvered so as to get the target or echo pip on the central (zero-degree) lubber line with the shadow pip horizontally aligned with the echo pip. So long as the target aircraft remains in the field of the antenna beam the target pips will appear, regardless of evasive tactics employed. As the range is closed, both pips will move downward on the screen until visual contact can be made with the target aircraft. Except under ideal conditions where the target aircraft is retained within the beam scan of 24° in elevation, the intercept feature of AN/APS-4 is limited in its use and as a rule is never employed for long periods of operation. --64-- [BLANK PAGE] --65-- AN/APS-6 --66-- AN/APS-6 SERIES NOTE.--AN/APS-6 and AN/APS-6A are identical in operation with the exception that AN/APS-6 is designed so that it may accommodate two indicator scopes instead of one as in the AN/APS-6A. I. FUNCTION A. PRIMARY PURPOSE. AN/APS-6 is an airborne (night-fighter) radar designed expressly for the dual function of search and interception. It operates in the X-band of radar frequencies. B. SECONDARY USES. AN/APS-6 on search develops a B-scan scope display which can be employed for purposes of navigation, i. e., making landfalls, recognizing land masses and islands, and selecting surface targets on which to home. It can also be used for radar beacon homing and (when suitable identification equipment is employed in connection with it) for the display of IFF signals. II. DESCRIPTION A. MAIN COMPONENTS. The main units of AN/APS-6 are: - Antenna scanner unit. - Transmitter-converter unit. - Indicator unit. - Control unit. - Auxiliary control unit. - Modulator. - Rectifier-power supply unit. - Receiver amplifier. 1-2.
The antenna scanner unit with its associated transmitter-converter is faired into the right wing of the aircraft. 3-4-5. Inside the aircraft, the indicator unit is mounted in the instrument panel while the control unit and auxiliary control unit are mounted convenient for pilot operation. 6-7-8. All other units (requiring no adjustment or manipulation) are installed in remote sections of the fuselage. B. ANTENNA SCAN AND SCOPE PRESENTATION. 1. Beam coverage.--On search, the antenna mechanism imparts an outward-inward spiralling motion to the rotating (1,200 r.p.m.) antenna scanner, to cause the 6° beam to cover a conical area of 120° ahead of the aircraft's line of flight. See figure 2-63. When the equipment is set to the gun-aim function, the spiralling motion is removed and the antenna revolves to produce a fixed cone of 15° along the aircraft's line of flight. See figure 2-64. 2. Type of scan.--a. B-scan.--AN/APS-6, when set for search, will produce a typical B-scan type of scope presentation (see B-scan, fig. 2-8) on its 25- and 65-mile operating ranges. Depending upon the altitude of the aircraft and the range setting of the AN/APS-6 controls, surface and air-borne targets will be displayed in the normal B-scan manner if such targets are within the field of the spiralling beam. See figure 2-65. b. O-scan.--When the range switch is set to either the 1- or 5-mile range, the scope presentation is automatically changed from a B-scan presentation to the O-scan or double-dot display. The O-scan presentation is employed mainly to display aircraft target echoes to indicate the relative range, azimuth, and elevation of the target. See figure 2-66. Figure 2-63.--When set for search, the APS-6 antenna spinner covers a conelike area of 120° ahead of the aircraft. --67-- Figure 2-64.--When set for intercept, the conelike area is reduced to 15°. Surface targets such as land masses, islands, ships, etc., if they are within the 120° field of
the beam, will appear on the scope screen somewhat as in the B-scan type of presentation with the exception that all targets are resolved into the double-dot presentation and therefore will appear as an indistinct so-called double-exposure pattern. Other aircraft within the field of the beam will appear in the double-dot manner at their true azimuth positions on the screen. c. G-scan.--When the auxiliary control is set from search to gun aim, the scope presentation is changed to a G-scan (see G-scan, p. 17). The target echo, which is represented as a single dot on the scope, will appear only when the target is within the beam at a range of 1,000 yards or less. The position of the spot with respect to the center of the scope face denotes the target's range (within 1,000 yards), its bearing and its relative elevation. As the range between interceptor and target aircraft is closed, the target dot grows horizontal wings. When the interceptor aircraft is maneuvered to place the target spot at dead center, and the range is closed to make the tips of the wings touch the scribed index lines, the target is boresighted at a gun-firing range of 250 yards. See figure 2-67. 3. Altitude mark and sea necklace.--In addition to target echo display, two separate so-called sea returns are produced. One, a horizontal fuzzy line of light appearing upward from the bottom of the screen, is caused by direct reflection of echoes from the sea immediately below the interceptor aircraft. Its position varies with the altitude of the aircraft and provides a rough indication of absolute altitude. It is called the altitude mark. The other sea return appears as indistinct, changing lines of light (commonly referred to as the "sea necklace," or "lace curtain"), arranged in a curved pattern which seems to hang from the top of the scope screen.
It represents the return of echoes from the sea ahead of the aircraft, increasing and diminishing in size in step with the outward and inward spiralling motion of the antenna scanner. Its maximum-minimum size is a function of aircraft altitude and the range setting of the equipment. See figure 2-68. --68-- In cases where target echoes above the sea return tend to become masked by the sea return, a "sea suppress" control enables the operator to cut out the central lower portion of the lace curtain. 4. Range settings.--The control unit contains a master range switch which allows the equipment to be set to any one of four operating ranges. They are 65 and 25 miles for search, 5 and 1 miles for intercept. When set for beacon reception, the same range settings denote ranges of 1, 5, 25, and 100 miles. The auxiliary control unit contains a two-way toggle switch which throws the equipment from the 1- or 5-mile intercept ranges to the 1,000-yard gun-aim range. (When using the equipment for search or intercept, the auxiliary control toggle switch must be set to search.) The AN/APS-6 scope is not furnished with a calibrated range scale. Target ranges for all four range settings must be judged with respect to the appearance of the target echo up from the bottom of the rectangular scope screen. See figure 2-69. The position of the interceptor Figure 2-65.--APS-6 search patterns. Top, left, 65-mile range shows convoy, island and coast. Top, right, shows same convoy and island at 20 and 15 miles, respectively. Lower left, same targets on the 25-mile range. Lower right, 25-mile range; convoy at 3 miles, island at 8 miles. --69-- Figure 2-66.--On either the 1- or 5-mile range setting of APS-6 an "O" type scan presentation is produced. At the left the double-dot presentation shows that the target is about 40° port and below the intercepting aircraft. (Above, center.) The intercepting aircraft has been maneuvered to bring the target dead ahead.
At the right the target is still dead ahead but the range has been closed. aircraft is assumed to be at the lower center of the rectangular screen. The top of the screen represents ranges of 1, 5, 25, or 65 miles, depending upon the setting of the range switch. For beacon-signal reception the maximum range is 100 miles. C. SPECIAL FEATURES. Certain shortcomings in design and operation of the earlier AIA equipment (forerunner of the AN/APS-6 series) dictated the need for a more versatile night-fighter radar. The AN/APS-6 contains the required improvements, as follows: 1. The transmitter-converter unit is pressurized, permitting operation of the equipment at altitudes as high as 30,000 feet. (AIA was not pressurized.) 2. Location of the transmitter-converter in the radome, close to the antenna, permits the use of a short wave guide with a resultant increase in operating efficiency. (The AIA used a long wave guide from the antenna to the transmitter in the fuselage, which introduced signal attenuation.) 3. Four search-intercept operating ranges of 1, 5, 25, and 65 miles are provided. 4. Automatic frequency control of beacon tuning is provided. (In the AIA, tuning to beacon signals had to be performed manually.) 5. AN/APS-6 contains a wing calibrate adjustment which permits the pilot initially to adjust the size of the wings of the gun-aim scope presentation. The adjustment consists of flying the night fighter 250 yards behind a friendly plane (as determined by reference to the optical gun sight) and adjusting the size of the gun-aim wings on the target spot until the wing tips just meet the vertical lines scribed on the indicator face. (AIA did not contain this adjustment.) 6. In the AN/APS-6 O-scan presentation, the separation of the double dots is such as to produce a sharp, easily definable indication of target elevation. (The AIA lacked a sensitive display in the double-dot presentation, which made evaluation of target elevation difficult.) III.
TACTICAL EMPLOYMENT A. SEARCH. When specifically employed for search, AN/APS-6 enables the pilot of a night-fighter aircraft to observe all manner of targets within a wide area ahead of the aircraft up to the maximum range of the equipment. Large ships have been picked up as far away as 50 miles. Land targets are seen up to 65 miles in range, and aircraft are regularly seen to the limit of the 5-mile range. B. INTERCEPT. AN/APS-6, although suitable for use in general search operations, is particularly well adapted to night interception of hostile aircraft. When specifically employed for intercept operations of this nature, the fighter aircraft, after being air-borne, is vectored by the intercept officer at the controlling station (CIC of the fighter director ship or ADCC of the shore control station), to put the intercepting aircraft on the enemy's tail. --70-- During the closing phase, the interceptor aircraft can employ the search function of AN/APS-6 on the 65-mile or 25-mile ranges to assist in picking up the target. When the range has been closed to within 5 miles, the range setting is switched to the 5-mile range and, as the range is closed to within 1 mile, to the 1-mile setting. On the 5- and 1-mile settings, the scope presentation presents the target in the characteristic double-dot style to indicate the target's relative range, bearing, and elevation. When climbing or diving on a target, the latter may appear to be at the same elevation as the interceptor since it is in line with the Figure 2-67.--When APS-6 has been set to the gun-aim function a type-G scope presentation is produced. The target image will appear as a small dot whose relation to the center of the scope indicates azimuth and elevation of the target. By maneuvering the aircraft the spot can be made to appear at the center of the scope. As the range closes the spot will grow wings.
When the wings touch the scribed lines on either side of the horizontal center of the scope the target aircraft is at firing range. --71-- Figure 2-68.--As the spiralling motion of the APS-6 antenna causes the beam to strike the sea ahead of the aircraft, a series of reflections are returned to the aircraft which cause the characteristic sea return called "sea necklaces." Another series of reflections from the sea directly below the aircraft produces an "altitude line." interceptor's longitudinal axis. However, because of differences in the course and speed of the two aircraft, prolonged maneuvers of this sort may result in: (a) moving the field of the radar beam off the target with consequent loss of target indication; (b) overshooting the target and disclosing the interceptor's presence, thus losing the element of surprise; or (c) the need for excessive corrective maneuvers requiring violent course and speed changes. Climbing or diving maneuvers employed by the interceptor to put him on the tail of and at the same level as the target aircraft should be performed in steps, leveling off at the completion of each step to observe the target's new relative elevation. Figure 2-69.--The APS-6 can be set for any one of 4 ranges. The scope face, however, is not graduated in terms of range and target distances must be estimated as shown above. C. GUN AIM. The gun-aim function of AN/APS-6 should not be used until it is certain that the target aircraft is within 1,000 yards in range and at the same elevation as the interceptor. At 1,000 yards, or less, the target will be within the radar beam and its echo will show on the scope screen to indicate its relative range, bearing, and elevation.
If the range to the target should open because of too sudden decrease in speed of the interceptor aircraft or increase in speed of the target aircraft, or if evasive maneuvers of the target aircraft take it out of the conical beam, the spot of light will appear to snap quickly to the center of the screen where it will remain motionless. In such cases, the gun-aim switch must be set to search in order that the interceptor aircraft may again observe the target on the 1- or 5-mile intercept ranges and maneuver his aircraft into gun-aim range. During the period of interception on either the 1- or 5-mile ranges, an attempt should be made to ascertain the target's speed so that the range may be closed gradually. Too fast an "overtake" may put the target aircraft out of the cone of fire, or it may result in actual collision. D. NAVIGATION. As is the case with other air-borne radars employing a B-scan scope presentation, radar navigation with AN/APS-6 is a matter of recognizing surface targets or land masses as such, and employing the information thus obtained for the purpose of navigating from one point to another, making landfalls, or establishing a course with respect to recognizable reference points. --72-- AN/APS-6 displays targets according to the B-scan type of scope display only on the 65- and 25-mile ranges, when set for search. When the equipment is set for radar beacon reception, the B-scan displays beacon signals (when within range of the beacon transmitter), as a series of range-coded spots which appear on the scope correct as to range and bearing. The coded arrangement of the signal presentation establishes the identity of the beacon transmitter. See figure 2-70. Figure 2-70.--Typical beacon coded signals, 100-mile range. --73-- [BLANK PAGE] --74-- GENERAL APPLICATIONS OF RADAR I.
INTRODUCTION Because they lend themselves to a diversity of uses other than those for which they were primarily designed, air-borne radars have become one of the most effective of tactical weapons. Even under the most perfect of flying conditions, the information they are capable of providing is often superior to that obtained by visual means; in other cases, radar becomes a valuable source of supplementary information with which to cross check information received visually. Under adverse conditions of reduced visibility or total darkness, radar is often the only source of information available to pilot, navigator, and bombardier. Regardless of the specific purpose for which a given air-borne radar has been developed, there are certain factors concerning its tactical employment which are common to all air-borne radars. The following section describes certain features of operation and employment which apply generally to all search-type airborne radars. II. RECONNAISSANCE AND PATROL A. GENERAL. 1. Conditions of visibility.--Radar is especially effective under conditions of low visibility, but even in good weather it should be used at all times for search. Though it is true for practical purposes that, with visibility unlimited, the human eye can reach out nearly as far as the radar, it is not true that the human eye can give as efficient coverage of an area. Small targets that blend with the color of the sea are likely to be missed by the eye, but the ability of radar to detect targets is not affected by camouflage. Targets into the sun are difficult to see, particularly when the sun is low on the horizon; and a slight haze sometimes cuts down visibility without being noticed by the crew. Radar functions regardless of conditions of visibility. Visual search depends upon the continued alertness of the aircraft's crew at all times; radar will detect targets whenever they are within radar range.
On the other hand, crew members should not abandon visual alertness because of the use of radar. The two, used together, insure the most efficient coverage of an area during search. 2. Range.--The factors which affect range are: (a) altitude, (b) power, and (c) to a limited degree, equipment break-down ceilings. a. Altitude.--The maximum effective range of the radar equipment will be dependent upon the aircraft's altitude relative to the range settings of the equipment. In other words, even if a radar is set to operate on its 100-mile range, it will not be able to detect targets at this range when its flight altitude is so low that the radar horizon extends only to a maximum of 50 miles. The chart, figure 2-17, on page 19 gives the radar line-of-sight distances to the horizon for various altitudes. b. Power.--The power of an air-borne radar equipment constitutes one of the prime factors which limit operating range. Echo reception at long ranges is dependent upon the ability of the equipment to transmit pulses which will be powerful enough to cover the distance from the radar to the target and to produce an echo of sufficient strength to complete the return trip. In this connection, therefore, once the maximum range as defined by the power limitations of a particular radar equipment itself has been reached, a further increase in range will not be effected merely by increasing the altitude of the searching aircraft. c. Radar equipment operation ceilings.--Because certain parts of air-borne radars operate at extremely high values of voltage and are subject to electrical break-down at the low atmospheric pressures encountered at high altitudes, pressurizing safety precautions have been incorporated in the design of some equipments to maintain them at pressures comparable to those at the safe operating altitudes. The altitude limitations or "break-down ceilings" for the several types of air-borne radar equipments are listed in column 8 of the table on page 145. B.
AREA SEARCH. In planning radar search operations a medium range setting of the equipment should be chosen so as to insure adequate echo reception from small targets. The altitude at which the search aircraft is to be flown in order to provide this range coverage should be high enough to insure that the radar horizon is just slightly beyond the maximum range employed. --75-- The search aircraft, if equipped with either the L-type or PPI-scan radar, will sweep a search swath whose width will be twice the effective range of target detection (see fig. 2-71). In laying out area searches when using the B-scan type of equipment, however, it must be remembered that here the scanner sweeps an area extending only to 75° either side of the dead-ahead position. This results in a search swath which is somewhat narrowed in width (see fig. 2-72). The swath thus produced by the field of coverage (making allowances for a certain amount of overlap) will dictate the number of legs to be flown in order to cover a specified search area. For example, a search mission may involve the use of an aircraft fitted with AN/APS-3 search radar. Consistent, reliable target pickup, it may be found, is obtained on the equipment's 40-mile range setting. The line-of-sight table (p. 19) indicates that an altitude of 1,000 feet will produce a radar horizon at a distance of about 38 miles. This is safely within the 40-mile range setting. A conservative range at which a wide variety of targets will be displayed by the equipment is about 35 miles. This is still within the 38-mile radar horizon. Allowing for the fact that the oscillating antenna sweeps through 150° of arc (75° each side of dead ahead) and thus does not cover the full 35-mile distance either side of the aircraft, the maximum side area may be considered to be approximately 10 percent less, or about 31.5 miles.
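The figures in this example can be checked with a short sketch. The 1.23 · √h rule of thumb for the radar horizon (altitude in feet, distance in nautical miles, assuming standard 4/3-earth refraction) is a common modern approximation, not taken from this manual's chart; the 10-percent side-coverage allowance is the flat factor the text itself uses:

```python
import math

def radar_horizon_miles(altitude_ft):
    """Approximate radar horizon: 1.23 * sqrt(h), with h in feet and the
    result in nautical miles (standard 4/3-earth refraction model)."""
    return 1.23 * math.sqrt(altitude_ft)

# Worked example from the text: at 1,000 feet the horizon is about 38 miles,
# safely beyond the 35-mile range chosen for reliable target pickup.
horizon = radar_horizon_miles(1000)

# The 150-degree sweep does not cover the full distance abeam; the text
# allows a flat 10 percent reduction for coverage to either side.
effective_range = 35                    # miles, conservative pickup range
side_coverage = effective_range * 0.9   # about 31.5 miles either side

print(round(horizon, 1), round(side_coverage, 1))
```

The swath either side of track, not the horizon, is what fixes the spacing of successive search legs.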
This means that the air-borne radar is safely covering an area ahead and to either side of the aircraft to a distance of approximately 31 miles. If the search area to be covered is some 360 miles on a side, then, allowing for a small amount of overlap for successive legs, some six legs will have to be flown in order to cover the entire area. See figure 2-73. In cases where patrol craft are equipped with loran, the problem of area search is greatly simplified if full use is made of the loran charts in laying out the various legs of a search area (see Tactical Employment of Loran, p. 140). --76-- C. SECTOR SEARCH. Planning sector searches involves the employment of the same planning procedure as for that of area searches. The maximum range of detection is estimated for the types of target expected, and the appropriate flight altitude is determined for such range. In addition, however, the sector search depends upon meeting the coverage requirements-- 1. Where the sector angle is specified: The length of the sides of the triangle will be determined by the search swath of the air-borne radar. 2. Where the distance to be flown from carrier or base is specified: Keeping in mind the restrictions imposed by the search swath area, a sector angle must be chosen to provide complete coverage of the sector search area. Figure 2-74 illustrates condition 1, wherein for a given sector angle the range from base must be calculated to avoid excessive overlap on the one hand and an uncovered area on the other. Figure 2-75 illustrates condition 2. Note that for a given distance from carrier or base the sector angle must be such that complete coverage of the area may be accomplished. Figure 2-73.--When planning a search of a specified area, consideration must be given to the width of the search legs so that the number of legs to be flown can be calculated to adequately cover the entire area, allowing for a slight amount of overlap between adjacent legs. D. ANTISUBMARINE SEARCH.
This type of search proceeds much as any other except that the assumed radar horizon will be less and the searching altitude lower. At close ranges, most radars can detect submarines when only their periscopes show. Because it is possible for submarine targets to detect the presence of search aircraft by tuning in on the radar pulses, it is advisable, once an unidentified target is detected, to employ the search radar intermittently. Figure 2-74.--Where a given sector angle for a search is specified in the operating plan, the altitude of the aircraft and the length of the legs must be determined to provide a search swath which will adequately cover the area to be searched without producing incomplete coverage. Figure 2-75.--Where the range of a given area is specified, the altitude of the aircraft and the angle of successive legs must be determined to provide a search swath which will adequately cover the area to be searched without producing incomplete coverage. --77-- Figure 2-76.--A method, as described in the text, for determining correction for drift by radar. III. NAVIGATION A. DRIFT. 1. Drift compensation methods.--When an aircraft is directed toward a target and the heading is thereafter held constant, a gradual change in the bearing of the target echo as observed on the scope indicates drift. If the echo moves to the left of the lubber line, the aircraft's drift (not the target's drift) is to the right, and vice versa. The direction of drift is easily determined, but compensating for drift is more difficult. Two radar drift compensation methods are described, as follows: a. Method 1.--To determine the amount of drift, a radar target is selected and the aircraft is maneuvered so as to place the target on the lubber line, preferably near the top of the scope screen, and the range to the target is noted.
Then for a selected distance, the course is maintained, during which time (if drift is present) the target, in addition to moving down the screen, will also move to one side or the other of the lubber line, depending upon the direction of drift. By multiplying the observed drift in degrees by the distance in miles yet to be flown to intercept the target, then dividing the product by the distance flown during the check period of observation, a result, in degrees, will be obtained which indicates the required change in heading that must be made in order to compensate for drift and to maintain a collision course with the target. (See fig. 2-76.) b. Method 2.--(This method is not adapted to L-scan (ASB) radar equipment.) If an aircraft is headed directly toward a target and a constant course maintained, drift causes the radar echo to change in bearing. The amount of change depends upon the range at which the aircraft was first headed toward the target and the distance subsequently traveled. By employing the graph (fig. 2-77), the angle of drift can be determined. The procedure for using this graph is as follows: Note the range when the target is dead ahead. Maintain this heading until the target has changed in bearing by 5°. Then note the target's new range. Along the bottom of the graph, locate the vertical line corresponding to the range at which the 5° drift was indicated. Then along the left edge of the chart locate the horizontal line corresponding to the initial range noted. At the point of intersection of the selected horizontal and vertical lines the drift angle is indicated by the degree line that passes through this point of intersection, or by an interpolated line estimated by eye to the nearest whole degree. As an example of the use of this chart: A target is observed dead ahead at a range of 40 miles. A constant heading is held until the target has changed in bearing by 5°. At this point, the range to the target has closed to 25 miles.
--78--

Employing the graph, the 25-mile vertical line along the bottom of the graph is selected, and along the left edge of the graph a horizontal line corresponding to the original 40 miles is located. It is found that these two lines intersect at a point on the graph which is crossed by the 8° drift line. This indicates the aircraft's angle of drift. The successful use of this method depends upon the operator's ability to read fairly accurately a bearing of 5° to left or right of the zero-degree line. With the AN/APS-3, the expanded sweep should be employed and the flashing lubber line set to 5° left or right, depending on the direction of drift. As soon as the echo is centered under this flashing lubber line, the second range reading is taken. With the AN/APS-15, the aircraft is maneuvered until the target echo is placed dead ahead. The direction in which the echo drifts is noted and the scribed azimuth index line is rotated 5° in the direction of drift. Again range is noted as soon as the target is centered under this new position of the azimuth index line. With map-type radars that do not have a flashing lubber line or a movable azimuth index line, the 5° position must be estimated. On all map-type radars, it is necessary to use the center of the echo as the point from which to determine the bearing. A broad land mass is not a suitable target unless it has a projection or indentation that produces a radar target which may be employed as an easily noted point of reference. Ship targets are not reliable for determination of drift since a change of bearing may be caused either by wind or ship motion or both. 2. Surface wind checks.--There are many times during flight over open water when no targets are in sight. In such cases, the two methods of drift determination above described cannot be successfully employed, since they require the use of a fixed target.
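Both drift methods reduce to simple arithmetic, and the chart of figure 2-77 is itself a plot of a small trigonometric relation. The following sketch is an editorial illustration, not part of the original procedure; the function names are invented:

```python
import math

def method1_heading_correction(observed_drift_deg, distance_flown_mi, distance_to_go_mi):
    """Method 1: heading change needed to restore a collision course.

    Multiply the observed drift (degrees) by the distance still to be
    flown to intercept, then divide by the distance flown during the
    check period of observation.
    """
    return observed_drift_deg * distance_to_go_mi / distance_flown_mi

def method2_drift_angle(initial_range, range_at_5_deg, bearing_change_deg=5.0):
    """Method 2: trigonometric equivalent of the chart in figure 2-77.

    After the run the target's lateral offset from the heading line is
    about R1 * sin(bearing change); the ground distance covered is about
    R0 - R1; the drift angle is the angle whose sine is their ratio.
    """
    lateral = range_at_5_deg * math.sin(math.radians(bearing_change_deg))
    distance_run = initial_range - range_at_5_deg
    return math.degrees(math.asin(lateral / distance_run))
```

For the worked example above (initial range 40 miles, range 25 miles after a 5° bearing change) the second function returns about 8.4°, agreeing with the chart's 8° drift line to the nearest whole degree read by eye.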
With certain radars, however, an experienced operator can make a fair estimate of the direction of surface wind and a very rough estimate of its velocity, provided the aircraft is at an altitude of about 1,000 to 1,500 feet. Because winds at higher altitudes may differ from near-surface winds, the information obtained is of value only for low-altitude flights.

Figure 2-77.--Another method for determining drift by radar is through the use of this chart, as described in the text.

--79--

Since the wind at and near the take-off point is usually known, the size and appearance of the sea return should be observed closely as soon as the aircraft is at flight altitude and on course. Subsequent changes in both direction and intensity of the wind produce changes in the appearance of the sea return on the scope screen. This change takes the form of an enlarged sea return on the upwind side because the larger surface of the waves presents a greater reflecting area to the transmitted pulses of the radar. With practice, such changes in sea return pattern can be interpreted to indicate roughly wind direction and velocity. B. ABSOLUTE ALTITUDE. On most radars the sea or ground return can be used to indicate an aircraft's approximate absolute altitude, the degree of accuracy obtainable varying with the type of radar used. As a general rule, none is effective for this purpose when the absolute altitude is less than 1,500 to 2,000 feet. For this reason, radar altitude serves best as a check upon having a sufficient margin of safety when flying over mountainous territory. C. COASTAL NAVIGATION. 1. Pilotage.--Following a coast line is quite easily accomplished with the aid of radar and a map of the area. The screen pattern indicates whether or not a course parallel to shore is being flown and position is established from shore line contours, as displayed on the scope. 2. Point-to-point navigation.--On a long flight, point-to-point radar navigation can be used.
This means flying a predetermined course and obtaining radar checks of position by observing the identifying characteristics of prominent landmarks such as headlands, harbors, bays, and rivers. Normally, such flights can be flown at great distances off shore. Point-to-point radar navigation is used also for island-to-island flights where an objective is reached by following a string of islands. This type of navigation requires accurate tracking of the flight and careful map reading. To avoid the possibility of losing the identity of a check point, the ETA for each point-to-point jump should be computed. Unless there are several headlands or other check points close together, the ETA can serve as a cross check on the radar information so as to make identification positive. When making an approach from the open sea to an island dead ahead, it usually is unwise to head directly in and then make a 90° turn to follow the coast line, because mountains on the island or an enemy garrison may make such an approach dangerous. Rather, the approach should be made tangentially; that is, at a range of from 4 to 5 miles from the island, a 45° turn is made in the desired direction. When the aircraft has closed to the desired distance off shore, another turn is made to place its heading parallel to the coast line. 3. Pin-point radar navigation.--When accurate radar coastal fixes are required or when a radar approach is made to a specific objective, the aircraft should be flown close to the shore and at low altitudes. The range setting of the radar should be such that the target and the immediate area about it covers a substantial part of the scope screen. Optimum conditions for accurate work are about 1 mile off shore and 1,000 feet in altitude. Under these conditions, pin-point radar navigating for mine laying, bombing, or approach for landing can be done.
Where possible, the approaches should be planned in advance with adequate attention being given to photo-reconnaissance reports and the use of landfall relief maps so that the objective as displayed by the radar can be readily identified. D. MINE LAYING. Mine laying at night, where accurate ranges are essential, requires pin-point navigation. Since mining operations usually take place at channel entrances, accurate piloting is required. With the map-type radars (particularly the B-scan type), the actual approach is relatively easy, provided the radar navigation is carefully and accurately planned in advance. When radar mine laying is conducted by several aircraft it is advisable for the operator in the lead aircraft to perform the radar navigation for the section, all aircraft releasing the mines on a light signal from the leader, thus laying the mines in pattern. The timing of the release must take into account the disposition and spacing of the aircraft in the section so that the center of the pattern is midchannel. --80-- The method of determining the release point depends both upon the type of radar and the topography of the particular area being mined. If the topography is suitable, the drop is made at a given distance from a point ahead as displayed by the radar, such as the opposite bank of a channel, coral reefs known to jut above the water, or even from channel markers. Under other conditions, it may be necessary to make the drop a certain number of seconds after passing a radar check-point on the required heading. With the map-type radars, a series of photographs of the radar screen taken during a "dry run" by reconnaissance aircraft will prove invaluable for planning mining operations. IV. RADAR BOMBING Radar bombing techniques vary widely according to the tactical situation, the type of radar, and the type of aircraft employed. 
Such bombing may be carried out partly by radar and partly visually; it may be accomplished using the search radar only; or search radar with bombing attachments may be employed. It may involve precision bombing against ships at low altitude; area bombing of enemy bases at medium altitude; or pattern bombing of cities at high altitude. There are two main radar problems in bombing. The first is the problem of flying a collision course to intercept the target. The second is the problem of accurately ranging on the target to determine the correct release point. At low altitude or high altitude, these problems are essentially the same. A. LOW ALTITUDE BOMBING. 1. Target approach.--The manner in which attacking aircraft approach the enemy depends on many things: disposition of the enemy force, visibility, possibility of interception by enemy night fighters, and other elements that enter into the total tactical situation. In approaching an enemy task force or convoy for a radar bombing attack, it is usually the case that the ships upon which the attack is to be made are inside a screen of escort ships. It becomes important, then, that the approach be made in such a way as to enable the radar to furnish maximum discrimination between targets so that any given target within the group may be singled out and attacked. Usually it is desirable to approach a selected radar target within a screen in such a manner that the aircraft does not pass close to a screening vessel. The disposition of the ships should be noted beforehand and the approach made so as to avoid these escorts. If, at long range, target course can be determined, a beam approach may be possible. Under such conditions, the size of a target can be fairly accurately estimated. Should the approach be modified to suit drift conditions or enemy disposition, the final run should be made at a minimum of 3 miles in order to accurately set up a collision course which can be observed by radar.
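The first of these problems, flying a collision course, is the constant-bearing condition: the components of own speed and target speed across the line of sight must be equal. The manual gives no formula for this, so the following is a hedged editorial sketch with assumed, illustrative numbers:

```python
import math

def collision_lead_angle_deg(own_speed, target_speed, target_aspect_deg):
    """Lead angle off the line of sight for a constant-bearing intercept.

    For a collision course the velocity components across the line of
    sight must balance:
        own_speed * sin(lead) = target_speed * sin(aspect)
    where aspect is the angle between the target's track and the line
    of sight. Speeds may be in any common unit (knots here).
    """
    s = target_speed * math.sin(math.radians(target_aspect_deg)) / own_speed
    return math.degrees(math.asin(s))
```

For a 180-knot aircraft intercepting a 20-knot ship crossing at right angles to the line of sight, the sketch gives a lead angle of about 6.4°.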
The altitude of approach, once radar contact with the enemy is established, varies with the tactical situation. To avoid detection by enemy radar over sea areas, a close-to-the-water approach up to about 20 miles should be made, after which correct bombing altitude is regained. Assuming the enemy can track an approaching aircraft with his radar, a straight-in approach has the disadvantage of letting the enemy know the aircraft is heading directly for him. At this point, a certain amount of deception may be employed to advantage. Basically, these deceptive measures involve: If possible, locating the enemy at a radar range of 10 miles or more. Setting a course that will allow closing with him but will leave him in doubt as to whether an attack is developing. Turning in to the attack at the last moment, allowing just enough time to establish a collision course for an accurate radar bombing run. 2. Release point.--Low- or high-altitude bombing demands a range precision that is difficult to obtain with search radars not equipped with special bombing attachments. Somewhat of an exception is the ASB radar, with which it is possible to determine the release point within approximately 100 feet. The map-type radars have an inherent error introduced by the time interval between successive sweeps of the scanner across the target. --81-- This has the effect of making the target come down the screen by "steps" rather than continuously. Successive scanner sweeps may display the echoes as much as 100 to 500 yards apart, depending upon the scanning speed. As the aircraft nears the release point, the step effect becomes a serious drawback. This, along with other inherent difficulties of reading accurate ranges close to the target, indicates that the degree of precision necessary for accurate bombing is neither easily nor reliably obtained from the search radar alone. 3. Torpedo attack.--A knowledge of the target's course and speed is necessary before torpedo attacks are effective.
Since radar cannot furnish this information with a reliable degree of accuracy, visual assistance becomes necessary. The radar has a definite use, however, in detecting targets at long range and assisting in making the approach. For the torpedo run, once visual contact is made, radar can be employed to furnish reliable range information which can be used to assist in determining the torpedo release point. Figure 2-78.--From a bombing standpoint target width as observed on a radar scope is a function of target length and target angle. The above figure illustrates that a beam approach to a target increases the allowable deflection error which is an important factor in the successful outcome of radar bombing. 4. Accuracy of low altitude bombing.--A radar attack presumes that a target cannot be seen visually and therefore, unless target course can be determined, it is difficult to distinguish between length and beam of a ship from its echo on the screen. This introduces two errors in the low-altitude bombing problem--(a) deflection errors and (b) range errors. a. Deflection errors.--The relative importance of these errors depends largely upon the angle from which the approach to the target is made. In other words, target width is a function of target length and target angle. For example, figure 2-78 shows the effects of approaching from different angles a ship 500 feet long with 60-foot beam. The illustrations show the angular width as seen from a bomb-release point of 400 yards. The allowable error in the collision course is obtained by taking one-half the angle which the outside limits of the target form as seen at the release point. From a study of the illustrations, it is obvious that a beam approach increases the allowable deflection error. b. Range errors.--Allowable range errors depend upon the target's depth in the line of flight. This, in turn, depends upon target angle. 
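The allowable deflection errors pictured in figure 2-78 follow directly from the half-angle rule stated above. As an editorial sketch (not from the manual), using the example's 500-foot ship with 60-foot beam seen from a 400-yard release point:

```python
import math

def allowable_deflection_error_deg(target_width_ft, release_range_yd):
    """Half the angle subtended by the target at the release point."""
    half_width_ft = target_width_ft / 2.0
    release_range_ft = release_range_yd * 3.0  # yards to feet
    return math.degrees(math.atan(half_width_ft / release_range_ft))

# Beam approach: the full 500-foot length faces the attacker.
beam = allowable_deflection_error_deg(500, 400)    # roughly 12 degrees
# Bow- or stern-on approach: only the 60-foot beam faces the attacker.
end_on = allowable_deflection_error_deg(60, 400)   # under 1.5 degrees
```

The roughly eightfold difference is the quantitative content of the statement that a beam approach increases the allowable deflection error.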
Suppose, with a target having a freeboard of 40 feet and a beam of 100 feet, the approach is made broad on the beam.

Figure 2-79.--From a consideration of the above figure it will be seen that the range errors are most critical when making a beam approach.

It will be seen that by making a radar approach on a target's beam the element of range error becomes quite critical, whereas range error is low for a stern or bow-on approach. Of course, such approaches assume that the radar scope presents information which allows the target's course to be determined, thus making a beam approach or a stern or bow-on approach a matter of choice by the pilot. --82-- It is known that a bomb released from 300 feet at 180 knots enters the water at an angle of about 25°. Assume as a hit any bomb that lands between points "A" and "B," figure 2-79. By simple mathematics it is found that "QB," for this example, is about 100 feet and gives a total hitting space of approximately 250 feet in the line of flight. Suppose, however, the approach to the target is made from the stern. "PQ" now becomes the length of the ship and the hitting space increases to approximately 750 feet. This example illustrates the obvious fact that range errors become critical when approaches are made broad on the beam. c. Sources of error.--Deflection and range errors can be attributed to three causes (assuming level flight at the instant of release): Errors in the radar range, caused by limitations inherent in the radar, possible faulty calibration, and poor interpretation by the operator. Miscalculation of the speed with which the aircraft approaches its target. Improper altitude at the instant of release. For any particular bombing run, the errors due to (1) cannot be compensated for. For (2) and (3) the following must be considered: Deflection errors are less critical for a beam approach to a target than for a head-on or stern-to approach.
Range errors, on the other hand, are less important for approaches to targets head-on or stern-to than for beam approaches. Range errors can be compensated for in large measure by dropping bombs in train. B. HIGH-ALTITUDE BOMBING. From a radar standpoint, high-altitude bombing (10,000 to 40,000 feet) requires the use of radars suitable for such work, or parent radars with requisite bombing attachments, in order that the required degree of bombing accuracy may be obtained. As has been stated previously, the problem of high-altitude bombing is essentially similar to that of low-altitude bombing except that the manner of accomplishment does not ordinarily require observance of all of the factors inherent in low-altitude bombing. Because high-altitude radar bombing requires the use of specific radar equipments or attachments, their mode of use will dictate the manner in which the actual bombing run is to be accomplished. (See AN/APS-15, p. 41, and AN/APQ-5B, p. 91.) --83-- [BLANK PAGE] --84-- AIR-BORNE TAIL WARNING RADAR I. TYPES AND FUNCTION There are four separate sets of air-borne radar equipments which are designed for the specific purpose of warning aircraft personnel when other aircraft are at close range astern. These equipments, called tail-warning equipments, are: A. AN/APS-11, AN/APS-13. The AN/APS-11 and AN/APS-13 are similar except for differences in the electrical circuit arrangement. These radar equipments convert reflected energy from aircraft targets to the rear of the fighter aircraft into an electric current which turns on a red light located near the gun sight and so shielded as not to interfere with other lights on the instrument panel. The red light warns the pilot that another aircraft is within the cone of coverage of the equipment ([illegible] wide in a vertical direction and 60° wide in a horizontal direction) and within a range of 200 to 800 yards. The maximum range of 800 yards was purposely kept low.
At ranges greater than 800 yards, the pilot is in no immediate danger from a tailing plane. Within that distance, however, evasive action becomes necessary if the tailing plane proves to be hostile. B. AN/APS-16, AN/APS-17. The AN/APS-16 and AN/APS-17 are similar to each other but differ from the AN/APS-11 and AN/APS-13 in range and in the indications they produce. Intended for use in bombers, they are designed to warn the entire crew. 1. In the AN/APS-17, returning echoes from an aircraft astern produce a 1,000-cycle note that is pipped or interrupted at a rate inversely proportional to the range, from 1 pip per second at 3,000 yards to 10 pips per second at 200 yards. In other words, as the range between the two aircraft closes, the repetition rate of the signal pip increases. The warning signal is fed into the interphone system, thus warning all crew members at the same time. The maximum range is adjustable from 500 to 3,000 yards. The fixed antenna produces a beam whose coverage is a cone 120° wide. 2. The AN/APS-16 has the same range and beam coverage as the AN/APS-17. The indication is a 400- to 1,100-cycle note which, since its power comes from the aircraft's generators, will vary in pitch with the r.p.m. of the aircraft's engines. The tone is interrupted at a rate inversely proportional to range, from 2 pips per second at 5,000 yards up to 10 pips per second at 200 yards. C. LIMITATIONS ON TAIL WARNING SYSTEMS. 1. One disadvantage in the use of the system lies in the fact that friendly escort planes trigger the warning system into operation every time they fly on or across the tail of the tail-warning-equipped bombers. To surmount this difficulty, the inclusion of identification equipment will make it possible to differentiate between friendly and enemy aircraft. 2.
Another disadvantage which is common to all tail-warning equipments is that when the aircraft is flying at low altitudes (at an altitude within the horizontal range of the equipment), the sea or ground return produces a spurious warning signal which could be mistaken for a tailing aircraft. Recommendations have been made to sufficiently alter the design of tail warning equipments so as to eliminate this disadvantageous effect. --85-- AN/APA-16 --86-- ASSOCIATED EQUIPMENTS (Bombing Attachments and Airborne Fire Control Radar) AN/APA-16 I. FUNCTION The AN/APA-16 is a low-altitude radar bomb sight, designed to facilitate the bombing of surface vessels from aircraft. It operates, in conjunction with parent radars such as the ASB Series, the AN/APS-3 (ASD-1), and the AN/APS-4 (ASH), to compute the release range to a target and to automatically release the bomb when the aircraft reaches the release position. II. DESCRIPTION The AN/APA-16 assembly comprises three major units. They are: - Control unit. - Range marker unit. - Capacitor unit. In addition, a modification kit is required to adapt the AN/APA-16 to the particular radar installation employed. III. MODE OF OPERATION (for example, with ASB radar) In explaining the general operation of the AN/APA-16, it is assumed that the echo signal from the target which it is desired to bomb has been located on the screen of the radar indicator. The AN/APA-16 attachment is triggered from the radar trigger pulse and supplies pulses to the ASB indicator for tracking a selected target and for indicating the release range. The resulting presentation on the ASB indicator screen is in the form of a blanked-out section of the sweep trace. This blanked-out part of the trace is called the range mark. It causes the screen to present an appearance like that of "B" in figure 2-80, or, under conditions of adjustment, like those of "C," "D," or "E." 
The leading edge of the range mark, the end nearest zero range, indicates the bomb release range; the lagging edge of this pulse can be set coincident with the leading edge of the echo pulse and, by proper adjustment of the control unit, can be made to "track" the echo pulse, under which condition the computer in the control unit is supplied with the required information as to the relative velocity between the aircraft and the target. The release point is a function of the relative velocity between aircraft and target and the altitude of the aircraft, and is computed by means of electrical circuits. As the aircraft approaches the target, the blanked-out portion of the indicator sweep between the echo signal from the chosen target and the bomb release point grows shorter and shorter and disappears when the release point is reached. Short, sharp pulses coincident with the leading (release range) edge and with the lagging (tracking) edge of the blanking or range marker pulse are also developed in the AN/APA-16 attachment. When these pulses become coincident, as they do on arrival of the aircraft at the release point, they act through a coincidence circuit to actuate the bomb release mechanism. This equipment, with automatic computer for determining the release point, is designed to operate at altitudes of from 50 to 500 feet and relative velocities of aircraft and target from 50 to 400 knots. Means are provided for adjustment of the release point to enable dropping the first of a train of bombs so that the bombs will "straddle" the target. This same control may be used to correct the dropping point for variations in bomb trajectory due to wind resistance. --87-- Figure 2-80.--Target indications and range marker blanking on the ASB indicator screen. --88-- [BLANK PAGE] --89-- AN/APQ-5B --90-- AN/APQ-5B I.
FUNCTION AN/APQ-5B is a low- and high-altitude bombing attachment designed to operate in conjunction with certain airborne radar equipments for the purpose of automatically releasing a bomb (or train of bombs) at such a point that it will hit a desired target. (See Note, p. 84.) II. DESCRIPTION The main components of AN/APQ-5B are: - Synchronizer. - Control unit. - Tracking unit. - Indicator. - Compensator. - Rectifier, junction box and capacitor unit. (Also required as accessories are a bomb sight stabilizer and an adapter kit for connecting the AN/APQ-5B to the parent radar.) III. MODE OF OPERATION The basic operation of this equipment is as follows: 1. The associated radar equipment produces a train of extremely short but intense pulses of high-frequency energy and transmits these in a highly directive beam. Remote objects in the path of the beam reflect some of this energy, and a portion of it is intercepted by the antenna. These received echoes ultimately appear as a train of pulses recurring at the same rate as the outgoing train but delayed in time depending upon the distance separating the target from the aircraft. By observing this time delay, it is possible to determine the distance to the target, and by noting the heading of the beam when echoes are received, to ascertain the relative bearing of the object. This information usually is presented on the B- or PPI-scope of the radar. 2. AN/APQ-5B also operates from these video pulses, together with a reference trigger pulse marking the time of the outgoing signal. A separate B-type scope is furnished on which is impressed the target echo and a reference range marker which appears in the center of a one-mile sweep. This range marker and associated sweep can be moved toward zero range at a rate which corresponds to the relative velocity of the aircraft and target. This rate is adjusted during the bombing run so that the target coincides with the reference marker at all times.
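The rate-aided tracking just described can be pictured as a marker whose range decreases at the operator-set closing speed until it meets the release range, at which point the coincidence circuit fires. The following simulation is an editorial sketch only; the parameter values are assumed, not taken from the equipment:

```python
def coincidence_release(initial_range_ft, closing_speed_fps,
                        release_range_ft, dt=0.1):
    """Simulate the range marker moving toward zero at the set closing
    rate; release occurs when the tracked range reaches the release
    range, as the coincidence circuit does in the text.

    Returns the time of release in seconds after the start of the run.
    """
    t, r = 0.0, initial_range_ft
    while r > release_range_ft:
        r -= closing_speed_fps * dt  # marker (and tracked echo) closes
        t += dt
    return t
```

With an assumed initial range of 10,000 feet, a closing speed of 300 feet per second, and a release range of 1,000 feet, release occurs 30 seconds into the run.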
A release pulse or pip is also generated, which is adjustable in range to correspond to the proper release point of the bomb. This release point is a function of the relative velocity of the aircraft and target and the altitude of the aircraft. When the reference pulse or range marker which is coincident with the target echo arrives at the release point, the electrical addition of this pulse and the release point pulse energizes the electrical bomb release mechanism and drops the bombs. An 18-mile alternative sweep for the scope is furnished and is employed at the beginning of the bombing run to prevent confusion in the case of multiple targets and to aid in the proper selection of a specific target. 3. The azimuth scale of the B-scope consists of three inscribed vertical cross hairs and the amplitude of the azimuth sweep covers an angle of approximately ±60°. Means are provided to enable the bombardier to set up the course of the bombing run so as to intersect the target regardless of drift conditions. The bombardier has only to maintain the target indication coincident with the central vertical cross hair of the B-type indicator by operation of the usual bomb sight turn controls. This operation controls the automatic pilot mechanism or the pilot direction indicator to alter the heading of the aircraft so that it travels along a collision course to the target. 4. AN/APQ-5B is designed to operate with an automatic computer for determining the release point at altitudes from 65 to 2,000 feet and relative velocities of aircraft and target from 100 to 400 miles per hour. Means are provided for adjustment of the release point to enable dropping a train of bombs centered on the target, and to correct the dropping point for variations in trajectory due to wind resistance.
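The statement that the release point is a function of relative velocity and altitude can be illustrated to first order by vacuum ballistics. This is an editorial sketch: it neglects air resistance, for which the actual equipment provides a separate correction.

```python
import math

G = 32.174  # acceleration of gravity, feet per second squared

def release_point(altitude_ft, relative_speed_fps):
    """First-order (vacuum) release computation for level flight:
    fall time, ground range traveled during the fall, slant release
    range, and the difference between slant range and altitude
    (the quantity the manual later calls the B-factor)."""
    fall_time = math.sqrt(2.0 * altitude_ft / G)
    ground_range = relative_speed_fps * fall_time
    slant_range = math.hypot(altitude_ft, ground_range)
    b_factor = slant_range - altitude_ft
    return fall_time, ground_range, slant_range, b_factor
```

At 2,000 feet and 400 miles per hour (about 587 feet per second), the sketch gives a fall time of roughly 11 seconds and a ground range near 6,500 feet.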
For higher altitudes up to a maximum of 50,000 feet, release distance is not determined automatically by the computer, but must be read by the bombardier from a chart provided for these conditions of altitude and velocity. The reading is set on the control dial manually. --91-- 5. For high-altitude bombing, AN/APQ-5B becomes a semisynchronous impact-predicting bombsight. A maximum of 70,000 feet slant release range is provided. Altitude information is set into the equipment by moving the release pip to a point coincident with the altitude ring. To this is added the B-factor. (The B-factor is the difference between slant release range and altitude and is affected very little by altitude changes from 10,000 to 50,000 feet.) This B-factor is presented in tabular form and is selected by the operator for the conditions of altitude, velocity, and bomb type. IV. PERFORMANCE CHARACTERISTICS 1. General.--AN/APQ-5B will operate day or night under temperature conditions ranging from -40° to +50° C. (-40° to +122° F.) with relative humidities as high as 95 percent. 2. Associated equipment.--AN/APQ-5B is designed for use with aircraft search radars such as the AN/APS-2, AN/APS-3, AN/APS-1, AN/APS-15, AN/APS-15A, and AN/APS-15B types with a minimum of change in these installations. The attachment of the indicator equipment does not interfere with the normal functioning of the radars when properly installed. 3. Search distance.--This equipment may be used as a search indicator with an 18-statute-mile range to facilitate the location of a target at the beginning of the run. 4. Bomb spread.--Provision is made for releasing a train of bombs, centered on the target, with a normal spread of 830 feet between the first and last bombs of the train. 5. Altitude and airspeed.--AN/APQ-5B is designed to work between altitudes of 65 feet and 50,000 feet, and at airspeeds of 100 to 400 miles per hour. These limits are, however, dependent upon ground speed and altitude flown. 6.
Operational accuracy with X- and S-band radar sets.--Tests made at an average air speed of 160 miles per hour indicate the radial error will be less than: The greater part of this error will be in deflection rather than range. A train of three bombs usually insures a direct hit. NOTE.--The AN/APA-5 is a radar bombing attachment which effects synchronous tracking and automatic bomb release on any target visible to the parent radar, replacing the AN/APQ-5B because of greater accuracy, altitude range, and flexibility. This equipment operates with excellent accuracy at altitudes from zero to 35,000 feet and at closing speeds from zero to 400 miles per hour. Both range and azimuth tracking are provided. Accuracy is about 25 mils. It will operate with the AN/APS-2, AN/APS-3, AN/APS-15, and AN/APS-30 series radars. Modifications at an early date will provide for offset bombing and for rocket firing. --92-- [BLANK PAGE] --93-- AN/APG-4 --94-- AN/APG-4 I. FUNCTION AN/APG-4 is an air-borne electronic low-altitude bomb release device designed to effect automatic release of a missile from an aircraft, directed toward an isolated or semi-isolated marine surface vessel. It is complete in itself except for the employment of a radio altimeter (AYD, AN/ABN-1, AN/APN-1) which provides automatic compensation for altitude variation. After a collision course has been established visually, or by means of a search radar (separately operated), AN/APG-4, within the operating limits of the equipment, evaluates closing rate and altitude to automatically release the bombs at the proper point. II. DESCRIPTION The AN/APG-4 consists of: - A frequency-modulated transmitter-receiver unit. - An altitude compensation switch with controlling relays. - Two Yagi antennas (similar to ASB equipment). The antennas are mounted so as to direct a beamed lobe of radio energy forward and downward from the aircraft.
One antenna is used for transmitting the frequency-modulated signals generated in the transmitter; the other antenna is used to receive the returning echo signal from surface targets. III. MODE OF OPERATION Figure 2-81 shows a functional block diagram of the AN/APG-4 equipment. With the aid of the radio altimeter, the aircraft is flown al any height between the altitude range of 10 to too feet. In the transmitter, a frequency-modulated signal is generated and delivered not only to the transmitting antenna but also directly to the receiver. The transmitted signal is beamed by the transmitting antenna ahead to a target and the reflected signal is received on the receiving antenna, from which it is fed to the receiver. The instantaneous difference between transmitted signal frequency and reflected signal frequency produces in the receiver a third or difference frequency which is a measure of the distance between the aircraft and the target, and the closing rate. This information is resolved in the receiver circuits into voltages proportional to distance and closing rate, with compensation for altitude variation, which, after evaluation by the AN/APG-4, actuate relays at the proper time to operate the bomb release mechanism. The factors of speed, altitude, and distance are thus automatically obtained by the equipment and evaluated to determine the point at which automatic bomb release takes place. Azimuth information must be obtained by pilot or operator either visually or through the use of airborne radar equipment. IV. PERFORMANCE CHARACTERISTICS 1. Operating limits.--AN/APG-4 is effective over a range of aircraft speeds of 100 to 300 knots and at altitudes of from 40 to 400 feet. 2. Range lead adjustment.--Provision is Figure 2-81.--In the APG-4 equipment, frequency-modulated signals are directed toward a target by the directional transmitting antenna. A small portion of the frequency-modulated signal is also fed directly to the receiver. 
Since a measurable period of time elapses from the time the signal (of a given frequency) was transmitted to the time it is received, the difference in frequency between the returned signal and the new frequency-modulated signal then being generated in the transmitter is resolved into terms of distance.

--95--

made in the equipment for releasing a bomb so that its point of impact may be anywhere from zero to 100 feet ahead of the selected target (skip-bombing application).

3. Computation time.--Because the automatic computation of bomb release time requires only four-tenths of a second, a pilot may engage in extreme evasive action up to a point immediately prior to bomb release.

5. [sic--no #4] Target isolation.--The equipment tracks and releases on the nearest object in view; therefore, it is essential that the target be isolated from objects by a minimum of approximately 150 yards.

NOTE.--The AN/APG-4 will be superseded by the AN/APG-17 and AN/APG-17A. The AN/APG-17A is a low-altitude level-flight frequency-modulated automatic bomb release system using either visual or radar sighting and permitting evasive maneuvers up to point of release. Range of altitude operation is from 40 to 400 feet and the equipment obtains its altitude information from the AN/APN-1 radio altimeter. It requires no operator other than the pilot. Two faired-in parabolic antennas are required. This equipment will supersede the AN/APG-4 due to better aerodynamic characteristics and greater power. The AN/APG-17A will provide for operation up to 800 feet and will be suitable for rocket release. The weight is slightly less than the AN/APG-4 (5 to 10 pounds estimated).

--96--

AN/APG-5

I. FUNCTION

A. PRIMARY PURPOSE. AN/APG-5 is a lightweight air-borne fire-control radar equipment. It is an automatic ranging equipment which supplies target information to a lead computing sight, as used in the upper and lower gun turrets of aircraft.

II. DESCRIPTION

A. MAIN COMPONENTS.
AN/APG-5 consists of the following:
- A transmitter-receiver unit.
- A directional antenna.
- A range unit.
- Servo amplifier and servo-geared motor unit.

B. MODE OF OPERATION.

The transmitter generates pulses of radio-frequency energy which are fed to an antenna having highly directional qualities. The antenna, mounted on the gun turret and rotatable with it, directs the radio-frequency pulses toward a selected target in a 28° beam. Echo pulses from the target return to the same antenna and are fed to the receiver and range unit. The range unit performs the function of measuring the time intervals between transmitted pulses and received echoes, continuously maintaining a voltage output proportional to the time interval, and therefore to range. The servo amplifier and its associated servo-geared motor unit converts this voltage output to a shaft rotation proportional to range, which may, in turn, be fed into a lead computing sight.

The AN/APG-5 is entirely automatic in operation and requires no separate operator. The gunner operates the necessary controls. Using his optical sight, the gunner operates his turret and his guns in such a manner as to visually train his guns on the target. This simultaneously aims the radar antenna at the target. AN/APG-5 then automatically tracks the target in range and will indicate to the gunner that it is feeding range information into his sight. This indication is in the form of a target indicator light which is placed on the sight so that it may be readily observed by the gunner without the need for shifting his view from the target. The gunner then merely keeps his guns trained on the target and fires when at the proper range. In the event there is more than one target present, the gunner may select a closer or a more distant target by pressing the IN or OUT switch of the target selector, which is mounted on the handle bar control.
He can tell when he is on the proper target by noting the relative range as indicated by the reticule separation on the optical sight. In the event of radar failure, the target indicator will not operate. When this occurs, the gunner can immediately regain manual control. He operates his sight by using the target selector to drive the servo-geared motor unit, which, in turn, drives the range mechanism of the sight. In doing this, he uses the sight as he would if there were no radar equipment installed.

C. PERFORMANCE FACTORS.

The AN/APG-5 system will operate in ambients from minus 55° C. to plus 50° C. The system is capable of automatically tracking aircraft reliably to a minimum distance of 200 yards. The over-all system error, on the 1,800-yard scale, is less than ±50 yards. The system is designed to operate between the limits of 6,000 to 40,000 feet altitude.

--97--
http://www.ibiblio.org/hyperwar/USN/ref/RADTWOA/RADTWOA-2.html
This section of the tutorial covers Scala regular expressions: the detailed procedure, the code, and its output. Regular expressions are patterns that permit you to "match" various string values in a variety of ways. Scala provides import scala.util.matching.Regex for its regular-expression support.

A pattern is simply one or more characters that represent a set of possible match characters. In regular expression matching, you use a character or set of characters to represent the strings you want to match in the text. A regular expression is a way of describing a set of strings using common properties, for example, strings that start with an "A" and end with an exclamation mark.

Procedure

1. Change the string representation of a regular expression into a Pattern object.
2. Create a Matcher object from the Pattern created in the first step, applied to a particular string.
3. Apply the various methods of the Matcher object to that string.

These steps can be expressed in Scala as follows:

import java.util.regex.{Pattern, Matcher}

val i = Pattern.compile("regularExpression")
val j = i.matcher(inputString)
var foundMatch = j.find()

e.g.
import java.util.regex._

object HelloWorld {
  def main(args: Array[String]) {
    val p = Pattern.compile("i")
    val m = p.matcher("intellipaat")
    var found = false
    while (m.find()) {
      print("I found the text \"" + m.group())
      print("\" starting at index " + m.start() + "\n")
      found = true
    }
    if (!found) println("No match found.")
  }
}

Output

I found the text "i" starting at index 0
I found the text "i" starting at index 6
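The tutorial's three steps call Java's regex classes from Scala, so the identical calls also work from plain Java. The helper below is a sketch (the class and method names are mine) that collects the start index of every match, reproducing the output above:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MatchIndices {
    // Step 1: compile the pattern; step 2: build a Matcher for the input;
    // step 3: apply Matcher methods (find/start) to walk the matches.
    public static List<Integer> indicesOf(String regex, String input) {
        Pattern p = Pattern.compile(regex);
        Matcher m = p.matcher(input);
        List<Integer> starts = new ArrayList<>();
        while (m.find()) {
            starts.add(m.start());
        }
        return starts;
    }
}
```

For the pattern "i" against "intellipaat" this yields the indices 0 and 6, matching the printed output.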
https://intellipaat.com/blog/tutorial/scala-tutorial/scala-regular-expressions/
I'm trying to use getchar with scanf and I have this bug here that ignores the first whitespace character of the first loop. YES, I want to use scanf and getchar TOGETHER in order to understand it better, and NO, I don't want to use fgets. :D I'm just curious what the bug is. I know it's only a line or so of code:

Code:

/* This code prints out a multiplication table based on the user's input.
 * It uses scanf and tests the number of return arguments so there is no
 * undefined behaviour. It works on Windows XP compiled with Dev-C++ 4.9.8.0.
 * To port it, simply remove the system calls; however, you won't have the
 * pretty interface ;) */
#include <stdio.h>
#include <stdlib.h>

#define MAXCLM 20      /* MAX number of clms (horizontal integers) */
#define MAXCHOICE 40   /* MAX number of rows so no bigger than the screen */
#define FLUSH while ((getchar() != '\n'));

void Winconsole(void);

int main(void)
{
    int choice, row, clm;

    Winconsole();
    /* problem: ignores first input unless it's a 'q' */
    while ((getchar()) != 'q') {
        if ((scanf("%d", &choice) != 1) || choice > MAXCHOICE || choice < 1) {
            FLUSH
            printf("Invalid integer. Unable to print to screen\n");
            printf("Try again: ");
        } else {
            for (row = 1; row <= choice; row++) {
                for (clm = 1; clm <= MAXCLM; clm++) {
                    printf("%4d", row * clm);
                }
                printf("\n");
            }
            printf("Enter another integer or [q] to quit: ");
        } /* end of else */
    } /* end of while */
    return 0;
} /* end of main */

void Winconsole()
{
    system("TITLE Multiplication Table 1.0 by bitshadow.");
    system("COLOR 64");
    printf("What Multiplication table would you like: ");
}
https://cboard.cprogramming.com/c-programming/54052-getchar-problem-printable-thread.html
PROBLEM LINK:

Author: Abhijeet Trigunait
Tester: Abhijeet Trigunait

DIFFICULTY: MEDIUM

PREREQUISITES: DP, Binary Search.

PROBLEM:

Chef has a very beautiful garden. There are N flower plants arranged in a row. The height and beauty of the ith flower from the left are h[i] and b[i]. In order to make the garden less crowded, Chef decided to pull out some flower plants. But Chef has one condition in mind that needs to be met while pulling flower plants:

- The heights of the remaining flower plants must be monotonically increasing from left to right.

Your task is to help Chef find the maximum possible sum of the beauties of the remaining flower plants.

EXPLANATION:

We can rephrase the question as finding the increasing subsequence whose sum of beauty is maximum. We can solve this with a 1-D DP, where each index represents the monotonically increasing subsequence ending at the ith element that has maximum sum.

So dp[i] = the sum of the increasing subsequence ending at index i that has maximum sum, where 1 <= i <= n, and the maximum over the dp array is the answer.

For i = 1, the maximum possible sum is simply the beauty of the first flower plant, so dp[1] = bty[1]. For all other i, dp[i] is given by:

dp[i] = max(bty[i], bty[i] + max{ dp[j] : 1 <= j < i and h[j] < h[i] })

Here dp[j] corresponds to the subsequence we are extending with index i.

The time complexity of this approach is O(N^2), so it is not efficient enough. We can reduce the complexity with a better way of finding dp[j] when computing dp[i]. To get the best previous index that can be extended, we maintain a data structure (a map) instead of iterating j from 1 to i. It contains only the meaningful indices that can be extended and can contribute to the maximum sum, and in the process we also delete unnecessary indices.
For example, suppose j < i, dp[j] < dp[i] and h[j] > h[i]:

h[i]:  1 2 3
b[i]:  2 1 1
dp[i]: 2 3 4

Here we can see that removing the jth index won't affect our answer; in the above example we can remove the index with height 1.

To search for the height just below the current index's height with the maximum sum, we can simply use the lower_bound function and update dp[i]. If this results in a meaningful index, we push it onto the map, and we also remove unnecessary indices at each step.

Following the above approach we can solve the problem in O(N log N).

SOLUTIONS:

Setter's Solution

/* Author- Abhijeet Trigunait */
#include <bits/stdc++.h>
#define lld long long int
#define F first
#define S second
#define P pair<int,int>
#define pb push_back
#define mod 1e9+7
#define setbits(x) __builtin_popcountll(x)
#define zerobits(x) __builtin_ctzll(x)
#define gcd(x,y) __gcd(x,y)
#define endl '\n'
using namespace std;

struct flower {
    lld hi;
    lld byt;
};

lld solve(vector<flower>& vec, lld n) {
    vector<lld> dp(n + 1);
    map<lld, lld> mp;
    dp[1] = vec[1].byt;
    mp[vec[1].hi] = dp[1];
    lld ans = dp[1];
    for (lld i = 2; i <= n; ++i) {
        dp[i] = vec[i].byt;
        /* best extendable previous entry: greatest stored height not above hi */
        auto it = mp.lower_bound(vec[i].hi + 1);
        if (it != mp.begin()) {
            --it;
            dp[i] += it->second;
        }
        mp[vec[i].hi] = dp[i];
        /* prune dominated entries: taller but with no larger sum */
        it = mp.upper_bound(vec[i].hi);
        while (it != mp.end() and it->second <= dp[i]) {
            auto temp = it;
            temp++;
            mp.erase(it);
            it = temp;
        }
        ans = max(ans, dp[i]);
    }
    return ans;
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        lld n;
        cin >> n;
        vector<flower> vec(n + 1);
        for (lld i = 1; i <= n; ++i) cin >> vec[i].hi;
        for (lld i = 1; i <= n; ++i) cin >> vec[i].byt;
        cout << solve(vec, n) << endl;
    }
}
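For checking small cases, the O(N^2) recurrence from the explanation can also be written down directly — a sketch (my function name, not the judge solution):

```python
def max_beauty(heights, beauty):
    """O(N^2) reference for the editorial's recurrence:
    dp[i] = bty[i] + best dp[j] over j < i with h[j] < h[i]."""
    n = len(heights)
    dp = [0] * n
    for i in range(n):
        best_prev = 0
        for j in range(i):
            if heights[j] < heights[i] and dp[j] > best_prev:
                best_prev = dp[j]
        dp[i] = beauty[i] + best_prev
    return max(dp)
```

On the small example above (heights 1, 2, 3 with beauties 2, 1, 1) this produces dp = [2, 3, 4] and answer 4, agreeing with the table.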
https://discuss.codechef.com/t/error5-editorial/82036
W3C

Swell stuff

This is a transcription of the taxonomy of fundamental objects in algernon. This is maintained as HTML but it's converted to RDF form using a transformation. Use the namespace name to refer to these properties and classes in RDF 1.0 syntax.

@@TODO: express the relationship between algernon:subset and rdfs:subClass, etc.

@@this is an if-added rule on things. We could use some "for all things x ... " notation. hmm.... here's a "slot rule", i.e. quantification over slots (predicates) or, in RDF terms, properties

(isa ?x ?s) = ?x is a member of the set ?s.
(member ?s ?x) = A member of ?s is ?x.
(subset ?s1 ?s2) = A subset of ?s1 is ?s2.
(superset ?s1 ?s2) = A superset of ?s1 is ?s2.
The selfset of x is the set {x}.
(less ?x ?y) = ?x less than ?y.
(greater ?x ?y) = ?x greater than ?y.
(equal ?x ?y) = ?x is equal to ?y.
(least ?x ?s) = ?x is the least member of ?s.
(greatest ?x ?s) = ?x is the greatest member of ?s.
http://www.w3.org/2000/07/hs78/algernon.html
Overview

Using a Raspberry Pi (even a Zero!) and an RFID card reader, along with a relay, LED, buzzer and some Python code, gives you the ability to install electronic locks in your house or office. A company called ControlEverything offers a number of interesting products for Arduino, Particle, Raspberry and their own $5 Onion Omega 2 (a Linux computer with WiFi built in, kind of a mix between an ESP8266 and a Raspberry Zero!). They offer a range of sensors and modules that use the I2C or UART interfaces on the Raspi. One item they have is the EM18 RFID reader module. The EM18 RFID Receiver for Raspberry Pi uses the UART interface for communication.

Project

This project lets you use a Raspberry Pi to act like an electronic door key and lock. You wave your RFID badge and the Raspi lights an LED, sounds a buzzer and engages a 120v/220v relay that could be connected to an electric lock or solenoid to unlock the door. After 5 seconds, the lock re-engages and the LED and buzzer turn off. From inside the door, you can have a push button that unlocks the door for 5 seconds so you can leave without an RFID.

Step 1: Parts

- Raspberry Pi Zero, B+, 2, or 3 running Raspbian Jessie
- RFID module from ControlEverything, which includes a Pi adapter and RFID key... You can substitute the Grove RFID instead. Be sure to set it to UART mode:......
- 5v to 120v/220v ControlEverything I2C relay... Or substitute a Grove Relay module
- 5V Grove Buzzer Module...
- LED with 220 ohm resistor or Grove LED module
- Momentary contact push button
- RFID tags (if more than the one included) (125khz...
- Circuit board or breadboard or Grove base shield, cables and connectors-...
- Wire, box, etc.

Step 2: Wiring

Using the ControlEverything EM18 RFID (or Grove) hardware, it is a straightforward task to connect everything together.
Note that on the Raspberry Zero, you will need to add a connector (i.e. solder!), so be aware. The other Pis come with pin connectors already installed. Note for Grove wiring, you will need to use Grove universal connectors and solder to a board, as they no longer carry a Raspberry Pi shield.

Take the ControlEverything Raspberry Pi I2C Adapter and line up the P1 pin on the adapter with Pin 1 on the Raspberry Pi GPIO, as shown in Fig. 1. The RFID module uses the Raspberry's UART connections (GPIO 14, 15, +5v, Gnd). If you use the Grove RFID, set it to use the UART. This is a serial 9600 baud connection between the Raspi and the RFID module.

Next plug the included grey cable into the Raspberry Pi Adapter's 6-pin port (Fig. 2), making sure the red wire is facing the white indicator line on the adapter board. Then plug the grey cable into the RFID module, again with the red wire facing the white indicator line.
Step 3: Simple Start

NOTE: This assumes you already have a working Raspi using the Raspbian Jessie OS and can access it via SSH or a terminal. There are plenty of how-tos for setting up and using your Raspberry, so I'm not repeating them here. :)

Once you have plugged in your RFID module to the Pi, you can try it out. Log into your Raspi using SSH or a terminal program. You can also go to GITHUB and use the files from there.

You may need to install some Python modules first:

$ sudo apt-get install python-pip
$ sudo pip install pyserial

Create a file called "rfidtest.py":

$ nano rfidtest.py

Edit the file using the code below:

import serial

# For Pi B+, 2, Zero use:
ser = serial.Serial('/dev/ttyAMA0', 9600, timeout=1)
# For Pi 3 use:
#ser = serial.Serial('/dev/ttyS0', 9600, timeout=1)

while True:
    string = ser.read(12)
    if len(string) == 0:
        print "Please wave a tag"
    else:
        string = string[1:11]  # Strip header/trailer
        print "string", string
        if string == '650033A2E5':
            print "hello user 1"

Note for Raspberry Pi 3 owners: on the Pi 3 only, you need to comment out the Pi 2 line and uncomment the Pi 3 line, as the Pi 3 uses /dev/ttyAMA0 for Bluetooth:

ser = serial.Serial('/dev/ttyS0', 9600, timeout=1)

Also note that on the Pi 3, you need to add the following line to /boot/config.txt:

$ sudo nano /boot/config.txt

Append:

#Enable UART
enable_uart=1

Then reboot your Pi 3.

All Raspberries: To run the Python program, just type:

$ sudo python ./rfidtest.py

Tap your RFID card or fob against the EM18 module. The program should print out the ID string. You can edit rfidtest.py to update the string with your ID's string.

Step 4: Part 2 Wiring

Use the diagrams and photos to assist in wiring the LED, buzzer, relay and button. Note that the Grove wiring will be slightly different, as it uses +5v, GND and Signal leads instead of just two wires. If you are using the ControlEverything I2C Relay module, you just plug it into the Pi Adapter I2C port. For the other devices (LED, buzzer, switch and relay), connect using the diagram and pins shown in the figures above. Note for Grove wiring, you will need to use Grove universal connectors and solder to a board, as they no longer carry a Raspberry Pi shield.

The GPIO pins for the Raspberry Zero, B+, 2 and 3 are identical, so the same diagram will work for any of them. When you initially turn on your Raspberry, the LED, buzzer and relay should stay off, but it is possible that they come on. Assuming no wiring errors, these will turn off when you first run the Reader program described next.

Step 5: More Advanced Reader Software

There are two possible relays supported here.
The Grove version uses three wires (+5v, GND, GPIO Signal) while the ControlEverything version uses the I2C interface. For I2C, you must activate the I2C interface on your Pi. This is not needed for Grove.

1. $ sudo raspi-config
2. Select "Advanced Options"
3. Select "I2C"
4. Answer "Yes" to "Would you like the ARM I2C interface to be enabled?"
5. Answer OK and select "Finish" to exit.
6. $ sudo reboot

After reboot, log back in and enter:

$ sudo i2cdetect -y 1

If you have no I2C devices plugged in, you will see the output in Fig. 1 above. Once you plug in a device and rerun the command, you should see it show up in the list. (You probably need to reboot if you didn't power down before plugging it in.) Note that the EM18 RFID module isn't an I2C device, so it doesn't show up. The ControlEverything 1- or 2-channel relay module is I2C, so it should show up in the I2C device list as 0x41.

Get the software from GitHub:

$ git clone

If you are using the I2C relay, you need to change the line

I2C = OFF # Grove Relay

to

I2C = ON # I2C Relay

If necessary, you can change which GPIO pins your LED, buzzer, Grove relay and push button connect to as needed, but you cannot use the I2C or UART GPIO pins. Add your RFID card numbers to the CARDS list in the program. If you don't have the number from the rfidtest.py program, any card you wave while running this program will be printed on the screen, but will not "open" the door.

Run the program using:

$ sudo python reader.py

The program will display a message every few seconds watching for a card to read. When it sees one, it will turn on the LED, buzzer and relay for 5 seconds, then turn them all off. If you press the momentary button instead of waving a card, it too will "open" the door. If you exit the program (Ctrl-C), the relay will engage to "unlock" the door on failure. (The Jurassic Park mode!) This would keep you from being locked out if the Raspi failed.
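The card-matching part of the logic above can be factored into pure functions, which makes it easy to test away from the hardware. This is a sketch, not the actual reader.py code; the frame layout follows the rfidtest.py example, where a 12-byte read carries the 10-character ID between one header and one trailer byte:

```python
def extract_id(frame):
    """Strip the 1-byte header and trailer from a 12-byte EM18 frame,
    returning the 10-character card ID (as in the rfidtest.py example).
    Returns None on a short or empty read."""
    if len(frame) != 12:
        return None
    return frame[1:11]

def lookup(frame, cards):
    """Map a raw frame to a user name, or None if unknown or unreadable."""
    card_id = extract_id(frame)
    if card_id is None:
        return None
    return cards.get(card_id)
```

A reader loop would then just call lookup(ser.read(12), cards) and fire the relay, LED and buzzer when it returns a name.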
Note that I did not include error trapping or set up the program to restart on reboot. Also note that the relay will NOT be engaged if the power goes out. This means the door would be "locked"!

Step 6: A Real System

For a real door lock system, you need to add the electric lock or solenoid to the relay. You also need to harden the program and the Pi for failures (i.e. error trapping and restart at reboot, a battery backup, a case for the RFID module, etc.). If you have many RFID cards, an external file or database would be useful, along with the user's name. See also the version "reader2.py" in the GITHUB files for an example of these extra features.

Reader2.py Configuration

The reader2.py is a more elaborate version of the door lock controller that stores cards in a file and logs the date/time of entries and exits. To use this version, copy cards.dat to your /home/pi directory and edit it with your card numbers:

$ cp cards.dat /home/pi/cards.dat
$ nano /home/pi/cards.dat

Edit with your card IDs and names. Test the program from the command line using:

$ sudo python reader2.py

Once you verify operation, edit reader2.py and turn off DEBUG messages:

$ nano reader2.py

DEBUG = False

Edit /etc/rc.local and add the following line before exit 0:

$ sudo nano /etc/rc.local

/home/pi/RFIDReaderPi/reader2.py &
exit 0

This will run the program any time you reboot the Pi. A log file of each door entry will be appended to /home/pi/rfiduser.log:

2016-08-31 13:11:51 00000000 Start...
2016-08-31 13:11:57 E0043DBB00 NOT FOUND
2016-08-31 13:12:05 00000000 Someone Exited
2016-08-31 13:19:29 E0043DBB00 Russell Grokett
2016-08-31 13:19:37 00000000 Stop...

Have Fun!
https://www.instructables.com/id/Dont-Forget-Your-Raspberry-RF-ID-Card/
Hi all,

I hope I'm not infringing an unwritten copyright of Stefano's here, but this e-mail *definitely* fits into the category of a random thought, rather than an immediate proposition. It's also somewhat off topic; this thought is NOT a suggestion for something we could do with Cocoon - I just don't think it makes sense in that environment. It's a random thought that's been bouncing around my head, and I thought I'd share it with you. Before I dive in and start wombling, note that my internet connection is currently as dead as a dodo, and so I can't check any namespaces or anything. This also has the side effect that I have absolutely no idea whatsoever when our mailserver will actually succeed in delivering it.

I've simplified a lot of this, because (a) I don't want to cloud matters, if I can avoid it, and (b) it's too damned late for me to think straight ;)

I've been thinking about content/presentation abstraction for a *long* time, and even more since I've been involved with Cocoon. One thing has always been in the back of my mind.. "Surely the server should be able to deliver content in whatever format the client wants? More to the point, surely I shouldn't always have to think about it?"

Let's think about this. Currently in Cocoon we have either PIs (in Cocoon1.x) or a sitemap (Cocoon2). In each of these, we explicitly specify how to get from the source document to the browser. Now, in Cocoon2, we can use matchers to determine what format to send data to the client in, which is great. I can have exactly the same URI, and without changing the source document, I can pump it out in HTML, text, PDF, or if I'm feeling particularly adventurous (or plain sick - you decide) SVG, PNG or JPEG. Cocoon1.x offers similar facilities, albeit in a somewhat under-engineered fashion. Real Soon Now, we'll be able to use something called 'content negotiation' to work out what format browsers would prefer without having to infer it from the URL or User Agent.
The way this works is that the client sends an 'Accept:' header to the server specifying what types of data it can understand. This feature is somewhat underdeveloped in current browsers, but it will improve, particularly as technologies such as Cocoon become prevalent.

Now, the seed that has been growing in my head (albeit on the back burner) for a good few months now is that it is potentially possible to take this concept further. All XML documents have a 'namespace'. This is a unique URI, which allows us to be *sure* we're dealing with the set of semantics we were expecting. As the XML content within Cocoon flows from the generator to the serializer, this namespace changes. Now, take the following lump of (imaginary) config for a Cocoon-like system:

<negotiate>
  <translate from="" to="" filter="xslt">
    <parameter name="stylesheet" value="intro-layout.xsl"/>
  </translate>
  <translate from="" to="" filter="xslt">
    <parameter name="stylesheet" value="subj-layout.xsl"/>
  </translate>
  <translate from="" to="" filter="xslt">
    <parameter name="stylesheet" value="layout-xhtml.xsl"/>
  </translate>
  <translate from="" to="[...Iforget...]" filter="xslt">
    <parameter name="stylesheet" value="layout-wml.xsl"/>
  </translate>
</negotiate>

The server could then build a node-edge graph of the translations in memory, and back propagate the target namespaces to preceding nodes in the graph. For the above (very simple) config file, the graph would look like [namespaces shortened]:

[uea/prosp/intro] \                      / [xhtml]
                   |-[uea/prosp/layout]-|
[uea/prosp/subj]  /                      \ [wml]

The xml and xhtml namespaces would be back propagated towards the left of the diagram, so that the 'layout' node knows to go 'up' for xhtml, and 'down' for wml, and so that the 'intro' and 'subj' nodes know that they can reach 'layout', 'xhtml', and 'wml' by going 'right'.

When a request comes in, it would be tagged with the destination namespace (xhtml, wml, svg, whatever...). When the source XML is parsed/generated, we discover its namespace from the root element, and go find ourselves that node in the graph.
When the source XML is parsed/generated, we discover its namespace from the root element, and go find ourselves that node in the graph. The node then looks at the destination namespace and forwards the SAX events to a filter and on to the destination node. This process continues until it gets to the destination node, at which point it's serialized, which brings me nicely onto the next point... I've got to admit, up to this point, I've made a rather large simplification. Great. We can transform from one namespace to another, but how on earth are we going to get a png or a jpeg out? The obvious answer is to treat mime types in a similar manner, and build them into the node graph. This causes problems, because suddenly we're not dealing with filters, we're dealing with serializers, which have a SAX input stream, but a *binary* output stream. To be frank, I'm still pondering this bit. I've explained *how* something like this could work (loosely, I admit), but the question now has to be "why on earth would you want to?". The answer is (and I did warn you this was *not* an immediate proposition) that at the moment, you *wouldn't* (this thing is *not* in the Cocoon2 target area, as far as I'm concerned). The sitemap handles pretty much anything most people are going to throw at it for the forseable (and a hefty wadge that most people aren't <grin>) Where I *can* see it being useful is where you're dealing with all kinds of different DTDs from a particular URI space, and matching becomes cumbersome. For example, imagine you had a project linked to CORBA. Everything in /objects/* was linked to a CORBA generator, so that /objects/<iiopID> retrieved the content of an object. You could potentially write a matcher, and put entries in the somewhat cumbersome, particularly if you're targeting WAP and HTML and PDF, for example. Using the directed graph, you don't worry about it - just let the server work out the easiest way to translate the document into what the client wants. 
I don't know, maybe this is a Really Bad Idea (tm), but I think it could help in some situations, particularly inside large application servers where you've got lots of people administering the system. If people could upload a stylesheet, and just specify the source and destination namespaces and let the server work out when to use it, we save ourselves a lot of configuration nightmares.

Anyway, that's my random thought for the evening, hope reading this e-mail wasn't a *total* waste of time for those who made it ;)

Cheers,

-- Paul Russell <paul@luminas.co.uk> Technical Director, Luminas Ltd.
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200005.mbox/%3C20000525024923.A31238@brit.luminas.co.uk%3E
I know it is possible to obtain the numeric polynomial features with polynomial_features.transform(X), but how can I find out which term each output column corresponds to, i.e. the symbolic form [1, a, b, a^2, ab, b^2]? .get_params() does not give this information.

You can read it off the powers_ attribute:

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[2, 3]])   # fit_transform expects a 2-D array (one row per sample)
poly = PolynomialFeatures(3)
Y = poly.fit_transform(X)
print Y  # prints [[ 1 2 3 4 6 9 8 12 18 27]]
print poly.powers_

This code will print:

[[0 0]
 [1 0]
 [0 1]
 [2 0]
 [1 1]
 [0 2]
 [3 0]
 [2 1]
 [1 2]
 [0 3]]

So if the ith row is (x, y), that means that Y[i] = (a**x) * (b**y). For instance, in the code example [2 1] corresponds to (2**2)*(3**1) = 12.
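That correspondence is easy to sanity-check without sklearn at all: raising the sample point (a, b) = (2, 3) to each exponent pair in powers_ reproduces the transformed row. (The powers list below is copied from the printed output above, not recomputed.)

```python
# Exponent pairs as printed by poly.powers_ for degree 3 on two inputs.
powers = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2),
          (3, 0), (2, 1), (1, 2), (0, 3)]
a, b = 2, 3

# Evaluate a**x * b**y for each (x, y) pair.
features = [(a ** x) * (b ** y) for x, y in powers]
print(features)  # [1, 2, 3, 4, 6, 9, 8, 12, 18, 27]
```

The result matches the row printed by fit_transform, column for column.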
https://codedump.io/share/Gszz188el8CP/1/sklearn-how-to-get-coefficients-of-polynomial-features
Python tools to sample randomly with "don't pick the closest n elements" constraints. Also contains a batch generator for the same, to sample with replacement and with repeats if necessary.

Project description

Sampling Utils

Python tools to sample randomly with "don't pick the closest n elements" constraints. Also contains a batch generator for the same, to sample with replacement and with repeats if necessary.

Installation

Simply install using pip:

pip install sampling_utils

Usage

Dont Pick Closest

from sampling_utils import sample_from_list

sample_from_list([1,2,3,4,5,6,7,8], dont_pick_closest=2)

You are guaranteed to get samples that are more than dont_pick_closest apart (in value, not in indices). Here you will get samples where the difference between any two samples is always greater than 2. For example, if 2 is picked, no other item in the range [2 - dont_pick_closest, 2 + dont_pick_closest] will be picked.

Another example, looped 5 times:

for _ in range(5):
    sample_from_list([1,2,3,4,5,6,8,9,10,12,14], dont_pick_closest=2)

# Output
# [5, 10, 2, 14]
# [9, 6, 14, 1]
# [3, 8, 12]
# [10, 3, 6, 14]
# [2, 5, 8, 12]

If 12 is sampled, sampling 10 and 14 is not allowed since dont_pick_closest is 2. In other words, if n is sampled, then sampling anything from [n - dont_pick_closest, ..., n - 1, n, n + 1, ..., n + dont_pick_closest] is not allowed (if present in the list). This will be called the dont_pick_closest rule hereafter.

Number of samples

You can also specify how many samples you want from the list using the num_samples parameter. By default, you get the maximum possible number of samples (without replacement).

for _ in range(5):
    sample_from_list([1,2,3,4,5,6,8,9,10,12,14], dont_pick_closest=2, num_samples=2)

# Output
# [8, 2]
# [6, 3]
# [12, 1]
# [4, 10]
# [9, 1]

If you try to sample more than what's possible, you will get an error saying that it's not possible.
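The dont_pick_closest rule is easy to state as a standalone check. A small validator sketch (mine, not part of sampling_utils) that verifies a returned sample obeys it:

```python
def obeys_rule(sample, dont_pick_closest):
    """True if every pair of sampled values differs by strictly more
    than `dont_pick_closest`. Sorting first means only adjacent pairs
    need checking, since they are the closest pairs."""
    values = sorted(sample)
    return all(b - a > dont_pick_closest
               for a, b in zip(values, values[1:]))
```

Every sample printed in the examples above passes this check with dont_pick_closest=2, while e.g. a sample containing both 10 and 12 would fail (their difference is exactly 2, not more).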
Min and max samples

You may want to know just how many items you can sample from a given list while obeying the dont_pick_closest rule:

from sampling_utils import get_min_samples, get_max_samples

print(get_min_samples([1,2,3,4,5,6,8,9,10,12,14], dont_pick_closest=2))
print(get_max_samples([1,2,3,4,5,6,8,9,10,12,14], dont_pick_closest=2))

# Output
# Min 3
# Max 4

Sampling without replacement successively / Generating batches of samples for one epoch

If you want to successively sample without replacement, i.e. sample as many samples from the list as possible without repeating, you can use batch_rand_generator as shown below. This is particularly useful to generate batches of data until no more batches can be generated (equivalent to one epoch).

from sampling_utils import batch_rand_generator
from sampling_utils import get_batch_generator_elements

batch_size = 2
brg = batch_rand_generator([1,2,3,4,5,6,8,9,10,12,14], batch_size=batch_size, dont_pick_closest=2)
print(get_batch_generator_elements(brg, batch_size=batch_size))

# Output
# [[1, 4], [8, 5], [14, 3], [2, 6]]

Notice that the elements:
- within each batch obey the dont_pick_closest rule (e.g. 1 and 4 from batch 1)
- from different batches need not obey the rule (e.g. 4 and 5 from batches 1 and 2 respectively).

Contributing

Pull requests are very welcome.

- Fork the repo
- Create a new branch with the feature name as the branch name
- Check if things work with a Jupyter notebook
- Raise a pull request

Licence

Please see the attached Licence.
https://pypi.org/project/sampling-utils/
Hi, I want to create a plugin which offers the ability to edit certain XML files using my own FileEditor (like the source editor or the GUI builder editor). But how do I associate the file extension with my editor class, so that a double-click on the XML file in the project will open my editor? I've taken a look at ImageEditor from Alexey Efimov, but was not able to open an editor by double-clicking images :( Thanks in advance. -- Cheers, Tom

Hi, it must open... Hmm... Which version of IDEA are you testing? The common logic is:
1. You implement FileEditorProvider.
2. When IDEA asks, you check whether you can edit the file or not.
3. If your FileEditorProvider accepts the file for editing, then you may control how it will be edited: as the main editor (disabling the other editors), or as an additional editor - then the editors will appear as tabs at the bottom.
Thanks!

Build 1178. I've created 3 implementations: com.intellij.openapi.components.ApplicationComponent, com.intellij.openapi.fileEditor.FileEditorProvider and com.intellij.openapi.fileEditor.FileEditor. The content of the META-INF/plugin.xml looks like:

<idea-plugin>
  <name>FormBuilder</name>
  <description>FormBuilder Plugin</description>
  <version>0.0.1</version>
  <vendor>Thomas Singer</vendor>
  <application-components>
    <component>
      <implementation-class>de.regnis.formbuilder.FormBuilderApplicationComponent</implementation-class>
    </component>
  </application-components>
</idea-plugin>

But how do I tell IDEA the name of my FileEditorProvider? Tom

Hello! You must implement both ApplicationComponent and FileEditorProvider for your editor. This is all that you need to create your own editor :) Thanks!

Thomas Singer (MoTJ) wrote: The trick is that the ApplicationComponent also has to implement the FileEditorProvider, like this:

public class ExampleFileEditorProvider implements ApplicationComponent, FileEditorProvider {
}

Then IDEA will find it by itself. Sascha

> I checked it just now, it works on double click and opens images :)

Sorry, this does not work - neither with my plugin nor with ImageEditor. When I double-click a file (which is shown as ) I get a "Register New Extension" dialog, but there is no ImageEditor available... Maybe somebody from IntelliJ can comment on this issue?
Tom

Well, not here :( What icons do your graphics have in IDEA's Project view? Tom

Does it mean that IDEA calls accept(Project, VirtualFile) for all of the plugin's Application-/ProjectComponents which implement FileEditorProvider, and takes the first one which returns true? Tom

Thomas Singer (MoTJ) wrote: I'm pretty sure this is the way it works. However, for unknown file types you'll always get IDEA's file-type selection box first. As far as I know there's currently no way for an EditorProvider to register new file types, so this has to be done manually once. Sascha

Tom, I have registered a Custom type for binary... So, if you see '?' as the icon, then you need to register a custom type.

Ah, OK, now it is working. I expected that, when IDEA shows the Register New Extension dialog without any hint of the ImageEditor, IDEA had not recognized that there is a custom file editor behind it. Any related RFE to vote for? Tom

There was some word of a "FileTypeManager" API on NNTP, but the Open API still does not allow doing this from a plugin.

Greetings! I made it as a workaround in PLUS. Take a look at the PLUS 0.1.2 API. Thanks!

See subject. Tom

Hi, IDEA handles document saving internally. You may also look at com.intellij.openapi.fileEditor.FileDocumentManager; maybe it helps. TIA, Dmitry

Could you please be a little bit more precise about the details? How are FileEditor and Document related? Could you please provide a simple example, e.g. using a JTextArea for editing text files? Thanks in advance. Tom

Hi, probably like this: with FileDocumentManager.getDocument(VirtualFile file) you'll get the Document for your file, and after this you can do everything you want with it. Hope it helps. TIA, Dmitry

Where can I get this Image Editor code?
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206128759-FileEditor-questions?page=1#community_comment_206165545
Dependencies are part of software development. It is unrealistic to expect to run a project of any decent size without external dependencies. Not all code is worth writing, and a lot of clever people have written clever code which we would be clever to use in our projects. But managing dependencies can be tricky. In this article, I'll share some thoughts on how we stayed sane with dependencies at N26.

Documenting dependencies

As a project grows bigger and older, it sometimes becomes difficult to know which dependency serves what purpose. For some of them, it's pretty obvious, but for dependencies around tooling, for instance, it can get tricky to keep track of what's needed. A good practice could be to document when to add a dependency, and why a dependency is being used if not obvious. Our documentation on the matter contains the following cheatsheet to figure out how to decide to add a new package:

Auditing dependencies

Auditing dependencies is important to make sure we do not use packages afflicted with known vulnerabilities. Various projects tackle this issue at scale, such as Snyk or npm audit. I personally like npm audit because it's backed by npm and free to use, but the console output can be daunting. That's why I wrote a Node script wrapping npm audit to make the CLI output a little more digestible and actionable. It's not published on npm because who has time for that, but it's available as a GitHub Gist and can then be copied and pasted into a JavaScript file in one's project. Cool features include:

Unused dependencies

Looking for unused dependencies is not very convenient. There might be projects out there doing it, but who has time to deal with third-party dependencies to manage dependencies. So I wrote a very small Bash script to check whether dependencies are referenced at all in a project. The idea is pretty straightforward: go through all the dependencies (or devDependencies), and then search within the project for whether they are referenced, prefixed with an opening quote (e.g. 'lodash).
This specific search pattern will make sure it works for:

- require calls (e.g. require('lodash'))
- import statements (e.g. import lodash from 'lodash')
- non-root paths (e.g. import lodash from 'lodash/fp')

If you happen to use double quotes, you will need to update the script to reference a double quote (") instead of a single quote (').

When extracted as a little groom_deps function in one's .zshrc or .bashrc file, it can be used within any project pretty conveniently. The type of dependencies (dependencies, devDependencies or peerDependencies) can be passed as an argument and defaults to dependencies.

function groom_deps {
  key=${1:-dependencies}
  for dep in $(cat package.json | jq -cr ".$key|keys|.[]"); do
    [[ -z "$(grep -r --exclude-dir=node_modules "'${dep}" .)" ]] && echo "$dep appears unused"
  done
}

groom_deps devDependencies

Note that some dependencies are required while not being imported anywhere in JavaScript code. For instance, @babel/polyfill, iltorb or other similar dependencies can be necessary while not being explicitly mentioned in JavaScript code. Therefore, tread carefully.

Outdated dependencies

You might be familiar with third-party tools like Dependabot or Greenkeeper which automatically submit pull requests to update dependencies. They are nice, but they also have downsides:

- They require some initial setup.
- They create a lot of noise, and a permanent stream of updates.
- They could fail to pass security approvals in some organisations.

That's why, a long time ago, I authored a small Node program to look for outdated dependencies. Similar packages exist as well; this is just my take on it. It works like this: it goes through the dependencies (and optionally devDependencies and peerDependencies) of the given package.json file. For each package, it requests information from the npm registry and compares the versions to see if the one listed is the latest one. If it is not, it mentions it.
The output could look something like this:

Unsafe updates
==============
Major version bumps or any version bumps prior to the first major release (0.y.z).

* chalk @ 4.1.0 is available (currently ^2.4.2)
* commander @ 6.2.0 is available (currently ^3.0.0)
* ora @ 5.1.0 is available (currently ^3.4.0)
* pacote @ 11.1.13 is available (currently ^9.5.8)
* semver @ 7.3.2 is available (currently ^6.3.0)
* ava @ 3.13.0 is available (currently ^2.3.0)
* standard @ 16.0.3 is available (currently ^14.1.0)

npm install --save chalk@4.1.0 commander@6.2.0 ora@5.1.0 pacote@11.1.13 semver@7.3.2
npm install --save-dev ava@3.13.0 standard@16.0.3

I actually never published the package on npm because I couldn't be bothered to find a name that wasn't already taken. The current recommended usage is to clone it locally and use it through Node or the CLI. I personally added the little snippet below to my .zshrc file so it provides a deps function I can run in a project to look for dependency updates.

function deps() {
  node ../dependency-checker/bin -p package.json --dev --no-pr
}

The script is by no means perfect:
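The comparison at the heart of such a checker can be sketched without a full semver library for plain x.y.z versions. The helper names below are hypothetical, and a real script should lean on the semver package instead, which also handles ranges and pre-release tags:

```javascript
// Compare two plain x.y.z version strings numerically. Returns -1, 0 or 1.
function compareVersions(a, b) {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < 3; i++) {
    if ((pa[i] || 0) < (pb[i] || 0)) return -1;
    if ((pa[i] || 0) > (pb[i] || 0)) return 1;
  }
  return 0;
}

// An update is "unsafe" in the sense used above if it bumps the major
// version, or if the package has not reached 1.0.0 yet (0.y.z).
function isUnsafeUpdate(current, latest) {
  const [curMajor] = current.split('.').map(Number);
  const [latMajor] = latest.split('.').map(Number);
  return latMajor > curMajor || latMajor === 0;
}

// e.g. chalk ^2.4.2 -> 4.1.0 is a major bump, hence an unsafe update.
```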
https://www.scien.cx/2020/11/19/managing-npm-dependencies/
Hey guys, I have a question regarding casting abstract classes to their derived counterparts. I'm trying to cast from the base class to the derived class, but both objects are held in smart pointers. Here's an example:

#include <memory>
#include <string>

class Shape {
    //std::string * type;
public:
    virtual int Area() = 0;
};

class Box : public Shape {
    //char * m_pBuffer;
public:
    virtual int Area();
};

int Box::Area() {
    return 2;
}

int test() {
    std::shared_ptr<Shape> pShape(new Box());
    std::shared_ptr<Box> pBox = static_cast<std::shared_ptr<Box>>(pShape); // no conversion
    std::shared_ptr<Box> pSecondBox = pShape; // no conversion

    Shape* pStrongShape = new Box();
    Box* pStrongBox = static_cast<Box*>(pStrongShape); // works
    return 0;
}

In the test function, at the beginning, I'm trying to cast from Shape to Box by using static_cast, but I get an error telling me that there is no suitable conversion between the two. I can't use dynamic_cast either, since it tells me that for a dynamic cast the type must be a pointer type. What would be the proper way of achieving this using smart pointers? Shouldn't static casts do the conversion properly for me? Or am I forced to convert between normal pointers and smart pointers in order for this to work?

Edited by D.V.D, 19 June 2014 - 10:02 PM.
http://www.gamedev.net/topic/657904-abstract-class-deriving-upwards/
Using IMDbPY to search for movies : Python

In this tutorial, we will be learning how to search for a movie using IMDbPY in Python. IMDb is an online database of information related to movies, television series, video games, streaming content online, documentaries, etc. It also includes cast, production team, biographies, plots, ratings, and critical reviews.

There are many instances when we need to search for a movie or television program and get some information like its rating, reviews, or cast. Python provides us with a package that can do this task for us. The name of this package is IMDbPY.

First, we need to install this package. Let's do it by using the following command in the command prompt or terminal:

pip install IMDbPY

Now we are ready to use it and its features in our Python program. We are going to use the search_movie() method to search for a movie. The syntax for this method is as follows:

imdb_obj.search_movie(movie_name)

In the above syntax, imdb_obj is an instance of IMDb, and movie_name is the name of the movie that this method takes as an argument. The method returns a list of items for the searched title.

Example programs to search for a movie

Here, you can see an example program for searching for a movie using IMDbPY:

import imdb

imdb_obj = imdb.IMDb()
item_list = imdb_obj.search_movie('Ford vs Ferrari')
for i in item_list:
    print(i)

Output:

Ford v Ferrari
Ford v Ferrari
Ford v Ferrari
Ford v Ferrari
Shelby vs. Ferrari
Ford GT40 vs. Ferrari
Take Two
Zakarian vs. Ferraro: Peach Desert Water, LED TVs, Ferraris
Shelby Legendary Cars: Ford V Ferrari - 'CSX 8198' Cobra
Behind the scenes: 'Ford V Ferrari'
Reaction from stars on 'Ford V Ferrari'
Supercar Road Trip - Ford GT vs. Ferrari F430 vs. Pagani Zonda
Reaction from Stars on 'Ford V Ferrari' - Legendary
Rendezvous at premiere of 'Ford V Ferrari'
Sci Fi a Bomb, Ford v.
Ferrari, Colorado Brown Stain
Superformance LLC: Ford V Ferrari 'Ken Miles Edition' Cobra
Once Upon A Time In Hollywood, Marriage Story, Ford V Ferrari
Pagani Zonda Cinque vs McLaren P1 vs Ferrari F40: Abdul's Garage //LTACY SPECIAL EDITION DUBAI Pt. 1

Let's see another example:

import imdb

imdb_obj = imdb.IMDb()
item_list = imdb_obj.search_movie('Agent Vinod')
print(item_list)

Output:

[<Movie id:1395025[http] title:_Agent Vinod (2012)_>, <Movie id:0165610[http] title:_Agent Vinod (1977)_>]

Thank you.
https://www.codespeedy.com/using-imdbpy-to-search-for-movies-python/
Using Google Contracts for Java with IntelliJ IDEA

The first step, after obtaining the Google Contracts for Java jar and adding it to your project, is to enable annotation processing support in IDEA. To do this, open the Settings window (IntelliJ IDEA > Preferences on Mac) and go to Compiler > Annotation Processors. Now do the following steps:

- Check the Enable Annotation processing option.
- Select the Obtain processors from project classpath option.
- In the Processed Modules table, click the Add button and select the module for which you want to enable contracts.
- Hit Apply and you are ready to rock.

To test this you can add a basic contract annotation to a method in a class. Here is one that I created using the @Requires annotation to ensure that the integer method parameter "c" has a value greater than zero:

import com.google.java.contract.Requires;

public class TestSomeContracts {

    @Requires({"c > 0"})
    public void testContract(int c) {
    }
}

Now when you compile, you won't get much feedback as to whether the annotation was processed or not, as the contract syntax is correct, so let's modify it a bit to generate a compile splat by changing the variable name in the precondition:

    @Requires({"cs > 0"})
    public void testContract(int c) {
    }

When you build the module, you will now get a compilation failure which kind of integrates into IDEA, in that you can click on the error in the Messages window and it will take you to the line that failed. Unfortunately IDEA won't highlight the line in your editor or anything fancy like what you get in Eclipse, but it's good enough to work with.

Using a class with contracts

After reverting and compiling the class, I created a simple test case to test the contract by passing in data that violates it (i.e. an integer less than 1):

import org.junit.Test;

public class TestSomeContractsTest {

    @Test
    public void testContract() {
        new TestSomeContracts().testContract(-1);
    }
}

I run the test… and... it passes!
In order to actually work, Google Contracts needs to do some bytecode shenanigans to enforce the contract definitions at runtime. Currently there are two modes of operation: an offline instrumenter, which is a post-compilation processor that weaves the contracts into a compiled class, and a Java agent. For development, the most convenient method to use is the Java agent.

To use the agent in IDEA, click the Select Run/Debug Settings drop-down and select the Edit Configurations option. Expand the Defaults entry, select the JUnit option (or TestNG, or just plain Application) and add the following to the VM parameters field:

-javaagent:[path to the Google Contracts for Java jar file]

I also removed the existing configuration so that it picks up the new option when I run the test again. Now I run the test again and - voila - a nice big splat when I pass invalid data to the method:

com.google.java.contract.PreconditionError: c > 0
    at TestSomeContracts.com$google$java$contract$PH$TestSomeContracts$testContract(TestSomeContracts.java:9)
    at TestSomeContracts.testContract(TestSomeContracts.java)
    at TestSomeContractsTest.testContract(TestSomeContractsTest.java:9)
    at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:65)

On a final note: Google Contracts is still pretty fresh, so here's hoping IDE support will improve if the project takes off. I must also say that I'm not too fond of the current mechanisms for enforcing the contract. Hopefully Google might take a page from Lombok and do the weaving at compile time.
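To see what the agent is enforcing, it can help to compare it with a hand-written equivalent. The class below is my own illustration of the check that corresponds to @Requires({"c > 0"}) - it is not the bytecode the library actually generates, and IllegalArgumentException stands in for the library's own PreconditionError so the class runs anywhere:

```java
// Hand-written illustration of what @Requires({"c > 0"}) enforces.
public class ManualPrecondition {

    public void testContract(int c) {
        if (!(c > 0)) {
            throw new IllegalArgumentException("precondition failed: c > 0 (c = " + c + ")");
        }
        // method body would go here
    }

    public static void main(String[] args) {
        ManualPrecondition m = new ManualPrecondition();
        m.testContract(1); // satisfies the precondition
        try {
            m.testContract(-1); // violates it, like the JUnit test above
            throw new AssertionError("expected the precondition to fail");
        } catch (IllegalArgumentException expected) {
            System.out.println("precondition enforced");
        }
    }
}
```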
http://java.dzone.com/articles/using-google-contracts-java
Well, I had to make a program which uses 2 hashing functions on keys that it gets from a file and stores them in a table. The first function lets the user define M (using the function h(k) = k mod M) and then applies a second, predefined hashing function (in this case, h2(k) = 8 - k mod 8). The following code that I whipped up seems to work fine, but this is however a kind of "rough draft" that I made. Hashing is a completely new concept to me and I am curious if I am doing it right (the first hash function gives me the same results as examples given in my book, but the book doesn't have any double hashing examples). Also, I am just looking for areas where I could improve. I am considering changing the arrays to linked lists, or perhaps even binary trees, but only if that alternative would prove more efficient. Any other ideas are welcomed.

Code:
#include <iostream>
#include <cmath>
#include <cctype>
#include <fstream>

using namespace std;

int first_hash(int a[], int i, int x, int M)
{
    if (i > -1)
        return x = (a[i] + 32 * first_hash(a, i - 1, x, M)) % M;
    else
        return 0;
}

int second_hash(int a[], int i, int x)
{
    if (i > -1)
    {
        x = (a[i] + 32 * second_hash(a, i - 1, x)) % 8;
        return 8 - x;
    }
    else
        return 0;
}

int do_thehash(char a[], int c, int M)
{
    char d;
    int h = 0, tmp[20];
    for (int i = 0; a[i]; i++, h++) // reads each char and converts to a corresponding int (A = 1, B = 2, etc)
    {
        d = a[i];
        if (isupper(d))
            tmp[h] = int(d) - 64;
        else
            tmp[h] = int(d) - 96;
    }
    h = h - 1;
    c = first_hash(tmp, h, c, M) + second_hash(tmp, h, c);
    return c;
}

void table(int a[], int c, int i)
{
    a[i] = c;
    i++;
}

void sort_table(int a[], int x)
{
    int j = 0, tmp = 0;
    for (int i = 1; i < x; i++)
    {
        tmp = a[i];
        j = i;
        while (a[j - 1] > tmp)
        {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = tmp;
    }
}

int main(void)
{
    char input[20];
    int c = 0, M, x = 0, hash_table[100];

    /**** File stuff (will move later) ****/
    char ofilename[16], ifilename[16];
    ofstream write;
    ifstream read;
    cout << "Enter an input file: ";
    cin >>
    ifilename;
    read.open(ifilename);
    if (read.fail())
    {
        cout << "That file does not exist!" << endl;
        exit(1);
    }
    cout << "Enter a value for M: ";
    cin >> M;
    while (read >> input)
    {
        c = do_thehash(input, c, M);
        table(hash_table, c, x);
        cout << c << endl;
        x++;
    }
    sort_table(hash_table, x);
    cout << "Enter an output file to write to: ";
    cin >> ofilename;
    write.open(ofilename);
    int j = 0;
    for (int i = 0; j < x; i++)
    {
        if (i == (hash_table[j]) - 1)
        {
            cout << 'x' << " -- " << x << " -- " << hash_table[j] << endl;
            write << 'x' << endl;
            j++;
        }
        else
            write << endl;
    }
    write.close();
    cout << "File written" << endl;
    return 0;
}
http://cboard.cprogramming.com/cplusplus-programming/36841-double-hashing.html
It was great fun, and to make sure there was something fresh for the children to get their teeth into I came up with a new exercise called "LavaTrap", so I thought I would share it! You can download the worksheet, which is great printed as an A5 booklet on a single piece of A4, or you can follow the exercise below.

The Lava Trap

Using Python and Minecraft: Pi edition we can change the game to do amazing things. You are going to create a mini game - a Lava Pit will instantly appear and Steve will be put at the centre of it; soon the block he is standing on will disappear so he will have to move, but hang on, all the blocks keep disappearing!

The first task is to start your program and get "Welcome to the Lava Trap" to appear on the screen:

- Press ESC to go back to the Minecraft menu but leave the game playing.
- Open Python IDLE by clicking Menu > Programming > Python 3.
- Use File > New Window to create a new program and save it as 'myprogram.py'.
- Type the following code into the program to import the modules you will need.

from mcpi.minecraft import Minecraft
from mcpi import block
from time import sleep

- Create a connection to Minecraft using the code.

mc = Minecraft.create()
mc.postToChat("Welcome to the Lava Trap")

- Run your program by clicking Run > Run Module. You should see your message appear in the Minecraft chat window.

Tips

Lava

Next you will use the api to create the pit where the game will be played - when your program runs it will instantly appear under Steve before the game starts. First update your program so it creates a single block of lava under Steve, by adding the following code:

- Put a 3 second delay into your program so that you can see what's going on.

sleep(3)

- Find out where Steve is in the world.

pos = mc.player.getTilePos()

- Create a block of lava under Steve.

mc.setBlock(pos.x, pos.y - 1, pos.z, block.LAVA.id)

- Run your program by clicking Run > Run Module or by pressing F5.
Your program will wait 3 seconds and then a block of Lava will appear under Steve - he's going to fall in and burn!

Tips

You need to create a lot more lava in order for the game to be a lot more fun - next you will program a large slab of STONE with LAVA on the top:

- Use setBlocks() to create an area of STONE 2 blocks below Steve for the LAVA to sit on.

mc.setBlocks(pos.x - 5, pos.y - 2, pos.z - 5,
             pos.x + 5, pos.y - 2, pos.z + 5,
             block.STONE.id)

- Then create the LAVA under Steve.

mc.setBlocks(pos.x - 5, pos.y - 1, pos.z - 5,
             pos.x + 5, pos.y - 1, pos.z + 5,
             block.LAVA.id)

- Run your program to see the Lava 'pit'.

Make a DIAMOND_BLOCK platform in the middle for Steve to stand on:

- Create the diamond block.

mc.setBlock(pos.x, pos.y - 1, pos.z, block.DIAMOND_BLOCK.id)

- Run your program; Steve will be stuck in the middle of the Lava pit.

Challenge 1 - Can you code yourself out of the Lava Pit?

At the moment, unless Steve flies or lays blocks down he can't get out without getting burned - update your program so he can get out.

Tips

Make a game

Update your program to make blocks under Steve disappear:

- Post messages to the chat screen to warn the player the game is about to start.

mc.postToChat("Get Ready")
mc.postToChat("Blocks under you will keep disappearing")
sleep(3)
mc.postToChat("Go")

- Create a variable called gameover and set it to False - it will be set to True at the end of the game.

gameover = False

- Create a loop which will continue until the game is over.

while gameover == False:

- Get Steve's position.

    playpos = mc.player.getTilePos()

- Turn the block under Steve to OBSIDIAN as a warning and wait for 2 seconds.

    mc.setBlock(playpos.x, playpos.y - 1, playpos.z, block.OBSIDIAN.id)
    sleep(2)

- After the warning, turn the block to AIR; if Steve is still standing on it, he's going to be in the Lava pit.
    mc.setBlock(playpos.x, playpos.y - 1, playpos.z, block.AIR.id)
    sleep(0.5)

- Run the program; the game will start and you will have to put blocks down in the Lava pit to escape, because otherwise they are going to disappear and Steve will fall in.

Game over

The game is over if Steve falls into the Lava. You need to modify your program to check if he has fallen in and put a message on the screen:

- Use an if statement to see if Steve's height (y) is not equal to where he started; if so, set the gameover variable to True.

    if playpos.y != pos.y:
        gameover = True

- Put a message on the screen to let the player know they have been caught in the lava trap.

        mc.postToChat("Game over.")

- Run your program and see how long you can stay out of the lava.

Challenge 2 - Make the game your own

This game is just the start - can you finish it? Here are some challenges:

- Make the game harder.
- Make a better game arena, perhaps build a stadium or walls around it so Steve can get out.
- Add points to the game: each time Steve doesn't fall in, he gets a point.
- Change the game so it starts easy but gets harder the longer you play.
- Add a 2 player (or even multiplayer!) option.

Would like to use this at a meetup at my local library some time, and I can almost imagine how much fun it must have been at PyCon.

Feel free, it's all licensed Creative Commons :)

Haha, sounds verrry familiar :)
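For the "starts easy but gets harder" challenge above, one simple approach is to shrink the warning delay a little each time round the loop. This is only a suggested sketch - the function name and the numbers are mine, not part of the worksheet:

```python
def warning_delay(rounds_survived, start=2.0, minimum=0.5, step=0.1):
    """How long (in seconds) to show the OBSIDIAN warning before the
    block turns to AIR. Starts at `start`, shrinks by `step` each
    round, and never drops below `minimum` so the game stays playable."""
    return max(minimum, start - step * rounds_survived)

# In the game loop you would replace sleep(2) with something like:
#   sleep(warning_delay(rounds))
#   rounds = rounds + 1
```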
http://www.stuffaboutcode.com/2015/09/minecraft-game-tutorial-lavatrap-pycon.html
QDataStream problem from 4.8 to 5.1 on MacOS

Hello, I am getting an error on this:

QByteArray ba;
QDataStream ds(&ba, QIODevice::WriteOnly);
//This will write the hash into the byte array (through the data stream)
ds << hash;

The error output is:

licensefile.cpp:185: error: variable has incomplete type 'QDataStream'
QDataStream ds(&ba);
            ^

Any help please?

Did you perhaps forget the #include <QDataStream>? Badger

Thanks, stupidity :) The funny thing is that on 4.8 this was working :) Now it gives me the following error:

Qt5.1.0/5.1.0/clang_64/include/QtCore/qhash.h:110: error: call to 'qHash' is ambiguous
Q_DECL_NOEXCEPT_EXPR(noexcept(qHash(t)))
                              ^~~~~

Sorry, I have no idea about that. Can you perhaps post the code that generates this error?

This is from qhash.h; I think this could be something I need to put in my .pro. I don't even work with QHash...

template<typename T> inline uint qHash(const T &t, uint seed)
    Q_DECL_NOEXCEPT_EXPR(noexcept(qHash(t)))
{ return (qHash(t) ^ seed); }

SGaist (Lifetime Qt Champion): Hi, are you using a custom hash?

No...

SGaist: Could you show the part of your code that triggers this?
https://forum.qt.io/topic/31480/qdatastream-problem-from-4-8-to-5-1-on-macos
Author: Michael Knyszek

The need for a new API for unstable metrics was already summarized quite well by @aclements, so I'll quote that here:

The runtime currently exposes heap-related metrics through runtime.ReadMemStats (which can be used programmatically) and GODEBUG=gctrace=1 (which is difficult to read programmatically). These metrics are critical to understanding runtime behavior, but have some serious limitations:

- MemStats is hard to evolve because it must obey the Go 1 compatibility rules. The existing metrics are confusing, but we can't change them. Some of the metrics are now meaningless (like EnableGC and DebugGC), and several have aged poorly (like hard-coding the number of size classes at 61, or only having a single pause duration per GC cycle). Hence, we tend to shy away from adding anything to this because we'll have to maintain it for the rest of time.
- The gctrace format is unspecified, which means we can evolve it (and have completely changed it several times). But it's a pain to collect programmatically because it only comes out on stderr and, even if you can capture that, you have to parse a text format that changes. Hence, automated metric collection systems ignore gctrace. There have been requests to make this programmatically accessible (#28623).

There are many metrics I would love to expose from the runtime memory manager and scheduler, but our current approach forces me to choose between two bad options: programmatically expose metrics that are so fundamental they'll make sense for the rest of time, or expose unstable metrics in a way that's difficult to collect and process programmatically.

Other problems with ReadMemStats include performance, such as the need to stop-the-world. While it's otherwise difficult to collect many of the metrics in MemStats, not all metrics require it, and it would be nice to be able to acquire some subset of metrics without a global application penalty.
Conversing with @aclements, we agree that: Given the requirements, I suggest we prioritize the following concerns when designing the API in the following order. I propose we add a new standard library package to support a new runtime metrics API to avoid polluting the namespace of existing packages. The proposed name of the package is the runtime/metrics package. I propose that this package expose a sampling-based API for acquiring runtime metrics, in the same vein as runtime.ReadMemStats, that meets this proposal‘s stated goals. The sampling approach is taken in opposition to a stream-based (or event-based) API. Many of the metrics currently exposed by the runtime are “continuous” in the sense that they’re cheap to update and are updated frequently enough that emitting an event for every update would be quite expensive, and would require scaffolding to allow the user to control the emission rate. Unless noted otherwise, this document will assume a sampling-based API. With that said, I believe that in the future it will be worthwhile to expose an event-based API as well, taking a hybrid approach, much like Linux's perf tool. See “Time series data” for a discussion of such an extension. Firstly, it probably makes the most sense to interact with a set of metrics, rather than one metric at a time. Many metrics require that the runtime reach some safe state to collect, so naturally it makes sense to collect all such metrics at this time for performance. For the rest of this document, we're going to consider “sets of metrics” as the unit of our API instead of individual metrics for this reason. Second, the extendability and retractability requirements imply a less rigid data structure to represent and interact with a set of metrics. Perhaps the least rigid data structure in Go is something like a byte slice, but this is decidedly too low-level to use from within a Go application because it would need to have an encoding. 
Simply defining a new encoding for this would be a non-trivial undertaking with its own complexities. The next least-rigid data structure is probably a Go map, which allows us to associate some key for a metric with a sampled metric value. The two most useful properties of maps here are that their set of keys is completely dynamic, and that they allow efficient random access. The inconvenience of a map, though, is its undefined iteration order. While this might not matter if we're just constructing an RPC message to hit an API, it does matter if one just wants to print statistics to STDERR every once in a while for debugging. A slightly more rigid data structure that would be useful for managing an unstable set of metrics is a slice of structs, with each struct containing a key (the metric name) and a value. This allows us to have a well-defined iteration order, and it's up to the user if they want efficient random access. For example, they could keep the slice sorted by metric keys and do a binary search over them, or even keep a map on the side. There are several variants of this slice approach (e.g. a struct of a keys slice and a values slice), but I think the general idea of using slices of key-value pairs strikes the right balance between flexibility and usability. Going any further in terms of rigidity and we end up right where we don't want to be: with a MemStats-like struct. Third, I propose the metric key be something abstract but still useful for humans, such as a string. An alternative might be an integral ID, where we provide a function to obtain a metric's name from its ID. However, using an ID pollutes the API. Since we want to allow a user to ask for specific metrics, we would be required to provide named constants for each metric, which would later be deprecated. It's also unclear that this would give any performance benefit at all. Finally, we want the metric value to be able to take on a variety of forms.
Many metrics might work great as uint64 values, but most do not. For example, we might want to collect a distribution of values (size classes are one such example). Distributions in particular can take on many different forms, for example if we wanted to have an HDR histogram of STW pause times. In the interest of being as extensible as possible, something like an empty interface value could work here. However, an empty interface value has implications for performance. How do we efficiently populate that empty interface value without allocating? One idea is to only use pointer types, for example it might contain *float64 or *uint64 values. While this strategy allows us to re-use allocations between samples, it's starting to rely on the internal details of Go interface types for efficiency.

Fundamentally, the problem we have here is that we want to include a fixed set of valid types as possible values. This concept maps well to the notion of a sum type in other languages. While Go lacks such a facility, we can emulate one. Consider the following representation for a value:

```go
type Kind int

const (
	KindBad Kind = iota
	KindUint64
	KindFloat64
	KindFloat64Histogram
)

type Value struct {
	// unexported fields
}

func (v Value) Kind() Kind

// panics if v.Kind() != KindUint64
func (v Value) Uint64() uint64

// panics if v.Kind() != KindFloat64
func (v Value) Float64() float64

// panics if v.Kind() != KindFloat64Histogram
func (v Value) Float64Histogram() *Float64Histogram
```

The advantage of such a representation means that we can hide away details about how each metric sample value is actually represented. For example, we could embed a uint64 slot into the Value which is used to hold either a uint64, a float64, or an int64, and which is populated directly by the runtime without any additional allocations at all.
For types which will require an indirection, such as histograms, we could also hold an unsafe.Pointer or interface{} value as an unexported field and pull out the correct type as needed. In these cases we would still need to allocate once up-front (the histogram needs to contain a slice for counts, for example).

The downside of such a structure is mainly ergonomics. In order to use it effectively, one needs to switch on the result of the Kind() method, then call the appropriate method to get the underlying value. While in that case we lose some type safety as opposed to using an interface{} and a type-switch construct, there is some precedent for such a structure. In particular, a Value mimics the API of reflect.Value in some ways.

Putting this all together, I propose sampled metric values look like:

```go
// Sample captures a single metric sample.
type Sample struct {
	Name  string
	Value Value
}
```

Furthermore, I propose that we use a slice of these Sample structures to represent our "snapshot" of the current state of the system (i.e. the counterpart to runtime.MemStats).

To support discovering which metrics the system supports, we must provide a function that returns the set of supported metric keys. I propose that the discovery API return a slice of "metric descriptions" which contain a "Name" field referring to a metric key. Using a slice here mirrors the sampling API.

Choosing a naming scheme for each metric will significantly influence its usage, since these are the names that will eventually be surfaced to the user. There are two important properties we would like to have such that these metric names may be smoothly and correctly exposed to the user. The first, and perhaps most important, of these properties is that semantics be tied to their name. If the semantics (including the type of each sample value) of a metric changes, then the name should too.
The second is that the name should be easily parsable and mechanically rewritable, since different metric collection systems have different naming conventions.

Putting these two together, I propose that the metric name be built from two components: a forward-slash-separated path to a metric where each component is lowercase words separated by hyphens (the "name", e.g. "/memory/heap/free"), and its unit (e.g. bytes, seconds). I propose we separate the two components of "name" and "unit" by a colon (":") and provide a well-defined format for the unit (e.g. "/memory/heap/free:bytes").

Representing the metric name as a path is intended to provide a mechanism for namespacing metrics. Many metrics naturally group together, and this provides a straightforward way of filtering out only a subset of metrics, or perhaps matching on them. The use of lower-case and hyphenated path components is intended to make the name easy to translate to most common naming conventions used in metrics collection systems. The introduction of this new API is also a good time to rename some of the more vaguely named statistics, and perhaps to introduce a better namespacing convention.

Including the unit in the name may be a bit surprising at first. First of all, why should the unit even be a string? One alternative way to represent the unit is to use some structured format, but this has the potential to lock us into some bad decisions or limit us to only a certain subset of units. Using a string gives us more flexibility to extend the units we support in the future. Thus, I propose that no matter what we do, we should definitely keep the unit as a string.

In terms of a format for this string, I think we should keep the unit closely aligned with the Go benchmark output format to facilitate a nice user experience for measuring these metrics within the Go testing framework.
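As a quick illustration of the "easily parsable" property of the colon-separated scheme above, a short standalone sketch is enough to separate the two components. The `splitMetricName` helper is invented for illustration and is not part of the proposal:

```go
package main

import (
	"fmt"
	"strings"
)

// splitMetricName separates a metric key like "/memory/heap/free:bytes"
// into its path component and its unit component.
func splitMetricName(key string) (name, unit string) {
	i := strings.IndexByte(key, ':')
	if i < 0 {
		return key, "" // no unit present
	}
	return key[:i], key[i+1:]
}

func main() {
	name, unit := splitMetricName("/memory/heap/free:bytes")
	fmt.Println(name, unit)
}
```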
This goal suggests the following very simple format: a series of all-lowercase common base unit names, singular or plural, without SI prefixes (such as "seconds" or "bytes", not "nanoseconds" or "MiB"), potentially containing hyphens (e.g. "cpu-seconds"), delimited by either * or / characters. A regular expression is sufficient to describe the format and, ignoring the restriction of common base unit names, would look like ^[a-z-]+(?:[*\/][a-z-]+)*$.

Why should the unit be a part of the name? Mainly to help maintain the first property mentioned above. If we decide to change a metric's unit, which represents a semantic change, then the name must also change. Also, in this situation, it's much more difficult for a user to forget to include the unit. If their metric collection system has no rules about names, then great, they can just use whatever Go gives them. If they do (and most seem to be fairly opinionated) it forces the user to account for the unit when dealing with the name, and it lessens the chance that it would be forgotten. Furthermore, splitting a string is typically less computationally expensive than combining two strings.

Firstly, any metric description must contain the name of the metric. No matter which way we choose to store a set of descriptions, it is both useful and necessary to carry this information around. Another useful field is an English description of the metric. This description may then be propagated into metrics collection systems dynamically.

The metric description should also indicate the performance sensitivity of the metric. Today ReadMemStats forces the user to endure a stop-the-world to collect all metrics. There are a number of pieces of information we could add, but one good one for now would be "does this metric require a stop-the-world event?". The intended use of such information would be to collect certain metrics less often, or to exclude them altogether from metrics collection.
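That intended use can be sketched in a few lines. In the snippet below, the `Description` mirror and the `cheapMetrics` helper are invented for illustration (only the fields needed here are included), and the StopTheWorld flags in the example data are made up, not taken from any real implementation:

```go
package main

import "fmt"

// Description mirrors the proposed metric description for illustration;
// only the fields needed for this sketch are included.
type Description struct {
	Name         string
	StopTheWorld bool
}

// cheapMetrics returns only the descriptions whose metrics can be
// collected without a stop-the-world event, e.g. so they can be
// sampled more frequently than the expensive ones.
func cheapMetrics(all []Description) []Description {
	var out []Description
	for _, d := range all {
		if !d.StopTheWorld {
			out = append(out, d)
		}
	}
	return out
}

func main() {
	descs := []Description{
		{Name: "/sched/goroutines:goroutines", StopTheWorld: false},
		{Name: "/aggregates/total-virtual-memory:bytes", StopTheWorld: true},
	}
	for _, d := range cheapMetrics(descs) {
		fmt.Println(d.Name)
	}
}
```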
While this is fairly implementation-specific for metadata, the majority of tracing GC designs involve a stop-the-world event at one point or another.

Another useful aspect of a metric description would be to indicate whether the metric is a "gauge" or a "counter" (i.e. it increases monotonically). We have examples of both in the runtime, and this information is often useful to bubble up to metrics collection systems to influence how they're displayed and what operations are valid on them (e.g. counters are often more usefully viewed as rates). By including whether a metric is a gauge or a counter in the descriptions, metrics collection systems don't have to try to guess, and users don't have to annotate exported metrics manually; they can do so programmatically.

Finally, metric descriptions should allow users to filter out metrics that their application can't understand. The most common situation in which this can happen is if a user upgrades or downgrades the Go version their application is built with, but they do not update their code. Another situation in which this can happen is if a user switches to a different Go runtime (e.g. TinyGo). There may be a new metric in this Go version represented by a type which was not used in previous versions. For this case, it's useful to include type information in the metric description so that applications can programmatically filter these metrics out. In this case, I propose we add a Kind field to the description.

While the metric descriptions allow an application to programmatically discover the available set of metrics at runtime, it's tedious for humans to write an application just to dump the set of metrics available to them. For ReadMemStats, the documentation is on the MemStats struct itself. For gctrace it is in the runtime package's top-level comment.
Because this proposal doesn't tie metrics to Go variables or struct fields, the best we can do is what gctrace does and document it in the metrics package-level documentation. A test in the runtime/metrics package will ensure that the documentation always matches the metric's English description. Furthermore, the documentation should contain a record of when metrics were added and when metrics were removed (such as a note like "(since Go 1.X)" in the English description). Users who are using an old version of Go but looking at up-to-date documentation, such as the documentation exported to golang.org, will be able to more easily discover information relevant to their application. If a metric is removed, the documentation should note which version removed it.

The API as described so far has been a sampling-based API, but many metrics are updated at well-defined (and relatively infrequent) intervals, such as many of the metrics found in the gctrace output. These metrics, which I'll call "time series metrics," may be sampled, but the sampling operation is inherently lossy. In many cases it's very useful for performance debugging to have precise information about how a metric might change, e.g. from GC cycle to GC cycle. Measuring such metrics thus fits better in an event-based, or stream-based, API which emits a stream of metric values (tagged with precise timestamps) which are then ingested by the application and logged someplace.

While we stated earlier that considering such time series metrics is outside of the scope of this proposal, it's worth noting that buying into a sampling-based API today does not close any doors toward exposing precise time series metrics in the future. A straightforward way of extending the API would be to add the time series metrics to the total list of metrics, allowing the usual sampling-based approach if desired, while also tagging some metrics with a "time series" flag in their descriptions.
The event-based API, in that form, could then just be a pure addition. A feasible alternative in this space is to only expose a sampling API, but to include a timestamp on event metrics to allow users to correlate metrics with specific events. For example, if metrics came from the previous GC, they would be tagged with the timestamp of that GC, and if the metric and timestamp hadn't changed, the user could identify that.

One interesting consequence of having an event-based API which is prompt is that users could then react to Go runtime state on-the-fly, such as detecting when the GC is running. On the one hand, this could provide value to some users of Go, who require fine-grained feedback from the runtime system. On the other hand, the supported metrics will still always be unstable, so relying on a metric for feedback in one release might no longer be possible in a future release.

Given the discussion of the design above, I propose the following draft API specification.

```go
package metrics

// Float64Histogram represents a distribution of float64 values.
type Float64Histogram struct {
	// Counts contains the weights for each histogram bucket. The length of
	// Counts is equal to the length of Buckets plus one to account for the
	// implicit minimum bucket.
	//
	// Given N buckets, the following is the mathematical relationship between
	// Counts and Buckets.
	// count[0] is the weight of the range (-inf, bucket[0])
	// count[n] is the weight of the range [bucket[n], bucket[n+1]), for 0 < n < N-1
	// count[N-1] is the weight of the range [bucket[N-1], inf)
	Counts []uint64

	// Buckets contains the boundaries between histogram buckets, in increasing order.
	//
	// Because this slice contains boundaries, there are len(Buckets)+1 total buckets:
	// a bucket for all values less than the first boundary, a bucket covering each
	// [slice[i], slice[i+1]) interval, and a bucket for all values greater than or
	// equal to the last boundary.
	Buckets []float64
}

// Clone generates a deep copy of the Float64Histogram.
func (f *Float64Histogram) Clone() *Float64Histogram

// Kind is a tag for a metric Value which indicates its type.
type Kind int

const (
	// KindBad indicates that the Value has no type and should not be used.
	KindBad Kind = iota
)

// Value represents a metric value returned by the runtime.
type Value struct {
	kind    Kind
	scalar  uint64         // contains scalar values for scalar Kinds.
	pointer unsafe.Pointer // contains non-scalar values.
}

// Value returns a value of one of the types mentioned by Kind.
//
// This function may allocate memory.
func (v Value) Value() interface{}

// Kind returns the tag representing the kind of value this is.
func (v Value) Kind() Kind

// Uint64 returns the internal uint64 value for the metric.
//
// If v.Kind() != KindUint64, this method panics.
func (v Value) Uint64() uint64

// Float64 returns the internal float64 value for the metric.
//
// If v.Kind() != KindFloat64, this method panics.
func (v Value) Float64() float64

// Float64Histogram returns the internal *Float64Histogram value for the metric.
//
// The returned value may be reused by calls to Read, so the user should clone
// it if they intend to use it across calls to Read.
//
// If v.Kind() != KindFloat64Histogram, this method panics.
func (v Value) Float64Histogram() *Float64Histogram

// Description describes a runtime metric.
type Description struct {
	// Name is the full name of the metric, including the unit.
	//
	// A complete name might look like "/memory/heap/free:bytes".
	Name string

	// Kind is the kind of value for this metric.
	//
	// The purpose of this field is to allow users to filter out metrics whose values are
	// types which their application may not understand.
	Kind Kind

	// StopTheWorld is whether or not the metric requires a stop-the-world
	// event in order to collect it.
	StopTheWorld bool
}

// All returns a slice containing metric descriptions for all supported metrics.
func All() []Description

// Sample captures a single metric sample.
type Sample struct {
	// Name is the name of the metric sampled.
	//
	// It must correspond to a name in one of the metric descriptions
	// returned by Descriptions.
	Name string

	// Value is the value of the metric sample.
	Value Value
}

// Read populates each Value element in the given slice of metric samples.
//
// Desired metrics should be present in the slice with the appropriate name.
// The user of this API is encouraged to re-use the same slice between calls.
//
// Metric values with names not appearing in the value returned by Descriptions
// will simply be left untouched (Value.Kind == KindBad).
func Read(m []Sample)
```

The usage of the API we have in mind for collecting specific metrics is the following:

```go
var stats = []metrics.Sample{
	{Name: "/gc/heap/goal:bytes"},
	{Name: "/gc/pause-latency-distribution:seconds"},
}

// Somewhere...
...
go statsLoop(stats, 30*time.Second)
...

func statsLoop(stats []metrics.Sample, d time.Duration) {
	// Read and print stats every 30 seconds.
	ticker := time.NewTicker(d)
	for {
		metrics.Read(stats)
		for _, sample := range stats {
			split := strings.IndexByte(sample.Name, ':')
			name, unit := sample.Name[:split], sample.Name[split+1:]
			value := sample.Value
			switch value.Kind() {
			case metrics.KindUint64:
				log.Printf("%s: %d %s", name, value.Uint64(), unit)
			case metrics.KindFloat64:
				log.Printf("%s: %f %s", name, value.Float64(), unit)
			case metrics.KindFloat64Histogram:
				v := value.Float64Histogram()
				m := computeMean(v)
				log.Printf("%s: %f avg %s", name, m, unit)
			default:
				log.Printf("unknown value %s:%s: %v", name, unit, value.Value())
			}
		}
		<-ticker.C
	}
}
```

I believe common usage will be to simply slurp up all metrics, which would look like this:

```go
...
// Generate a sample array for all the metrics.
desc := metrics.All()
stats := make([]metrics.Sample, len(desc))
for i := range desc {
	stats[i] = metrics.Sample{Name: desc[i].Name}
}
go statsLoop(stats, 30*time.Second)
...
```
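The usage example above calls a computeMean helper that the proposal leaves undefined. Below is one possible sketch of it over the proposed Float64Histogram shape. The midpoint weighting and the choice to skip the open-ended first and last buckets are my assumptions for illustration, not part of the proposal:

```go
package main

import "fmt"

// Float64Histogram mirrors the proposed type for illustration.
type Float64Histogram struct {
	Counts  []uint64  // len(Counts) == len(Buckets)+1
	Buckets []float64 // boundaries, in increasing order
}

// computeMean approximates the mean of the distribution using bucket
// midpoints. The open-ended first and last buckets have no finite
// midpoint, so this sketch simply skips them; a real implementation
// would need a policy for those tails.
func computeMean(h *Float64Histogram) float64 {
	var sum float64
	var n uint64
	// Interior count i covers [Buckets[i-1], Buckets[i]).
	for i := 1; i < len(h.Counts)-1; i++ {
		mid := (h.Buckets[i-1] + h.Buckets[i]) / 2
		sum += mid * float64(h.Counts[i])
		n += h.Counts[i]
	}
	if n == 0 {
		return 0
	}
	return sum / float64(n)
}

func main() {
	h := &Float64Histogram{
		Counts:  []uint64{0, 5, 10, 5, 0},
		Buckets: []float64{0, 0.001, 0.01, 0.1},
	}
	fmt.Printf("approximate mean: %f\n", computeMean(h))
}
```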
```
/memory/heap/free:bytes                          KindUint64  // (== HeapIdle - HeapReleased)
/memory/heap/uncommitted:bytes                   KindUint64  // (== HeapReleased)
/memory/heap/objects:bytes                       KindUint64  // (== HeapAlloc)
/memory/heap/unused:bytes                        KindUint64  // (== HeapInUse - HeapAlloc)
/memory/heap/stacks:bytes                        KindUint64  // (== StackInuse)
/memory/metadata/mspan/inuse:bytes               KindUint64  // (== MSpanInUse)
/memory/metadata/mspan/free:bytes                KindUint64  // (== MSpanSys - MSpanInUse)
/memory/metadata/mcache/inuse:bytes              KindUint64  // (== MCacheInUse)
/memory/metadata/mcache/free:bytes               KindUint64  // (== MCacheSys - MCacheInUse)
/memory/metadata/other:bytes                     KindUint64  // (== GCSys)
/memory/metadata/profiling/buckets-inuse:bytes   KindUint64  // (== BuckHashSys)
/memory/other:bytes                              KindUint64  // (== OtherSys)
/memory/native-stack:bytes                       KindUint64  // (== StackSys - StackInuse)
/aggregates/total-virtual-memory:bytes           KindUint64  // (== sum over everything in /memory/**)

/gc/heap/objects:objects                         KindUint64  // (== HeapObjects)
/gc/heap/goal:bytes                              KindUint64  // (== NextGC)
/gc/cycles/completed:gc-cycles                   KindUint64  // (== NumGC)
/gc/cycles/forced:gc-cycles                      KindUint64  // (== NumForcedGC)

// Distribution of pause times, replaces PauseNs and PauseTotalNs.
/gc/pause-latency-distribution:seconds           KindFloat64Histogram

// Distribution of unsmoothed trigger ratio.
/gc/pacer/trigger-ratio-distribution:ratio       KindFloat64Histogram

// Distribution of what fraction of CPU time was spent on GC in each GC cycle.
/gc/pacer/utilization-distribution:cpu-percent   KindFloat64Histogram

// Distribution of objects by size.
// Buckets correspond directly to size classes up to 32 KiB,
// after that it's approximated by an HDR histogram.
// allocs-by-size replaces BySize, TotalAlloc, and Mallocs.
// frees-by-size replaces BySize and Frees.
/malloc/allocs-by-size:bytes                     KindFloat64Histogram
/malloc/frees-by-size:bytes                      KindFloat64Histogram

// How many hits and misses in the mcache.
/malloc/cache/hits:allocations                   KindUint64
/malloc/cache/misses:allocations                 KindUint64

// Distribution of sampled object lifetimes in number of GC cycles.
/malloc/lifetime-distribution:gc-cycles          KindFloat64Histogram

// How many page cache hits and misses there were.
/malloc/page/cache/hits:allocations              KindUint64
/malloc/page/cache/misses:allocations            KindUint64

// Distribution of stack scanning latencies. HDR histogram.
/gc/stack-scan-latency-distribution:seconds      KindFloat64Histogram

/sched/goroutines:goroutines                     KindUint64
/sched/preempt/async:preemptions                 KindUint64
/sched/preempt/sync:preemptions                  KindUint64

// Distribution of how long goroutines stay in runnable
// before transitioning to running. HDR histogram.
/sched/time-to-run-distribution:seconds          KindFloat64Histogram
```

Note that although the set of metrics the runtime exposes will not be stable across Go versions, the API to discover and access those metrics will be. Therefore, this proposal strictly increases the API surface of the Go standard library without changing any existing functionality, and is Go 1 compatible.
https://go.googlesource.com/proposal/+/master/design/37112-unstable-runtime-metrics.md
Cookie Constructor (String, String, String)

Assembly: System (in System.dll)

Parameters

- name
- Type: System.String
- The name of a Cookie. The following characters must not be used inside name: equal sign, semicolon, comma, newline (\n), return (\r), tab (\t), and space character. The dollar sign character ("$") cannot be the first character.

- value
- Type: System.String
- The value of a Cookie. The following characters cannot be passed in the value parameter unless the string passed in the value parameter is enclosed in double quotes: semicolon, comma.

So the following example constructor would succeed, but when you try to add this Cookie to a CookieContainer instance with the Add methods the operation will fail and throw an exception:

System.Net.Cookie cookie = new System.Net.Cookie("contoso", "123,456", "");

However, the following constructor with these special characters escaped will create a Cookie that can be added to a CookieContainer instance:

System.Net.Cookie cookie = new System.Net.Cookie("contoso", "\"123,456\"", "");

The comma character is used as a delimiter between separate cookies on the same line.

Available since 4.5
.NET Framework — available since 1.1
Portable Class Library — supported in: portable .NET platforms
Silverlight — available since 3.0
Windows Phone Silverlight — available since 7.0
Windows Phone — available since 8.1
https://msdn.microsoft.com/en-us/library/te81b83k.aspx?cs-save-lang=1&cs-lang=csharp
Writing a plugin for KOffice technically means the following things are created;

The concept is really based on what Object Oriented design says: you can create a programmer-interface and encapsulate it in a class without being very clear as to what that implementation is supposed to do. You can then supply several classes that actually contain the implementation of the programmer-interface, each in their own way. The actual way that the class responds to the same environment is thus the implementation, and not the programmer-interface. Plugins are a way to provide a different implementation for a known interface.

KOffice has defined several classes as base classes and has made the programmer-interface of those base classes generic enough that a wide range of functionality can be accomplished by providing a plugin with a new implementation of such a base class.

An example: one plugin type that KOffice supports is a docker widget. A docker widget is really a widget that can be shown on screen, just like a button or a text field are widgets. The plugin can ship a new implementation of the QDockWidget class by inheriting from QDockWidget and providing text/buttons and functionality using the full API of the Qt and KDE/KOffice libraries to do its work. This means I can provide a plugin that ships a docker, and the docker shows the time of day, for example. By installing the plugin according to the instructions for that type of KOffice plugin, the KOffice applications are able to find it, and the docker will be created and shown on screen.

One unique property of a plugin is that it is self contained, which means that the end result of the plugin is one library which exports just the 'plugin object'. This means that whatever code and classes are changed in one plugin cannot possibly affect code in another plugin or even in the KOffice applications.
This has the benefit that development becomes more separated and changes in one place will not cause problems in another. It is therefore suggested to use any number of classes that fit your design and programming style, and you are free to avoid using namespaces or class name prefixes, as you wish. In other words: your plugin is walled off; nobody will be bothered by what you do, or how you do it.

The first component is the class that inherits from the interface this plugin represents. Each plugin type will go into details on this, as each interface has different features and requirements.

The factory component is also unique per plugin, but quite generic in setup and idea. The reason for the existence of a factory is that it makes it possible to create any number of the plugin objects at the time that the host application wants to do so, probably at the request of the user. The workflow in general is this;

This means that our main advertisement to the outside world is the factory, as the factory is used to show the user what kind of things this plugin can do. Each plugin type has its own factory, with at minimum a create() like method which returns a new instance. The factories typically have a large set of user interface components you can register on them, like a name, an icon, etc. It is wise to fill as many of them with relevant information as possible.

The next component is the Plugin component. This is a QObject-inheriting class that has a specific constructor signature and, interestingly, will be deleted immediately after it is created. The trick therefore is to do all the work of creating and registering the factories in the constructor.
The code for this class is pretty simple, and you can copy-paste it with just a few modifications; in the header file:

  class MyPlugin : public QObject {
      Q_OBJECT
  public:
      MyPlugin(QObject *parent, const QStringList &list);
  };

in the cpp file:

  K_EXPORT_COMPONENT_FACTORY(libraryname, KGenericFactory<MyPlugin>("MyPlugin"))

  MyPlugin::MyPlugin(QObject *parent, const QStringList &)
      : QObject(parent)
  {
      KoToolRegistry::instance()->add(new MyToolFactory(parent));
      KoShapeRegistry::instance()->add(new MyShapeFactory(parent));
  }

This example registers 2 factories, but you can register any number you want to ship in 1 plugin. Note how only the factory is created, and nothing else is done. This makes loading of the plugin as fast as possible. If you copy-paste this code, make sure you update the 'libraryname' to be the (all lowercase) name of your plugin library, which is also repeated in the desktop file. Note that the name is without any extensions.

In the K_EXPORT_COMPONENT_FACTORY macro, 'MyPlugin' is repeated twice. It's easiest to just replace that with the class name you chose for your project.

The library is what will contain all the compiled classes of your plugin. A plugin is technically implemented as a library, at least on Linux and other unixes. It is loaded as a library by KOffice near the start of the application. As the plugin is a library, all development software can create them using a linker. It is worth noting that it is possible for your plugin to depend on external shared libraries, apart from the KOffice ones. But you should install that shared library as well as your plugin.
In cmake the creation of a library goes like this;

  ########### Flake Plugin library ###############

  project( myplugin )

  include_directories( ${KOFFICEUI_INCLUDES} )

  set( myplugin_SRCS
      MyPlugin.cpp
      # other cpp files
  )

  kde4_add_plugin( myplugin ${myplugin_SRCS} )

  target_link_libraries( myplugin kofficeui )

  install( TARGETS myplugin DESTINATION ${PLUGIN_INSTALL_DIR} )
  install( FILES myplugin.desktop DESTINATION ${SERVICES_INSTALL_DIR} )

And last is the desktop-file component, after which the plugin will be found and used by KOffice. A desktop file is basically a database record for the KDE database of what is installed. In this case the component is a 'Service'. For our running example, here is myplugin.desktop:

  [Desktop Entry]
  Encoding=UTF-8
  Name=My Shape
  ServiceTypes=KOffice/Flake
  Type=Service
  X-KDE-Library=myplugin
  X-Flake-Version=1

A couple of things are important;
http://techbase.kde.org/index.php?title=Development/Tutorials/Generic_KOffice_Plugin_Creation&diff=13310&oldid=12109
Difference between revisions of "Scout/Concepts/Sql Lookup Service"

From Eclipsepedia

Latest revision as of 16:52, 26 February 2013

A SQL Lookup Service is a specific type of Lookup Service that works with a database.

Description

A SQL Lookup Service provides a way to implement a Lookup Service very efficiently when the call is resolved against a database. Instead of implementing the 4 methods (getDataByKey(LookupCall call), getDataByText(LookupCall call), getDataByAll(LookupCall call), getDataByRec(LookupCall call)), it is possible to define the behavior of the lookup service with some configuration properties and events.

Properties

Defined with getConfiguredXxxxxx() methods.

SqlSelect

The property SqlSelect expects a SQL query which is used to load the records for the lookup call. Here is an example of such a SQL query:

  SELECT language_id, name, NULL, NULL, NULL, NULL, NULL, 1, NULL, 1
  FROM languages
  <key>WHERE language_id = :key</key>
  <text>WHERE UPPER(name) LIKE UPPER('%'||:text||'%')</text>

Looking at this implementation we see that for each record returned by our lookup service we provide both a (unique) key and a text, which is a general characteristic of lookup services. Further we see that parts of the SQL statement are enclosed in tags. This is because a lookup can be performed in several ways:

- Key-Lookup: single-result lookup based on a unique key (e.g. when loading a form with a smartfield containing a value).
- Text-Lookup: multi-result lookup based on a textual search term (e.g. when entering text into a smartfield).
- All-Lookup: unrestricted lookup that returns all available key-text pairs (e.g. when clicking the magnifier button on a smartfield).

Depending on the way the lookup is performed, only one SQL part in tags is used. If for example a Text-Lookup is performed, only the SQL code in the corresponding <text> tag is used, whereas the SQL code in the other tags is ignored.
As you might have noticed, the SQL statement contains two binding variables, :key and :text. These bindings are available because the LookupCall itself is bound to the SQL statement. Therefore every public property of the LookupCall can be used as a binding variable in the query (:key for call.getKey(), :text for call.getText(), :master for call.getMaster(), and so on).

The above example showed a SQL statement with a completely variable WHERE part. In a more complex query you probably have an additional fixed WHERE part which you do not want to add to the key, text and all sections. If that's the case you should move the WHERE keyword out of the tags, as in the following example:

  public class CompanyLookupService extends AbstractSqlLookupService implements ICompanyLookupService {

    @Override
    public String getConfiguredSqlSelect() {
      return "SELECT C.COMPANY_NR, " +
             "       C.NAME " +
             "FROM COMPANY C " +
             "WHERE C.ASSET > 10000 " +
             "<key> AND C.COMPANY_NR = :key </key> " +
             "<text> AND UPPER(C.NAME) LIKE UPPER(:text||'%') </text> " +
             "<all> </all> ";
    }
  }

When the SQL statement is executed, the lookup call is passed as a bind variable. You can access every property of your call (see LookupCall members):

  public String getConfiguredSqlSelect() {
    return "language_id, name, null, null, null, null, null, 1, null, 1 " +
           " from languages" +
           " <key>where language_id = :key</key>" +
           " <text>where upper(name) like upper('%'||:text||'%')</text> " +
           " and nvl(start_date, to_date('19000101', 'yyyymmdd')) < NVL(:validityTo, to_date('99990101', 'yyyymmdd')) " +
           " and nvl(end_date, to_date('99990101', 'yyyymmdd')) > NVL(:validityFrom, to_date('19000101', 'yyyymmdd')) ";
  }

SortColumn

Events

Defined with execXxxxxx() methods.

LoadLookupRows
http://wiki.eclipse.org/index.php?title=Scout/Concepts/Sql_Lookup_Service&diff=329775&oldid=243607
#include <curses.h>

int wresize(WINDOW *win, int lines, int columns);

The wresize function reallocates storage for an ncurses window to adjust its dimensions to the specified values. If either dimension is larger than the current value, the window's data is filled with blanks that have the current background rendition (as set by wbkgdset) merged into them.

The function returns the integer ERR upon failure and OK on success. It will fail if either of the dimensions is less than or equal to zero, or if an error occurs while (re)allocating memory for the window.

The only restriction placed on the dimensions is that they be greater than zero. The dimensions are not compared to curses screen dimensions, to simplify the logic of resizeterm. The caller must ensure that the window's dimensions fit within the actual screen dimensions.

SEE ALSO: resizeterm(3X).

AUTHOR: Thomas Dickey (from an equivalent function written in 1988 for BSD curses).

wresize(3X)
http://www.syzdek.net/~syzdek/docs/man/.shtml/man3/wresize.3x.html
Guillaume Laurent On GTK And The New Inti

Re:Reinventing QT ... (Score:2)
Red Hat spent a lot of money in the early days rolling out a proprietary CDE release that was supposed to be their cash cow (TriTeal CDE). They had it all positioned, along with ApplixWare, etc., to take over the Linux market. Along came KDE and LessTif, which drew all the energy away from the closed-source stuff that Red Hat had all the shrinkwrapped boxes printed up for at great cost. Red Hat ended up taking a bath as a result, and out of reaction they hyped up Gnome as a counter-attack. All the TriTeal boxes probably ended up about 140 feet above the PC Junior parts in the same landfill. O well. Ray Noorda at Caldera is primarily involved in Linux for revenge purposes, too.

Re:Reinventing QT ... (Score:2)
Aaaahhh, but you are aware of it. I reported the inability to create an LS120 boot disk for 6.0, and although I haven't tried it for 6.2, I guess from the above comment that it still isn't fixed.

Re:From MFC (Score:1)

Re:C++ is the wrong language to write GUI's in any (Score:1)
  #include <qapplication.h>
  #include <qlabel.h>

  int main( int argc, char **argv )
  {
      QApplication app( argc, argv );
      QLabel hello( "Hello World" );
      hello.show();
      return app.exec();
  }

(Score:1)
I reported problems back when I used Red Hat *a lot*. I pretty much stopped after 5.0-5.2, and trying out 6.0 made me feel like Red Hat was a pointless endeavor. Well, I went through quite a few versions of Red Hat (started with 4.2 waaaaay back when and went to 6.0). Now I pretty much install it, play with it for an hour, then trash it and install a distro I feel I can trust. I don't try the betas of Red Hat (as I've kind of gotten the feeling that the 'releases' are beta enough). Well, I'm guessing this isn't a good way to make friends and influence people. Telling a user he is a moron when the software screws up without listening to what *was* done with it should be a no-no. I was speaking of my own personal experience. It sure felt like a screwing for money to me. I was speaking of the 'KDE sucks!' campaign that seemed to start with Red Hat. (Considering the first indication I got of that little sentiment was on a RH hosted site bragging up GNOME, I always figured that was a Red Hat sentiment.) Sorry to be an ass about this, but I did get burned. Just because you happen to work there (and you seem reasonable) doesn't mean you know the entire story. I'm sure you are being true to yourself in your work with/for Red Hat, and I don't begrudge you that, but Red Hat in general has pissed me off in enough ways that I'm not going to say, "Oh you're right. I was the asshole all along. And Red Hat is the greatest thing in the world." Sorry, that's not going to happen. If any non-Linux company had screwed up this badly I would have the same reaction (or any other Linux company for that matter). I feel I had problems. I explain those problems when it's necessary. And I'm not going to 'give another chance' to Red Hat. I have real work to do. I've been using SuSE, Debian, and Caldera since that time without problems at all, and have even traipsed into the BSD realm without problems. So my personal feeling is that there is something particular about the Red Hat way that just doesn't work (at least it doesn't for me). So that's that.

Re:ROTFL (Score:2)

Re:GTK-- was okay except for completeness and docs (Score:1)
Down in the layers that do the actual work, I tend to use STL containers for reasons like uniformity with low-level libraries etc. I have yet to encounter a situation where that approach causes actual ugliness in my code - YMMV, of course..

Re:From MFC (Score:1)
Eh? How is that relevant?

Re:Happily insane (Score:1)
Marcel Proust would have been proud.
--

Re:FLTK (Score:1)
If you look at the javadocs [javasoft.com], it isn't too hard to figure out.
(If the link doesn't work, it's because of /.'s buggy long-line breaker.) And there are versions of JOptionPane.showInputDialog that take fewer parameters. They're just less flexible. You have the choice: use the simple, but less flexible function, or the more flexible, but also more complex one. I think Einstein said it best:

Re:ROTFL (exceptions) (Score:1)
Not really. Exceptions are merely events that are not part of the "normal" operation of the program. Oftentimes, you have at least some idea of what could go wrong, and can either handle it or complain.

  try
    <get some data>
  except
    on ERecordNotFound do
      try
        <get data from somewhere else>
      except
        on ERecordNotFound do
          raise ENoData.Create('Could not locate the requested data');
      end;
    end;
  end;

You can also do things like

  try
    <do some stuff>
  except
    on EArea1Error do <handle that error>
    on EArea2Error do <handle this error>
    on EArea3Error do <complain to the user because this was their problem>
  end;

Hopefully, the vendors documented their code properly so that you know what exceptions their functions can throw. At the very least, all of their exceptions should be children of a single, vendor-specific exception so that you can at least catch exceptions by vendor. As for handling completely unexpected exceptions, there's always

  try
    <call the top level functions>
  except
    on E: Exception do
      MessageDlg('Exception "' + E.Message + '" occurred. Please tell <contact person> about it.', mtError, [mbOk], 0);
  end;

--Phil (The code is all Delphi, typed from memory, so don't complain too much about typos.)

Correct me if I'm wrong... (Score:1)
> just copy Red Hat Linux and add/remove/change
> some stuff? Can't be about acceptance - SuSE
> don't GPL their installer and yet they're
> widely accepted.
I think SuSE _did_ GPL their installer. And a while ago, too..
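The Delphi fallback pattern quoted above translates directly into C++ try/catch. The sketch below is a hedged illustration — the exception types `RecordNotFound` and `NoData` are invented stand-ins for the Delphi examples' ERecordNotFound/ENoData, and the data sources are simulated with flags:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Invented exception types mirroring the Delphi examples.
struct RecordNotFound : std::runtime_error {
    using std::runtime_error::runtime_error;
};
struct NoData : std::runtime_error {
    using std::runtime_error::runtime_error;
};

// Mirrors the quoted pattern: try the primary source, fall back to a
// secondary one, and raise a higher-level error if both fail.
std::string getData(bool primaryOk, bool secondaryOk) {
    try {
        if (!primaryOk) throw RecordNotFound("primary miss");
        return "primary";
    } catch (const RecordNotFound&) {
        try {
            if (!secondaryOk) throw RecordNotFound("secondary miss");
            return "secondary";
        } catch (const RecordNotFound&) {
            throw NoData("Could not locate the requested data");
        }
    }
}
```

As in the Delphi version, callers only ever see the domain-level `NoData` error; the per-source misses are an internal detail.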
fltk (Score:1)

Roll your own (Score:2)
Yesterday we were complaining about DLL hell/lack of code reuse, and here we are today talking about another implementation of something that already exists.

Re:Happily insane (Score:1)
Well, to each his own, of course. However, just as C has its well-known idioms (the on-line copy loop being an example), so does C++. It's just that the C++ idioms tend to be more abstract (things like design patterns and such). These are usually well-explained in articles, because a book on the subject would either be too short or contain too many disparate ideas to be of any use. I recently read a couple articles about traits that helped me solve a problem I'd been struggling with for a week. Through the judicious use of templates and traits, I was able to greatly reduce both the size and complexity of the system I was designing. While I understand your concern, I think it is unfounded. It's how I started out, and while I don't consider myself a guru, I certainly have learned enough on large-scale projects to consider myself more than competent in C++. C++ was designed to be like C to ease the transition for C programmers, and I think it's one of its greatest strengths. One can slowly add language features to the designer's toolkit without upsetting things too much. The fact that you can make use of all the existing C libraries is a godsend. I hear this one a lot, and frankly (please don't take this the wrong way), I think it reveals a fundamental misunderstanding of the purpose of a high-level language. The language exists to help the programmer. It is a tool and nothing more. There is nothing "magical" about encapsulation or inheritance. As you said, one can write OO programs in any language. It's just that some languages make it easier than others. C++ is designed to catch errors at compile-time.
Strong typing and templates make the use of void pointers almost completely unnecessary (they are useful in the implementations of templates in some cases). Compiler-enforced encapsulation makes it impossible to get at data that shouldn't be got at. I had to chuckle at the comment (in another thread) about const being a horrible component of the language. I know it has saved me countless times from obscure bugs that otherwise would have to be tracked down in the debugger. Why write a big, clunky switch statement when a couple of virtual functions will do the job just as well and better separate various components of the design? Another thread contains a good analogy: in C, a loop is an abstraction of a backward goto. In the same way, inheritance and virtual functions are an abstraction of unions and switches.
--

I wish we could get the GUI toolkits sorted out (Score:2)
At the time I started writing it, QT was not free enough for me to contribute (application) code for it, and plain Gtk programming (in C) required such an insane amount of pointer casts that it was pretty obvious to me that I wanted to avoid it. I think Gtk-- is a very nice toolkit, although it requires some discipline as (like Guillaume said) it lets you shoot yourself in the foot in a variety of ways. That said, I really think Gtk needs a well supported and clean C++ wrapper. I don't care which one it is, but I don't want to be faced with the task of rewriting the GUI part of my application every 2 years. This kind of stuff makes me consider using QT, even though I prefer Gtk. Oh well, this might just be a case where the beauty of open source (lots of options and development in many directions) has come back to haunt me. I don't care what the solution is, but we need something that is the official Gtk C++ wrapper and is guaranteed to be around a few years from now.

Re:Gtk-- is bad?!! Try the crappy CORBA C++ mappin (Score:1)

Re:ROTFL (Score:2)

Re:Reinventing QT ...
(Score:1)
I've posted this story before, and since I am now posting in the 'company' of a Red Hat employee will probably end up with a lawsuit on my hands to boot, but here goes. I 'purchased' a copy of Applixware back when Red Hat first started selling it from their web store. I didn't receive it for a long time (~a month or so), so I called and asked. I was told that it had been shipped to the wrong address (because of a bad number entered in the zip code by the person writing the shipping label, or some such) and that they would re-ship it. Another few weeks pass and I get my credit card bill. I'm billed twice for the product I have yet to receive. I call again. I'm told it got shipped to the same place again, and when they received it back as a bad address (just like the first time) they just put it back and figured "You'll call us when you don't get it!" So, I ask about the second charge on my credit card and am told that it will be taken care of and the product shipped overnight to me. The product is shipped to the right address (finally), but is in fact the Red Hat OS 5.1 (or 5.2, can't remember), which was (at that time) a lot cheaper than the product I had purchased. I called my credit card to check if I had been reimbursed for the second shipment, and I hadn't. But I had been charged a third time for the same thing. I finally called Red Hat and said, forget it, and reimburse me (and told them I would send the OS back unopened). I was told I could keep the OS (the one positive thing that happened through the entire thing), but I had to accept the product that I purchased from them from a 'legal' standpoint. I said that would be fine if I ever actually *got* the product. Anyway, I fought it for another month or so and finally gave up completely. I then called my credit card to stop all payments to Red Hat (as they were still charging me over and over for Applix, which I never received), and I was able to recover all but the first two payments (it had been too long for me to 'stop payment' on those charges by the time my frustration had boiled to that point). That's my problems with the company.

Now for the software... Red Hat is the only distro that I have ever seen where network services would just fail after a random amount of time. I used to run Red Hat on a 'headless' server in my room used for network testing/file serving/print serving and other network functions. About once every two weeks (more or less sometimes) I would not be able to log in over the network; I would not be able to do anything to the system other than hit the 'reset' button. I was told by Red Hat supporters that it had to be a hardware problem. I guess that explains why I've been able to use SuSE, Debian, FreeBSD, and OpenBSD on that same box with absolutely no problems at all. In fact, the reason I finally gave up on the reboot/restart situation (I didn't know better at the time, I was used to Windows) was that Red Hat ate it so bad that I could not boot the system up to a usable state at all. And it was not cracked. It was not attached to any outside networks (I used a system separated from my network for browsing and such). I slipped a vid card in it before reloading and tried to boot. It came up OK (supposedly) but refused to accept any form of input. Then I reloaded with another distro and it was fine. I've also had problems with Red Hat desktop systems. I don't know what it is, but it just seems to steadily deteriorate (kind of like Windows does under heavy use). You can say it's all my own fault; that's what I've been told over and over again by Red Hat supporters and employees. But the fact remains that I don't run into this problem with *any* other distro (other than Corel) or any other OS except for Windows.

From the above examples I hope you understand that I have valid reasons for feeling that Red Hat wants to be the next MS. I got burned by customer-service/sales. I got burned by the software repeatedly (the desktop system I tried running just continuously had problems), and I see no reason other than spite for the attacks I saw Red Hat slinging towards KDE when they first attached themselves to GNOME. Red Hat appears to me to operate on the same principles that drive Microsoft. Make crappy software, blame the user when it doesn't work, screw the user if you can make a buck, and sling mud when you have no facts. I realize by posting this I've opened myself up to a huge liability. Online postings are now fodder for lawsuits. And I'm quite sure Red Hat is not above that. As long as you realize that the only things I have of monetary value (in my own name) are my computers and my guitars (and I doubt Red Hat could justify legal fees for the monetary value of those), then you can understand why I don't really care if I get sued over it. I was hosed, and feel I have the right to complain. Normally I just say *really bad experiences*, as I said before. But you asked...

Re:Moderate Parent UP!! +5 INSIGHTFUL!! (Score:1)

Summing it up.... (Score:1)
Where does this leave the C++ programmer wanting to develop Gnome apps? I don't see much to work with. Of course developing your own wrappers and classes is not so bad as a way of making the most offensive aspects of Gtk+ more palatable. But why bother when there is Qt, which even the gnomers seem to prefer behind closed doors. Meanwhile, Qt came out with yet another enhancement to its toolkit, including a free gui builder. Am I going to check that out? You bet. Where is the story on Qt 2.20? That seems more newsworthy than Gnomers throwing mudpies at each other about the "correct" way to build a C++ toolkit for gtk.
Even if the Inti project is feasible and does succeed in catching up to where Gtk-- was, extending it to include Gnome is another matter. Since Gnome is built on top of Gtk, how many layers of wrapping are required? Yet another reason to move Gnome functions which are more generic into Gtk where they belong, but I understand those who have tried such a sensible policy have had a brick wall thrown up in their faces, and there does seem to be a hidden agenda to cripple gtk by moving some much needed functionality into gnome, requiring the installation of gnome to access it. Sounds a lot like MS style tactics to me - moving needed functionality for Windows into MS Office and IE. So it seems that yet another year will go by, at least, before we have a usable C++ class library for developing Gnome apps. That's what this is all about - the pressure from RedHat to develop such a class library - but nobody seems very attentive to that.

Re:Happily insane (Score:2)

Re:GTK-- was okay except for completeness and docs (Score:1)
--

Re:1st (Score:1)
K+L+S! Blintz. Way to show that GTK knowledge! I'm personally waiting to see where GTK/GNOME go now that this fellow has left. I use KDE now, but I've seen reviews that show GNOME runs a lot faster, so I may be switching...

Re:ROTFL (Score:1)
Abnormal program termination in Mozilla: Access Violation (0x00000000). Situations where the person does not check the return value of new (because they write it quickly, and then 99.8% of the time during their testing, it succeeds) abound. And what should you do in a constructor if a new used to initialize a member pointer variable returns NULL? You have to restructure your entire object so it is either valid or invalid, and the users of your object must check its valid/invalid state before performing operations on it. This way lies increased complexity and decreased reliability. Why the hell do references exist? References make it quite clear that a NULL object is not allowed, whereas it is quite common for a function to allow a NULL or non-NULL pointer. While you can write code that dereferences a NULL pointer and assigns that to a reference, it is the responsibility of the programmer who does the dereference to check that the pointer wasn't NULL before doing so. In contrast, it isn't clear whose responsibility it is to check that a pointer argument isn't NULL, and that way lies many, many bugs. (See the above bold, for example.) You can use new (std::nothrow) if you really want NULL returns on failure.

Re:GTK-- was okay except for completeness and docs (Score:1)

Re:GTK sucks (Score:1)

Re:How much for QT? (Score:1)
Likewise, if MS purchases TrollTech, then, AFAIK, Qt would still be out there, and free for noncommercial use... however, MS could simply stop selling Qt for commercial use, or make it 5,000 dollars per app sold, and, pretending we didn't have Gnome, completely destroy the Linux commercial software industry. This, BTW, is why I support Gnome. And this is why glibc is LGPL and everything. Yes, it's better to have OSS than non-OSS software on your system, but having anything is better than nothing.
-David T. C.

Re:Reinventing QT ... (Score:1)
I'm glad there's always someone like you around to tell me what a worthless piece of shit I am. It keeps me from thinking too highly of myself.

Re:Happily insane (Score:2)
That would be great. Now if I can just find the time (besides what gets wasted on /. [slashdot.org]) for it. Sounds like a lot of man pages and texinfo files, I know :-) Most of the documentation out there for so much stuff is written with the idea of sequential reading in mind. I don't have the time to do that in most cases, so documentation that gives an introductory concept explanation (without the usual sales talk that most use as introductions), and has all the rest as a well indexed reference, would do better for me (and a lot of people I know).
I still think of what is being compiled to the machine level (carefully thinking about diverse machines) even as I write code in C. I was able to write a set of functions to write and read arbitrary sized chunks of bits (up to the size of a long) and made it not only work on both big endian and little endian platforms with no tests for endianness anywhere, but the two different platforms could even exchange data between them correctly. Sometimes abstractions just get in the way, especially when dealing with a real world. There's always more than one way to code it in just about any language, especially assembly.

Re:Reinventing QT ... (Score:1)

Re:How much for QT? (Score:1)
According to TrollTech, Open Source and cross-platform are mutually exclusive. If I'm developing for Linux only, I can use Qt Free Edition. If I want to do cross-platform code for Linux and Windows, even if it is Open Source, I must also buy Qt Professional for Windows, at the single-developer price of $1,550.00. TrollTech isn't pro-Open/Free as much as they are anti-Microsoft. Proof? Questions 20 and 21 in the Qt Free Edition FAQ [trolltech.com]. Share all you want. But if you want to share with Windows users, you gotta pay. Not very neighborly of them, is it?
Every day we're standing in a wind tunnel/Facing down the future coming fast - Rush

Re:How much for QT? (Score:2)
Yes, but not all Red Hat's customers write free software. Red Hat need a competitive solution for them too. And currently Qt is much more expensive than any of the other widespread GUI tools. You can buy Delphi _and_ C++Builder _and_ Visual C++ (with MFC) for less than the price of a single Qt license.

Re:ROTFL (Score:2)
You're definitely misguided about the pass-by-reference comment. Sure, pointers have their uses, and sometimes (maybe even often, depending on what you're doing) pointers are the sane choice and references aren't. But references guarantee you one thing: they always reference something; there's no such thing as a null reference. Functions taking pointer arguments should check for (assert() or properly handle) null pointers. A function taking a reference _knows_ that it's a valid addressable object (object as in struct, simple variable, or whatever). Besides, why do you mention STL along with ``dangerous interfaces''? STL is type-safe, and it's a proven implementation (usually - but there could be bugs in glibc too, remember). Using std::map<something> is *a lot* safer than re-implementing your balanced tree every time. Sure, you can use glib, but STL is type-safe and C-casts/void-pointers aren't. Did I mention that std::map<foo,bar> will also often be faster than your generic C tree? If the comparison operator for the foo type is simple (e.g. integer compare or similar), it can be (and will be) inlined in the core map implementation, something you cannot do in C without either re-implementing your map every time, or implementing it in a macro. You simply save a function call for every compare - something that is noticeable when the compare is a simple operation. The real problem with C++ is that people tend to think of it as object-oriented C. Well, it is, but it's also *much*much* more. A C programmer trying to bake up a ``pretty'' API in C++ will often fail - we've all seen that. But good interfaces are definitely possible - take a look at STL. And note that STL is not just object-oriented wrappers; it's a type-safe interface of objects and functions relying heavily on parameterized types, actually allowing you to write non-trivial programs that run as fast, or faster, than equivalent C code. The other real problem with C++ that I have to admit to is compilation time. It is the *only* real drawback that I can point my finger at. But I can accept it.
When the compiler builds type-safe trees of lists of strings for me, that run as fast as they do, I can accept the extra hardware cost as ``fair''.

"Informative" Link (Score:2)
FLTK (Fast Light ToolKit) is available at [fltk.org]
-- Floyd

Clarification please (Score:2)
True. And loops are just glorified gotos when one looks beneath the surface. So? I don't want to look beneath the surface too often.
--

Qt (Score:1)
noah

Re:Happily insane (Score:1)
I have tried languages like CLU [...] and summed it all up like: I tried to learn a clue but couldn't.
--

Inti sounds like a new GNOME (Score:2)

Re:Happily insane (Score:1)
These brainwashed OOP zealots are the insane ones. There is nothing that can be done in C++ that cannot be done in C just as elegantly. All it does is hide the true functionality of the code. In case you haven't noticed, all the best programmers work in C. That's not because they're too stupid to learn C++. It is because they'd rather be in control. It's kinda like how serious driving enthusiasts prefer a manual transmission.

Re:Happily insane (Score:1)
This past weekend I picked up The C++ Standard Library [bookpool.com] by Josuttis. I've found this to be a wonderful reference, with sections not only covering the STL, but also strings, numerics, iostreams, i18n and allocators. It has a good TOC and index. I've not read it straight through (or even made an attempt), but it is very easy to find what I need. Explanations are clear and concise. Reading one page of the iostreams chapter helped me successfully derive a new stream buffer and class in five minutes. All previous documents were either too esoteric or verbose - I couldn't get my head around the problem. In a previous post, you suggested: Perhaps you'd be interested in the following books: I've only read D&E. This is probably where you should start. It is very small. Its whole purpose is to explain why things are the way they are (i.e. you don't pay for what you don't use). In addition, journals like DDJ [ddj.com] and the (now-defunct) C++ Report [creport.com] have good articles about practical software development. I hear many of the C++ Report folk are heading over to the C++ User's Journal [cuj.com]. The most important thing to remember about C++ is that it is complicated. But only as complicated as you make it. For all intents and purposes, you can write C in C++. A good place to start is using it as "C with classes" to get encapsulation, then move on to polymorphism. It's also important to understand when to use language features (i.e. templates and specialization vs. inheritance), and books like Effective C++ [bookpool.com] help in that regard. Hope this helps!
--

Re:ROTFL (exceptions) (Score:1)
Which means that if you want highly portable C++ code now, you employ the subset of implemented standards. Hence, the use of exceptions makes less sense, as it is not feasible when portability is a requirement. Yup, fire exits that allow for the interruption/confusion of normal program flow... a situation which, in many cases, bypasses memory management (or other resource reference handling) facilities. I agree it can be used effectively, but the cost on design can be high. This consideration becomes more important as component object models are used from languages that support exceptions... resource management becomes a shared entity between threads, processes, and machines. This is merely a complication of exceptions, not an invalidation of the concept.

Re:FLTK (Score:1)
And I did specify free (as in beer). MFC (last I checked) also falls under the "Poor integration with std C++" clause. As does VC++, BTW. And, ironically, so does std::fstream, among others in std... argh!
--

Re:Clarification please (Score:2)
I'm happy the design will be reused in Inti; it probably represents a much larger part of the work put into Gtk-- than the main code. I haven't used Gtk-- in anything but a pilot project, but it was a joy to use.
Compared to Qt I liked that it was just a GUI library (which Inti unfortunately won't be), that it used STL, and that it didn't depend on a preprocessor. I believe Gtk-- has always been driven by technical goals, not commercial or political goals, so the Qt license or "Stallman's decrees" have never been important. Inti will probably be driven by commercial goals, but if they keep the good design from Gtk-- and add full-time programming resources, that will be fine with me.

Troll Tech Buyout (Score:2)

Re:Did you read the FAQ? (Score:1)
-- Thrakkerzog

Re:How much for QT? (Score:1)
Uwe Wolfgang Radu

Re:ROTFL (Score:1)
What do you have against exceptions? They are immensely useful. Unfortunately, I have yet to see an open source C++ project that even mentions exception safety, and that's a bad thing. In particular, wrappers around callback-based C libraries have almost no chance of being exception safe. And if you use an e-unsafe library in your program, and that library utilizes a callback architecture (every OO library does, BTW), you can practically forget about using exceptions in your code. Too bad, no freedom of choice :(
--

Re:A zillion one-man projects... (Score:2)

Re:Storm in a teacup (Score:1)
Uwe Wolfgang Radu

Reminds me of Miguel (Score:2)
As an amateur developer, I have encountered tools where an interface was provided, but every FAQ, on-line doc, and expert on the tool would tell you to avoid it. It always makes me wonder why in hell the interface was built in the first place. Many of these dangerous interfaces may be there by accident, or because the original designer did not foresee the danger in their inclusion. But that still leaves them as bad interfaces; you should try to avoid providing such interfaces in a new library or tool. My $2E-2. (Actually, I think it's Miguel's $0.02, because I hadn't thought about it until I read his paper yesterday.)
Steve :-)

Red Hat's Bright Idea (Score:1)

Whom... (Score:1)
"whom" in that short article.
So many grammatical mistakes, I wonder if such lack of attention to important detail is why GTK-- sucked.

Re:ROTFL (Score:2)
The reason that the kernel, support, network and storage kits are non-OO is because it makes more sense that way. That's a big thing in a well designed API. You have a unifying concept, but don't use it in places where it really doesn't make sense. It really doesn't make sense to have an object simply for translation loading purposes. You could put them into an object, but then you end up with an object that has members that have no other relation aside from the fact that they are in the support kit. Far more important than that, though, is the fact that many functions in those kits need to be accessible from kernel drivers, where C++ isn't really allowed.

Memory management policy (Score:2)
First, there are really many alternatives to choose from to decide what kind of memory management to use for a C++ program. Telling is that the C++ standardization committee could only agree on one memory management class (auto_ptr [awl.com]<>). It uses gross hacks for ensuring the type checker does the right thing (and I'm not convinced it's right as it is). Ok, to get to my real point, here is a list of all the memory management policies I could remember having seen used in C++:

1) Explicit deallocation (programmer responsible for deleting; e.g. C++ plain pointers)
2) Strict ownership (e.g. a creation function returning a smart pointer)
3) Transferrable ownership (e.g. auto_ptr)
4) Stack (objects created first are deleted last)
5) Static allocation (memory for the object always exists)
6) No deallocation (sometimes you can just leave memory as leaks)
7) Garbage collection (the garbage collector [sgi.com] takes care of deallocation)
8) Cluster allocator (see "Ruminations on C++ [att.com]" by Andrew Koenig; basically objects are deallocated in clusters, and whenever the cluster is deallocated, all the objects in it are deallocated as well)
9) Reference counting with explicit ref/unref
10) Intrusive reference counting (the objects being pointed to contain a reference count)
11) Non-intrusive reference counting (the reference count is separate from the object, e.g. like the boost [boost.org] shared_ptr template)
12) Handle-Body idiom (you write a specialized handle for managing memory for your class)
13) Container-managed (like Gtk-- manage())
14) Containment (like the Gtk-- containment based solution)
15) Library-owned objects (the library only returns references without ownership to users)
16) Distributed garbage collection
17) Evictor (the objects are maintained in a fixed size array, and the least used objects are deleted when new objects are created that would overflow the array. When an object is next needed after being deleted, it's re-created)
18) Copy semantics (you always do a copy)
19) Lazy copy semantics (you make a copy when you have to)
20) Reaper (the memory is scanned at fixed intervals for freed-up objects, and any objects marked to be deleted are freed)
21) Shared memory allocation
22) Persistent allocation (you mmap() some disk space for your objects, and leave it there to allow it to be used on subsequent invocations of your program)
23) Class allocator (overloading operator new and operator delete for allocating small objects efficiently)
24) Self-managed allocation (the object deallocates itself)
25) Singleton (the object is allocated when it's first used, and deallocated at the end of the program)
26) A mixture of several of the above policies

The design space for memory allocation of C++ objects is really HUGE. So it's no wonder there is some disagreement on what is the preferred way to handle memory management, especially as many of these alternatives are actually contradictory, in that it is hard to combine many of these strategies. I personally prefer auto_ptr combined with a non-intrusive reference-counted pointer class and creation functions that return memory wrapped in auto_ptr.
You do need some solution for putting references to objects in containers; plain auto_ptr doesn't work for that. Re:Summing it up.... (Score:2) Re:Inti sounds like a new GNOME (Score:2) Re:ROTFL (Score:2) Ah, but the 'func(*(int*)NULL);' part is illegal in and of itself (in C and C++), or more accurately undefined. Most implementations of C++ references just lazily evaluate the core dump :-) There really is no such thing as a NULL reference. I don't find those things to be a huge advantage for references. Nor do I find that they make my code cleaner looking (except as a return type so I can have lvalue functions). I find it a modest disadvantage that I don't have the "out of bound" signaling path of passing a NULL to mean something "special" (default value, or skip that part, or something). Left to my own devices I use pointers. Then again I kinda grew up on C (and APL, but we'll not mention that again). On the other hand I really really love the STL. One of the few saving graces of C++. And a huge one at that. Frequently even significantly faster than C. Re:Reinventing QT ... (Score:2) So, to me, something that leveraged templates to accomplish what was, in Qt, little more than a macro hack, was a bonus -- though I never did look at the internals. I learned C++ by first tackling the STL, and then moving on to classes and inheritance. Gtk-- fit my (odd) style more so than Qt... As for Gtk+: yuck... Re:GTK-- was okay except for completeness and docs (Score:2) Eh? I wrote a whole (modestly) big Gtk-- program and never used strings other than the standard C++ string and standard C char*'s. Never used a vector other than the STL's. Gtk-- might (or might not) have its own string and vector classes, but I definitely didn't do anything to avoid their use, and yet I'm also not using them. Re:FLTK (Score:3) Not sure what it is, perhaps the JX toolkit? Re:FLTK (Score:2) Trying to solve it in fltk would bloat it up.
And a worse problem: any solution we did would not match solutions used by other programmers, so when somebody says "use the font called 'Helvetica'" they may get different results depending on the program. X is crap and everybody should realize that. Re:Reinventing QT ... (Score:2) No way, unless you're inventing this to harm our reputation (which I don't think, shit happens). Sorry for the trouble with ApplixWare - this is probably too long ago to track down and see what caused it, so I'll take it for a typical "shit happens" thing and make sure it gets fixed at least now. For the software, all I can say is that I can't reproduce it and neither can our other customers - if you send me some details of the hardware (and the version of Red Hat Linux you were using), I'll check what's causing the problems. (We have upgraded some drivers; it may be a problem with one of them.) These problems should go into bugzilla [redhat.com], where actual developers can read them and take care of them. (Unless you have a support contract, the support people will help you with installation; some of them aren't qualified to fix bugs.) "Make crappy software" We're trying not to do that - and I'd really like to know if any of your problems are still occurring with the 7.0 beta version. "blame the user when it doesn't work" I guess every software company is a bit guilty about this one - if something works perfectly for you, and then someone tells you it doesn't work, what would you blame first, if you don't know how much the other person knows about the piece of software? "screw the user if you can make a buck" If that was our intention, we'd be making proprietary software. "and sling mud when you have no facts" That's Microsoft's job, not ours. We'll never get anywhere (Score:2) Until we can get out of the kindergarten-like pi$sing matches that characterise anyone in a "scene", we're sunk. There's ABSOLUTELY NOTHING WRONG with multiple languages, multiple toolkits, multiple libraries, etc.
No one library, language, or whatever can do everything. Although I'm sure there's a sarky reply coming that C# is the best thing since sliced bread. I'm not making any judgments about the people involved in this, but part of me really wishes that the whole "Qt people hate GNOME people, LINUX people hate FreeBSD people, Inti people hate GTK-- people" thing would stop. Being "Lord High God" of a given project gives one a certain measure of prestige, sure. But shouldn't hackerdom be about skills, not who's got final signing authority over Project X? There's a HAPPY MEDIUM between anarchy and "well, if you aren't going to recognise my genius I'm going to go play somewhere else, and take my ball with me." You see this in ANY crowd, from volunteer paramedics to Goths to football players to insurance salesmen. Re:A zillion one-man projects... so what? (Score:2) FS development is *not* done by a corporation. A manager can't tell me what I should do (and neither can a zillion users). If I want to start a new project, even if it doesn't further any imaginary goal of *other* people, even if a zillion other programmers have done it before me, and maybe even better; quite frankly, that's my time and my choice. Re:ROTFL (Score:2) No, no, no, god no. Read last month's C++ Users Journal, or Strousup (damn those foreigners and their hard to spell names!). As "NULL References" are not defined, making one can do anything. They can crash you right as they are created. They can crash you when you try to take their address. What you show works on some implementations, but is a programming error, every bit as much as using memory after you free it is (which normally works until you allocate more memory).
But any garbage collected implementation, or an implementation that does anything odd with references (perhaps because the CPU is odd, or the ABI) can show the breakage, even if you try to check for the NULL reference, because the bug is not the use of the NULL reference, but its mere existence. The bug starts when you create the reference, and an implementation can crash at any moment after (or as) you do it; attempts to find and not use the NULL reference are just masking the bug, and not always masking it. Expect a future compiler rev to break such code. Expect porting the code to make the bug come alive. In other words, you are living the life of a VAX programmer that assumes '0 == *NULL' works, and will forever work (and that 'strlen(NULL) == 0' works as well). Soon everyone will want your code ported to that hot new Sun3 platform, and you will have to find and fix all those damn bugs. Re:How much for QT? (Score:2) Troll Tech is spending their resources to make their product (Qt) more attractive to their customers, and part of their strategy is to give it away for free to people who wouldn't be able to pay anyway. Both strategies make commercial sense, and both strategies benefit both the free software community and Linux(/Unix) users in general. I don't see the need to take sides, or declare one the moral winner. How much for QT? (Score:2) Clarification please (Score:2) The author keeps harping on how much he likes Qt, yet was working on C++ wrappers for Gtk. There was a disagreement about how to wrap Gtk, so he leaves. But does he join the new Inti project, or does he happily code with Qt and Kde while working for RedHat? I thought RedHat didn't approve of non-free tools like Qt. You mean there were only 3 people at the project's peak working on Gtk-- ? Now there is one, plus this new Inti project which hasn't produced anything but white papers so far. I bet Qt is rolling on the floor....
Why should the Gtk-- team and Gnomers have to study Qt to do their own C++? I thought Qt was non-free and off limits to Gnomers. How much of Qt have they copied? There are certain ethics to reverse engineering, such as: you don't look at the source even if it is available. What's wrong with plain old C with the object system imposed by Gtk+? I thought that it was Stallman's decree to use plain old C allocating objects with pointers, which works for most people. How many apps have been written with Gtk-- to date? Would some of those who have used Gtk-- in applications which have progressed beyond the pre-alpha stage care to comment? Lesson Learned? (Score:2) How many open source projects suffer from this type of problem? They decide to be incredibly flexible, and try to please everyone, but in the end they end up being too complicated. Commercial products have external pressures (e.g., deadlines) that help them avoid this. Not that I'm saying flexibility is bad - just that flexibility should be measured against ease of implementation, ease of use and maintainability. Re:Happily insane (Score:2) It's these zealots you speak of that give OOP its bad name. For the most part, they don't really understand OO design, and don't know when and when not to use it, or how to use it when it is appropriate. The result is a lot of crappy OO software that shouldn't be. I, for one, find that the Object Oriented philosophy frequently results in much more elegant solutions, both in design and in implementation. Just because many (self-proclaimed) Object Oriented Programmers don't understand Object Oriented Programming doesn't make it a bad paradigm. -- Re:GTK-- was okay except for completeness and docs (Score:2) Re:ROTFL (Score:2) > always reference something, there's no such thing > as a null reference.
Not quite true:
void func (int &a) { a += 5; }
func(*(int*)NULL);
That should crash quite reliably =] -Matt Re:Clarification please (Score:2) Because if someone else has attacked the problem before, you are rock stupid not to look closely at what they have done, so you can see what they did right, or wrong. So you know what might be good to copy (if it fits into your framework), and what to avoid at all costs. Do you think AMD isn't one of the first buyers of Intel CPUs? That they don't cut them open with surgeons' saws as soon as possible? Do you think that Linux kernel hackers aren't looking at NetBSD's USB system? Do you think McDonald's marketing teams don't eat at Burger King? That GM doesn't buy Ford cars and take them apart? I doubt any code was copied. Qt is a toolkit. Gtk-- wraps an existing toolkit. The slot/connection model in Gtk-- is all done within C++ while Qt uses a preprocessor, which makes Qt programs "not quite C++", which I think would sometimes be a pain (not an insurmountable obstacle, but still a pain). Gtk-- makes templates fit the task, which seems to work quite well. I'm not sure if Qt could have done the same with the state of C++ compilers when they started. If I were to do it now I would definitely do it the Gtk-- or Inti way. Being second sometimes has huge advantages. The rest of the slot/connection model is similar in design between Qt and Gtk--, but it is also similar to Smalltalk and other systems that have come before (and I would claim both did a good thing copying a previous successful solution). If you attack a strawman at least attack the right one. I don't think Stallman has a lot to do with Gtk+; I think it was Havoc or one of the other GTK+ developers who asserts that GTK+ being in C is a major huge good thing. Personally I could give a crap less what language it is written in if it (a) works, and (b) has good bindings to the language I want to use.
Now I realise that things never actually always work, so I will avoid using a tool if it is written in Intercal, but just because I wouldn't pick C to do an OO program in doesn't mean I'll avoid it if it has good C++ (or Java, or whatever) bindings. More importantly I think the official argument has always been that writing in C makes it easier to have Python/Perl/Java/whatever bindings than if it were in C++. I'm not positive I agree. I also don't think I care a whole lot. If I write a GUI app I'll probably write it in C++ (or Java, but if I do it in Java I'll use Swing anyway). But the official argument has never been "C rulz, C++ blows goats". Beats me how many have passed alpha. But I can say w3juke [sourceforge.net] was really really far easier to code up than any toolkit I had used in the past (note I haven't used Qt), including some non-Unix ones. But w3juke has not passed alpha. Not really because of the toolkit, but because of lack of documentation, and available time, and the abundant laziness of the programmer. And of course now I want to come up with another small task to go code up in Inti just so I know how it compares... Re:FLTK (Score:2) Reliability was dreadful, lots of functions don't behave as logic dictates they should, several things plain broken. It takes ages to find these problems and work around them. Your prototypes look good, but then several months down the line you find peculiar bugs popping up - for instance pop-up menus that occasionally just decide to stay visible for ever. Complexity - huge amount to learn:
String label = (String)JOptionPane.showInputDialog(frame, "Input label for ", "Info", JOptionPane.QUESTION_MESSAGE, null, null, field.substring(i));
What are all those args for? I can't remember just now, but you need to know it. Verbosity - it took me 5000 lines of fltk to replace 25k of swing. Performance - this was the worst. Memory footprint was horrendous, CPU usage was bad.
Stick a breakpoint in and examine the call stack sometime - it'll be at least 18 levels deep. This is a sign of shitty design, no matter what anyone tells you. But it's still better than MFC. Re:ROTFL (Score:2) Guillaume Laurent vs. Karl Nelson (Score:2) Re:ROTFL (Score:2) Re:How much for QT? (Score:2) In fact, Trolltech have said that they would GPL Qt, but they don't believe that it provides protection against non-free software dynamically linking with GPLed libraries. There are still a few clauses people don't like in the QPL, but as I understand it, these are being fixed. I know some people have a hard time with this concept, but not everybody who believes in Free Software thinks that the GPL is the only or even the best license. GTK-- was okay except for completeness and docs (Score:2) First, the program you wrote was plain C++; that's one advantage over poor QT and MFC. That meant you weren't constrained by a foolish macro/preprocessing system. (The preprocessor that was used building the library is no problem, IIRC.) The signaling system also was not at all bad, although it is another bloated C++ hack. OTOH, I found a lot of things that gtk-- simply didn't care to cover. I've had to plug in lots of C calls in my proggys to get the right behaviour. Anyway, it seemed to work. But try writing a drawing program in gtk-- and you're going to blow up. Another very disappointing mistake is the apparent lack of documentation. This goes for the GTK+/GNOME people on their high horses as well: NO LIBRARY WITHOUT COMPLETE DOCUMENTATION!! Get it? That's why Qt still seems to hold the edge despite the zillion disadvantages that it entertains. When you're writing with Qt, if you have a bit of experience with MFC, you can have your browser at your documentation and dive right in. Thanks. ROTFL (Score:2) I know that many of the C++ lovers here will disagree, but it is just because they haven't run into those walls _yet_.
C++ gave them enough rope to hang themselves *and* hang their users. At least Guillaume Laurent understands part of the issue now... What makes me wonder is his feeling that C is obsolete. That's fun: the C implementation is working very well, and all the C++ wrappers failed... Cheers, --fred++ Re:FLTK (Score:2) Re:Happily insane (Score:2) Sounds like the C++ version of Plauger's similar book for C. Something very useful. I have D&E and read some of it. It didn't seem to be useful for learning C++ at all. I saw LSC++SD in a "brick and mortar" bookstore, but wasn't impressed enough to hold on to it for more than about 20 seconds. However, Inside the C++ Object Model [bookpool.com] sounds from the title like it might be worth looking into. I've rarely ever found magazine articles to be of much help in things. That I can write C in C++ is probably one of the big negatives of C++ for me. I would be so tempted to just do what I know. Why do I need to "get" encapsulation when I already have it in the abstract sense of the design? C is just the vehicle I use to bridge the abstraction-to-reality gap. Don't assume that because I code in C, I didn't do anything object oriented in the design (I do, to varying degrees, in many projects). Re:Happily insane (Score:2) I got you to read it :-) It's not an issue of being unable to learn something; it's an issue of making effective use of time and deciding when it is ineffective to learn something that doesn't present itself with enough apparent benefit to be worth the cost in time to acquire that knowledge. I could spend all my life learning new stuff, and there is plenty out there to learn. But I would have accomplished nothing more than learning, and most certainly given nothing back in that process. I prefer the balance I have already taken by learning what is effective to learn, and using what I do learn, and being creative in the process.
Re:Happily insane (Score:2) Since you are at university, you are in an early learning mentality. You are spending less time using what you do know, because there is less that you know. This isn't a bad thing, and in fact it is an advantage. But I do wonder if your cgi app project included full and complete documentation on the administration and maintenance of the system. Did it account for future upgrade needs? Is it scalable? Can it be run effectively in a "five 9's" environment? You'll find there are a whole lot of new things to learn in the world beyond academia. Don't misunderstand me; what you learn there is important. But it is not everything. For me to invest time is significantly different than for you to invest the same time. In a sense, you may be right, as I don't have any great burning desire to learn C++, but I don't have any reason to shun it besides those I have mentioned. Where I am in life is entirely different than where you are. If C++ was there when I was in school, I have no doubt I would be coding in it today, unless something had replaced it (the time since I was in school way exceeds the lifetime of C++ so far). And by the time I did learn C, which I learned only because I was desperately seeking an alternative to assembly, I had already written over 800 apps, programs, utilities, or tool kits, in assembly, Fortran, and PL/1. I prefer OO for larger projects. But the OO I learned is rather different than the way it is expressed in C++, based on the fact that the two different schemes didn't mesh. Don't be so foolish as to assume that just because someone doesn't code in an OO language, they didn't design the project using OO methodologies. Now I don't always do OO, but for larger things I do, because it helps organize things. But from so much practice, I can code the OO design into C quite effectively, and don't have any big need to acquire a new language just to be able to code the same design a different way.
I do look forward to finding, some day, a clean yet effective object oriented language. C++ isn't it for me. Java could have been, and it was quite close, but the run time environment, and political/legal issues, ruled it out for me. For all I know C# might well be, but I won't be interested in it until it is at least available in a standard form on Unix (not likely by Microsoft). Or maybe it won't be. If it has an obese run time environment, I will walk away from it quite happily. As I said, the books are written specifically for a certain kind of learning style which I no longer (and will never again) use. The sad thing is they could be written in the style I would need. Indeed, another reply to my post suggested a book that may have potential. Re:Happily insane (Score:2) That reminds me of a time when I asked a programmer, who did all his application development in C++ and remarked that everything he did was object oriented, "what is object oriented?". He was stumped. He knew he was doing it, because he was using an OO language (which isn't true). But he couldn't define exactly what it was. That's not to say that all C++ programmers don't know what OO is. But it does say that some don't, and worse, are using the language as a crutch to cover them from ineffective design. The sad thing about OO design is that there are no "OO whiteboards" to make sure you do it right. Re:Reinventing QT ... (Score:2) Because that's what the filesystem standards say. /opt is for local add-ons. The latest FHS standard has loosened that restriction a bit, but still says upgrades may not overwrite anything in /opt. My personal preference would be ... It seems from Miguel's recent white paper "Let's Make Unix Not Suck" that he wants Gnome to be the one and only desktop system for Linux, enforced at a deeper level, even by policy embedded in the kernel. Miguel is not Linux and Miguel is not Gnome. This sort of stuff can't and won't happen.
Even if the kernel or X guys should play some Microsoftish tricks in the X server to prevent other stuff from running (purely hypothetical, this won't happen), someone would just fork them and maintain a clean version. I don't think all of Miguel's ideas are bad (though I don't think any of this stuff should go into the kernel; that's part of what causes the crashes in MS OSes) - we could probably use some sort of improved component model, but only if it is generally useful and not specific to either Gnome or KDE. Re:Reinventing QT ... (Score:2) As for the LS120 drive, bugzilla [redhat.com] is the right place to talk about this - we can't fix problems we aren't aware of. There were a couple of problems in the 6.1 and 6.2 installers; most of them should be fixed in the 7.0 beta.
http://slashdot.org/story/00/08/10/123221/guillaume-laurent-on-gtk-and-the-new-inti
Ask Kathleen Learn how to work around a couple of bugs in Excel to return double values; drill down on lists with anonymous types; and learn the difference between Build and Rebuild. Q I'm creating a library that will return double values to Excel 2003, but I encounter a problem when assigning double types through COM Interop to an Excel cell when the double value is PositiveInfinity, NegativeInfinity, or NaN -- I'm alarmed to see the cell value set to 65535! Think how bad things could get when Infinity is meant, but 65535 is given and then 65535 is used in a formula! I've reproduced this problem using Excel 2007 under Windows Vista with this code:

[Guid("F9E15EF1-0F32-4153-A04A-597EFD9DB95A")]
[ComVisible(true)]
public interface ITestDouble
{
    double GetPosInfinity();
}

[Guid("E34EFC46-33AF-4e95-AE4C-BAF83BAA3718")]
[ComVisible(true)]
[ClassInterface(ClassInterfaceType.None)]
[ComDefaultInterface(typeof(ITestDouble))]
public class TestDouble : ITestDouble
{
    public double GetPosInfinity()
    {
        return Double.PositiveInfinity;
    }
}

A The problem isn't in the COM Interop, which correctly assigns the double value. You can see this if you set a breakpoint in your VBA code and use the Immediate window to check the value (c is an instance of the TestDouble class):

? c.GetPosInfinity
1.#INF

Depending on how you assign the cell, one of two wrong values is assigned. The default assignment results in 65535, while assigning Val() results in 1 -- both of which are spectacularly wrong and represent a bug in Excel. There are three solutions, all of them terrible if you want IEEE behavior. You could raise an error when encountering an infinity or NaN value in your .NET library. This might require an extra wrapper so any .NET callers get a correct result, while you return an error to Excel or other Office applications that can't manage the infinity value. The benefit of this approach is that it will work with all spreadsheets using your library.
Another alternative is wrapping the call to your .NET library in VBA code that recognizes the infinity condition and places something more intelligent into the cell (probably an error value). This allows customized behavior, but if the spreadsheet neglects to take special steps, the invalid value is contained in the cell, which makes further calculations incorrect. The third possibility is to return a string value. This allows you to capture the information for display, but spreadsheet users are quite likely to use simplistic conversion code that places a 1 in the target cell:

Val(A1)

These are all terrible solutions, but the first approach best protects your spreadsheets from calculations based on bogus values.

Q I'm trying to create a list of employees and include an "All Employees" item at the top of my list. Here's how I retrieve the employees using LINQ to SQL:

Dim dc As New BizLibrary.AdventureWorksLTDataContext
Dim employees = From c In dc.Employees Distinct
                Select c.EmployeeID, name = c.LastName & ", " & c.FirstName

Then I create another anonymous type for the "All Employees" item and shove it into a collection so that it's queryable:

Dim allEmpOption = New With {.employeeid = 0, .name = "*All Employees*"}
Dim coll As New Collection
coll.Add(allEmpOption)

Next, I try to Union these two queries:

Dim fullList = employees.Union(From e In coll Select e)

But I get this exception when I do this:

Error 3 Option Strict On disallows implicit conversions from 'Microsoft.VisualBasic.Collection' to 'System.Collections.Generic.IEnumerable(Of <anonymous type>)'

A There are a couple of things going on here. First, the Collection contains an item of type Object, which can't be implicitly cast to the correct type. The Visual Basic Collection class also has performance issues, so avoid using it. The easiest way to deal with anonymous types is to pass them to a generic function.
You can then create a new list or an array of the anonymous type and use it to perform the union:

Private Function AddItem2(Of T)( _
        ByVal list As IEnumerable(Of T), _
        ByVal item As T) As IEnumerable(Of T)
    Dim items = New T() {item}
    Return list.Union(items)
End Function

You can create the array explicitly, but that limits the method to adding only a single item. To simplify the method and add multiple items, use a parameter array. The Union operator combines two IEnumerable(Of T) collections by placing items contained in the parameter at the end of the main list. By switching which list is passed as the parameter, you can control whether new items are included at the beginning or end of the list:

Private Function AddItem(Of T)( _
        ByVal list As IEnumerable(Of T), _
        ByVal atTop As Boolean, _
        ByVal ParamArray items() As T) As IEnumerable(Of T)
    If atTop Then
        Return items.Union(list)
    End If
    Return list.Union(items)
End Function

Calling this function exposes the second problem in your sample code because it exposes a type error:

Dim allCustomers = New With {.CustomerId = 0, .name = "----- All -----"}
AddItem(customers, allCustomers)

This problem occurs because Visual Basic lets you choose the level of immutability of your anonymous type. Unfortunately, the slight variation in how you've created these two anonymous types results in different immutability -- the two anonymous types have the same fields, but differ by immutability. If you create a version of your application that compiles by commenting out the failing code and use Reflector to view the IL, you'll find you have anonymous types VB$AnonymousType_0<T0, T1> and VB$AnonymousType_1<T0, T1>. The IL fragments differ by whether they override Equals, GetHashCode, and IEquatable, as well as whether the CustomerId and name are read only (the immutable version won't have property setters). Immutability describes whether you can change the values of the anonymous type's objects.
This is linked to equality and hash codes because changing the value of hash codes after objects have been placed in a dictionary would wreak havoc. To avoid this, VB supports reference type semantics and also provides the Key keyword to let you specify what values within anonymous types are immutable, and only these values are used in GetHashCode. In most cases, if you define an anonymous type without the Key keyword, the object reference is used as the hash. This is the default behavior of reference types, so Equals, GetHashCode, and IEquatable aren't overridden, and you can change any value in the object. You can choose which to use, but you can't combine them as they're fundamentally different types. When you create an anonymous type outside a LINQ expression, and define it without the Key keyword, the resulting anonymous type is fully mutable and uses reference type semantics. Apparently for consistency with C#, when you create an anonymous type without the Key keyword within a LINQ expression, it's fully immutable -- as though the Key keyword had been added to every field. You can't combine objects of anonymous types with different immutability, so you must force the two types to have the same immutability. You can either make the LINQ query produce a mutable anonymous type or force the constructor-based creation of the anonymous type to create an immutable type. You force LINQ to create a mutable type by calling the anonymous type's constructor explicitly:

Dim customers = From c In dc.Customers Distinct
                Select New With {c.CustomerID, .name = c.LastName & ", " & c.FirstName}

You force creation of an immutable anonymous type by including the Key keyword on every field:

Dim allCustomers = New With {Key .CustomerId = 0, Key .name = "----- All -----"}

Make either (but not both) of these changes, and the new method will add the "All Employees" entry to your list, where it will be available to your bound combo box.
Q I'm converting a WinForms application to Windows Presentation Foundation (WPF), but I can't find a replacement for the Timer control. How do I add a timer in WPF? A The WPF timer was added as the DispatcherTimer class in the System.Windows.Threading namespace, along with the Dispatcher class it works with. One key point about this timer: When you're working with WPF, the handlers fire on the UI thread, allowing you to make changes to UI components. You can change the background color of a button after a delay using code like this:

System.Windows.Threading.DispatcherTimer timer =
    new System.Windows.Threading.DispatcherTimer();

private void Button_Click(object sender, RoutedEventArgs e)
{
    timer.Interval = new TimeSpan(0, 0, 2);
    timer.Tick += Timer_Tick;
    timer.IsEnabled = true;
}

private void Timer_Tick(object sender, EventArgs e)
{
    this.Background = Brushes.Red;
}

Q What's the difference between Build and Rebuild All in Visual Basic? A The Visual Basic background compiler runs on a separate thread and moves all the projects of your solution through a series of states from "No state" to "Compiled." When you ask for a build on a project, the background compiler finishes up the specific projects you've requested and writes the required information to disk, updating the .DLL, and potentially, a .PDB and other files. This leverages the work of the background compiler. Hitting Rebuild on an assembly resets its state to No State, and by necessity, sets all the dependent projects to No State. This effectively throws out the work of the background compiler. Rebuild also differs because it's dependent on project build order. If the build order is incorrect, the compiler winds up rebuilding the same assembly multiple times, which is inefficient and can cause performance problems. In general, you can save time by using Build. If you get different results with Rebuild, it's a bug. If you can reproduce these different results, please submit it through Help/Report a Bug.
If Rebuild takes a long time, check whether the dependency order of your projects is correct. Q I can't get the immediate window to return the correct value for Guid variables in Visual Basic. I have a variable named CustomerId, which is a Guid. When I use the immediate window, it always says that it is empty, even though I know from running the code that it's not empty.

? CustomerId
{System.Guid}
    Empty: Nothing

A Immediate has a quirk (bug) where it doesn't return the value for ToString on the Guid instance; rather it returns the name of the class and its sole shared field (Empty). It's not telling you that your variable's value is empty, but that your Guid has a shared field called Empty. This is not helpful and is likely to be fixed in a future version of Visual Studio. In the meantime, you'll need to call the ToString() method explicitly, so you can see the Guid value:

? CustomerId.ToString()
"62a6185a-7746-4e95-a81f-225f95b75367"
https://visualstudiomagazine.com/articles/2008/07/01/return-double-values-in-excel.aspx
You need to line up your text output vertically. For example, if you are exporting tabular data, you may want it to look like this:

Jim Willcox         Mesa                AZ
Bill Johnson        San Mateo           CA
Robert Robertson    Fort Collins        CO

You will probably also want to be able to right- or left-justify the text.

Use ostream or wostream, for narrow or wide characters, defined in <ostream>, and the standard stream manipulators to set the field width and justify the text. Example 10-1 shows how.

Example 10-1. Lining up text output

#include <iostream>
#include <iomanip>
#include <string>

using namespace std;

int main() {
    ios_base::fmtflags flags = cout.flags();
    string first, last, citystate;
    int width = 20;

    first = "Richard";
    last = "Stevens";
    citystate = "Tucson, AZ";

    cout << left                      // Left-justify in each field
         << setw(width) << first      // Then, repeatedly set the width
         << setw(width) << last       // and write some data
         << setw(width) << citystate << endl;

    cout.flags(flags);
}

The output looks like this:

Richard             Stevens             Tucson, AZ

A manipulator is a function that operates on a stream. Manipulators are applied to a stream with operator<<. The stream's format (input or output) is controlled by a set of flags and settings on the ultimate base stream class, ios_base. Manipulators exist to provide convenient shorthand for adjusting these flags and settings without having to explicitly set them via setf or flags, which is cumbersome to write and ugly to read.
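As an aside, the same fixed-width, left-justified layout can be expressed in Python with str.ljust. This is an illustrative analogue, not part of the C++ recipe, and the helper name is made up:

```python
# Left-justify each field in a 20-character column, mirroring
# the C++ setw/left example above.
width = 20

def format_row(first, last, citystate):
    """Return one left-justified output line (trailing spaces stripped)."""
    return (first.ljust(width) + last.ljust(width) + citystate.ljust(width)).rstrip()

print(format_row("Richard", "Stevens", "Tucson, AZ"))
```

Unlike setw, which must be re-applied before every field, ljust is an ordinary string method, so the padding is explicit at each call site.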
https://www.oreilly.com/library/view/c-cookbook/0596007612/ch10s02.html
Mission Accomplished?

I originally set out to make some simple widgets that would reduce the amount of JavaScript I would have to learn and write. In that regard, Pyxley was wildly successful. Since the release, the React ecosystem has evolved and changed in so many ways. Most notably for Pyxley, PyReact was abandoned, triggering much needed updates.

Goodbye PyReact! (and other stuff)

Admittedly, a lot of the JavaScript dependencies in the previous version of Pyxley were a bit clunky. There were two package managers: NPM and Bower. Then require.js was needed so that the modules loaded properly. Removing PyReact provided the opportunity to re-examine this process and simplify some dependencies. When creating a new app, it's not clear what these things do or why you should care about them. That merited fixing, which has been incorporated in the update.

Hello, Webpack!

Webpack is a module bundler that allows us to bundle all of the JSX code created by Pyxley. Previously, you were required to install the various packages with Bower and then list those dependencies in the app. Your application may have had something like the code snippet below.

# List all of your JavaScript dependencies
scripts = [
    "./bower_components/react/react.js",
    "./bower_components/react-bootstrap/react-bootstrap.min.js",
    "./bower_components/pyxley/build/pyxley.js",
]

# List all of your css dependencies
css = [
    "./bower_components/bootstrap/dist/css/bootstrap.min.css",
    "./css/main.css"
]

In this release, we've provided a pyxley.utils module that wraps two important functions: npm install and webpack. Starting a new app is now as simple as typing pyxapp --init . in your project directory. This command will create a default package.json and will install all the dependencies. The code snippet above is now completely optional. The latest release of PyxleyJS now includes all of the necessary style sheets. The following code snippet will create a new file called webpack.config.js in your project folder.
from pyxley.utils import Webpack

wp = Webpack(".")  # Set '.' to be the root directory

# Create the webpack.config.js
wp.create_webpack_config(
    "layout.js",       # Look for a file called layout.js
    "./demo/static/",  # look for layout.js here
    "bundle",          # This is what we are going to call our bundle
    "./demo/static/"   # This is where we will put it
)

# Run webpack
wp.run()

After this code executes, any JSX code that had been created by Pyxley will be bundled into ./demo/static/bundle.js. The NPM package.json and webpack.config.js can then be modified to remove or append packages as needed. Simply remove the wp.create_webpack_config function call and only the bundle will be created.

Running the App

All of the examples have been updated to handle the changes. To run an app, simply go to the example directory (e.g. pyxley/examples/plotly) and type

# This will create package.json and run npm install
pyxapp --init .

# This will run the flask app and build the webpack bundle
python project/app.py

The NFL Meets Pyxley: A More Complex Example

On the weekend of February 20th, I participated in the first-ever NFL Hackathon hosted by Angelhack and the NFL Engineering team. While I was intrigued by the opportunity to play with some NFL data, I was more excited about the potential of building an app with Pyxley. Although I didn't win, with Pyxley's help I was able to build a pretty useful web app.

The Code

The code and instructions are available at nfl-hack.
https://multithreaded.stitchfix.com/blog/2016/03/10/pyxley-update/
Hey, I have a new error that I can't seem to fix that is occurring whilst I try to delete contacts from my basic Contact App. At the moment I am defining the destroy action in my contacts controller as:

def destroy
  current_user.contacts.delete(params[:id])
  redirect_to index_path
end

In my view I have:

<%= button_to 'Delete', contact, :method => :delete %>

and the error I am getting is:

ActiveRecord::AssociationTypeMismatch in ContactController#destroy
Contact(#24465528) expected, got String(#10155204)

Any help people can offer on this really would be much appreciated!

Thanks, Tom

Okay, turned out I managed to work it out myself in the end. The working implementation I have gone with is this:

Controller

def destroy
  contact = current_user.contacts.find(params[:id])
  contact.destroy
  redirect_to index_path
end

and View

Hopefully this will help someone who is stuck in the future.
http://community.sitepoint.com/t/association-type-mismatch-error-deleting-contacts/15271
_WinHttpSimpleFormFill alternative

By AutID, in AutoIt General Help and Support

Hello, I am trying to make an auto log in script that will log in on our server with all the accounts of the people working on our domain, read from the database (around 20 accounts). The reason for this is for everyone to receive update e-mail. We receive update e-mail with the latest updates every time we log in, so that is the reason that I am creating this. Here is a small reproducer:

#include "WinHttp.au3"

OnAutoItExitRegister("_sExit")

Global $sRead, $hOpen, $hConnect
Global Const $sUsername[3] = ["name1", "name2", "name3"]
Global Const $sPassword[3] = ["pass1", "pass2", "pass3"]
Global Const $sUrl = ""
Global Const $sUserAgent = "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)"

Local $a
Local $b

$hOpen = _WinHttpOpen($sUserAgent)
$hConnect = _WinHttpConnect($hOpen, $sUrl)

For $i = 0 To UBound($sUsername) - 1
    $a = $sUsername[$i]
    $b = $sPassword[$i]
    $sRead = _WinHttpSimpleFormFill($hConnect, _
            Default, _
            "login_1", _
            "name:name", $a, _
            "name:pass", $b)
    If @error Then
        ConsoleWrite(@error & @LF)
    Else
        If StringInStr($sRead, $a) Then ConsoleWrite($a & " successfully logged in." & @LF)
    EndIf
Next

Func _sExit()
    _WinHttpCloseHandle($hConnect)
    _WinHttpCloseHandle($hOpen)
EndFunc

If I put the $hOpen and $hConnect within the For/Next loop it will work, but it will take more than 1 min to log in on our server with all the accounts. If not, it won't work for all the accounts. If I use $hOpen and $hConnect only once then the server will fire an error after 4-5 log ins and the _WinHttpSimpleFormFill function will return that there are no forms, @error 1. Any other way to perform a log in without filling the form with data and submitting it? Cheers.
https://www.autoitscript.com/forum/topic/157073-_winhttpsimpleformfill-alternative/
The more your Oracle8i applications increase in complexity, the more you must tune the system to optimize performance and prevent data bottlenecks. This chapter describes how to configure your Oracle8i installation to optimize its performance. It contains the following sections:

Oracle8i is a highly-optimizable software product. Frequent tuning optimizes system performance and prevents data bottlenecks. Before tuning the system, observe its normal behavior using the SGI IRIX tools described in the next section.

SGI IRIX provides performance monitoring tools that you can use to assess database performance and determine database requirements. In addition to providing statistics for Oracle processes, these tools provide statistics for CPU usage, interrupts, swapping, paging, and context switching for the entire system.

Use the sar command to monitor swapping, paging, disk, and CPU activity, depending on the switches that you supply with the command. The following command displays a summary of paging activity ten times, at ten second intervals:

$ sar -p 10 10

The following example shows sample output from the command:

14:14:55  atch/s  pgin/s  ppgin/s  pflt/s  vflt/s  slock/s
14:15:05    0.00    0.00     0.00    0.60    1.00     0.00
14:15:15    0.00    0.00     0.00    0.10    0.60     0.00
14:15:25    0.00    0.00     0.00    0.00    0.00     0.00
14:15:35    0.00    0.00     0.00    0.00    0.00     0.00
14:15:45    0.00    0.00     0.00    0.00    0.00     0.00
14:15:55    0.00    0.00     0.00    0.00    0.00     0.00
14:16:05    0.00    0.00     0.00    0.00    0.00     0.00
14:16:15    0.00    0.00     0.00    0.00    0.00     0.00
Average     0.00    0.00     0.00    0.07    0.16     0.00

Use the following command to report information on swap space usage.
A shortage of swap space can result in the system hanging and slow response time:

$ swap -l
1 /dev/swap          0,402 0 0 1048576 1026528 1048576 0
2 /dev/dsk/dks0d1s3  0,408 0 0 7168000 7147520 7168000 0
3 /dev/dsk/dks0d1s4  0,411 0 0 7168000 7151488 7168000 0
4 /dev/dsk/dks0d1s5  0,414 0 0 2662400 2642240 2662400 0

Oracle8i release 3 (8.1.7) includes a set of packages for database tuning called STATPACKS. For more information on STATPACKS, see Oracle8i Designing and Tuning for Performance.

The utlbstat.sql and utlestat.sql scripts are used to monitor Oracle database performance and tune the System Global Area (SGA) data structures. For more information on these scripts, see Oracle8i Designing and Tuning for Performance. On SGI IRIX, the scripts are located in the $ORACLE_HOME/rdbms/admin/ directory.

Start the memory tuning process by measuring paging and swapping space to determine how much memory is available. The Oracle buffer manager ensures that the more frequently accessed data is cached longer. Monitoring the buffer manager and tuning the buffer cache can have a significant influence on Oracle performance. The optimal Oracle buffer size for your system depends on the overall system load and the relative priority of Oracle over other applications.

Try to minimize swapping because it causes significant UNIX overhead. Use the sar -w command on SGI IRIX to check for swapping. If your system is swapping and you must conserve memory:

On SGI IRIX, use the swap -l command to determine how much swap space is currently in use. Use the swap -a command to add swap space to your system. Consult your SGI IRIX documentation for further information.

Paging might not present as serious a problem as swapping, because an entire program does not have to be stored in memory to run. A small number of page-outs might not noticeably affect the performance of your system.
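As an aside, totaling the free space from a listing like the one above is easy to script. The sketch below is illustrative only: it assumes the column layout of the sample output shown (total blocks in the 4th-from-last column, free blocks in the 3rd-from-last — swap -l formats vary between systems), and the helper name is made up:

```python
# Hypothetical helper: sum up total and free swap blocks from
# `swap -l`-style output, using the sample listing above.
SAMPLE = """\
1 /dev/swap 0,402 0 0 1048576 1026528 1048576 0
2 /dev/dsk/dks0d1s3 0,408 0 0 7168000 7147520 7168000 0
3 /dev/dsk/dks0d1s4 0,411 0 0 7168000 7151488 7168000 0
4 /dev/dsk/dks0d1s5 0,414 0 0 2662400 2642240 2662400 0"""

def swap_summary(text):
    """Return (total_blocks, free_blocks) summed over all swap areas."""
    total = free = 0
    for line in text.splitlines():
        fields = line.split()
        total += int(fields[-4])  # assumed: swap area length in blocks
        free += int(fields[-3])   # assumed: free blocks
    return total, free

total, free = swap_summary(SAMPLE)
print(f"{free}/{total} blocks free ({100.0 * free / total:.1f}%)")
```

A consistently low free figure here is the same signal the documentation warns about: a shortage of swap space that can hang the system.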
To detect excessive paging, run measurements during periods of fast response or idle time to compare against measurements from periods of slow response. Use the sar -p command to monitor paging. The following columns from the output of this command are important: If your system consistently has excessive page-out activity, consider the following solutions: You cannot start the database without sufficient shared memory. If necessary, reconfigure the UNIX kernel to increase shared memory. I/O bottlenecks are the easiest performance problems to identify. Balance I/O evenly across all available disks to reduce disk access times. For smaller databases and those not using the Parallel Query option, ensure that different datafiles and tablespaces are distributed across the available disks. Oracle offers asynchronous I/O, multiple DBWR processes, and I/O slaves as solutions to prevent database writer (DBWR) activity from becoming a bottleneck. Asynchronous I/O allows processes to proceed with the next operation without having to wait after issuing a write and therefore improves system performance by minimizing idle time. SGI IRIX supports asynchronous I/O to both raw partitions, XLV or XVM volumes, and filesystem datafiles. I/O slaves are specialized processes whose only function is to perform I/O. They replace the Oracle7 feature, Multiple DBWRs. In fact, they are a generalization of Multiple DBWRs and can be deployed by other processes as well. They can operate whether or not asynchronous I/O is available. They are allocated memory from the LARGE_POOL_SIZE parameters, if set, otherwise they are allocated memory from shared memory buffers. I/O slaves include a set of initialization parameters that allow a degree of control over the way they operate. Table 2-1 lists the initialization parameters that control the operation of asynchronous I/O and I/O slaves. There might be times when the use of asynchronous I/O is not desirable or not possible. 
The first two parameters in Table 2-1, DISK_ASYNCH_IO and TAPE_ASYNCH_IO, allow asynchronous I/O to be switched off respectively for disk and tape devices. Because the number of I/O slaves for each process type defaults to zero, no I/O slaves are deployed unless set. Set the DBWR_IO_SLAVES parameter to greater than 0 if DISK_ASYNCH_IO or TAPE_ASYNCH_IO is disabled, otherwise DBWR becomes a bottleneck. In this case, the optimal value on SGI IRIX for DBWR_IO_SLAVES is 4. DB_WRITER_PROCESSES replaces the Oracle7 parameter DB_WRITERS and specifies the initial number of database writer processes for an instance. If you use DBWR_IO_SLAVES, only one database writer process is used, regardless of the setting for DB_WRITER_PROCESSES. See "Customizing the initsid.ora File" for information on other initialization parameters. SGI IRIX allows a choice of file systems. File systems have different characteristics, and the techniques they use to access data can have a substantial affect on database performance. Typical file system choices are: The EFS file systemThe EFS file system To monitor disk performance, use the sar -d command. Table 2-2 describes important sar -d output fields. High values for %busy, avque, avwait, and avserv can indicate I/O bottlenecks. Correlate busy devices from the sar -d output with the Oracle datafiles stored on those devices. Determine which Oracle datafiles are causing the I/O bottleneck and spread these datafiles across additional devices. Oracle block sizes should either match disk block sizes or be a multiple of disk block size. If possible, do a file system check on the partition before using it for database files, then make a new file system to ensure that it is clean and unfragmented. Distribute disk I/O as evenly as possible, and separate log files from database files. The following sections describe how to tune CPU usage. Oracle is designed to operate with all users and background processes operating at the same priority level. 
Changing priorities cause unexpected effects on contention and response times. For example, if the log writer process (LGWR) gets a low priority, it is not executed frequently enough and LGWR becomes a bottleneck. On the other hand, if LGWR has a high priority, user processes can suffer poor response time. In a multi-processor environment, use processor affinity and binding if it is available on your system. Processor binding prevents a process from migrating from one CPU to another, allowing the information in the CPU cache to be better utilized. You can bind a server shadow process to make use of the cache as it is always active, and let background processes flow between CPUs. If your system is CPU-bound, move applications to a separate system to reduce the load on the CPU. For example, you can off-load foreground processes such as Oracle Forms to a client system to free CPU cycles on the database server system. Oracle processes usually use semaphores to coordinate access to shared resources. If a shared resource is locked, a process suspends and waits for the shared resource to become available. One way to improve shared resource coordination is to use a post-wait driver instead of semaphores. A post-wait driver is a faster, less expensive synchronization mechanism than a semaphore. The Oracle Post-Wait driver implements an optimized mechanism of inter-process communication, without the overhead of signal handlers or semaphores. It improves performance for Oracle8i. To enable the Oracle Post-Wait driver, set the ORA_USE_PW environment variable to 1 before starting the database: $ ORA_USE_PW= 1 If you must transfer large amounts of data between the user and Oracle8i (for example, using the export and import utilities), it is efficient to You can improve performance by keeping the UNIX kernel as small as possible. The UNIX kernel typically pre-allocates physical RAM, leaving less memory available for other processes such as Oracle. 
Traditionally, kernel parameters such as NBUF, NFILE, and NOFILES were used to adjust kernel size. However, most UNIX implementations dynamically adjust those parameters at run time, even though they are present in the UNIX configuration file. Look for memory-mapped video drivers, networking drivers, and disk drivers that could be de-installed, freeing more memory for use by other processes. This section describes how you can improve the performance of Oracle8i by optimizing the size of Oracle blocks for the files in your database. On SGI IRIX, the default Oracle block size is 2 KB and the maximum block size is 32 KB. You can set the block size to any multiple of 2 KB up to a maximum of 16 KB, inclusive. The optimal block size is typically the default size, however, it depends on the applications. To create a database with a different Oracle block size, add the following line to the init sid .ora file before creating the database: db_block_size=new_block_size To take full advantage of raw devices, adjust the size of the Oracle8i buffer cache and, if memory is limited, the SGI IRIX buffer cache: The SGI IRIX buffer cache holds blocks of data in memory while they are being transferred from memory to disk, or vice versa. The Oracle8i buffer cache is the area in memory that stores the Oracle database buffers. Since Oracle8i can use raw devices, it does not need to use the SGI IRIX buffer cache. If you use raw devices, you must increase the size of the Oracle8i buffer cache. If the amount of memory on the system is limited, make a corresponding decrease in the SGI IRIX buffer cache size. It is possible to increase or decrease the Oracle8i Buffer Cache by modifying the DB_BLOCK_BUFFERS parameter in the init sid .ora file and restarting the instance. Use the sar command to determine which buffer caches you must increase or decrease. Table 2-3 shows the options of the sar command. 
To adjust the cache size, perform one of the following: The following sections describe the IRIX_CPU_AFFNITY and IRIX_SCHEDULER initialization parameters. The IRIX_CPU_AFFINITY initialization parameter enables you to specify processor affinity on an IRIX system. This parameter provides more control over system resources. The syntax is as follows: IRIX_CPU_AFFINITY="NAME=MASK [NAME=MASK...]" In the preceding example: The following specification of the IRIX_CPU_AFFINITY parameter assigns the PMON, SMON, and CKPT processes to CPU 1, the DBWR process to CPU 2, and the LOCK process is to CPU 3: IRIX_CPU_AFFINITY="pmon=1 dbwr=2 smon=1 lock=3 ckpt=1 other=4:7" All other Oracle8i processes are assigned in round-robin fashion (based on Oracle process id) to CPU 4 through 7. The following specification of the IRIX_CPU_AFFINITY parameter assigns all the Oracle8i system processes to CPUs 0 through 4 in round-robin fashion based on the Oracle process id and assigns all other Oracle8i processes to CPUs 5 through 19 in round-robin fashion based on the Oracle process id: IRIX_CPU_AFFINITY="sys=0:4 other=5:19" The IRIX_SCHEDULER initialization parameter allows you to specify certain IRIX scheduler options. Default value is none. The syntax is as follows: IRIX_SCHEDULER="NAME=MASK [NAME=MASK...]" In the preceding example: The $ORACLE_HOME/bin/oracle executable might require you to set super user ID root privilege, depending on the scheduler options used. See the schedctl man page for the details of the schedctl command for information on the LOCK_SGA parameter. The following examples demonstrate using the IRIX_SCHEDULER parameter: IRIX_SCHEDULER="slice=50" IRIX_SCHEDULER="renice=39 slice=50" The primary function of the SGA is to cache database information. If the SGA begins paging to disk, caching becomes an overhead rather than a benefit. If you have installed Oracle8i on SGI IRIX, you can use the LOCK_SGA parameter to lock the memory pages associated with the SGA. 
Such locking prevents paging and helps asynchronous I/O on raw files to work efficiently. Locked memory is not available for use by other applications. This option should be used only if there is enough physical memory on the system to support the Oracle instance, the applications, and other users. If the amount of physical memory on the system is insufficient, performance degradation may occur due to increased memory paging or swapping.

To lock the SGA, add the following line to the initsid.ora file:

LOCK_SGA=TRUE

Then, as the root user, enter:

$ systune maxlkmem 2050

The following message appears:

maxlkmem=current_setting
Do you really want to change maxlkmem to 2050?

Enter Y to change the setting to 2050. Any changes made in this way do not affect the kernel default setting for this parameter. For more information, type man systune(1M) at the command prompt or consult your system administrator.

This section describes the trace (or dump) and alert files Oracle8i creates to diagnose and resolve operating problems. The file name format of a trace file is processname_sid_unixpid.trc, where:

The alert_sid.log file is associated with a database and is located in the directory specified by the initsid.ora parameter BACKGROUND_DUMP_DEST. The default directory is $ORACLE_HOME/rdbms/log.

This section describes the use of raw devices on Oracle8i. Raw devices/volumes have the following disadvantages when used on SGI IRIX:

In addition to the disadvantages described in the previous section, you should consider the following issues when deciding whether to use raw devices/volumes: use raw devices/volumes for Oracle files only if your site has at least as many raw disk partitions as Oracle datafiles. If the raw disk partitions are already formatted, match datafile size to partition size as closely as possible to avoid wasting space. With logical volumes, you can create logical disks based on raw partition availability.
Because logical disks can be moved to more than one disk, the disk drives do not have to be reformatted to obtain logical disk sizes. You can optimize disk performance when the database is online by moving hot spots to cooler drives. Most hardware vendors who provide the logical disk facility also provide a graphical user interface that can be used for tuning.

You can mirror logical volumes to protect against loss of data. If one copy of a mirror fails, dynamic re-synchronization is possible. Some vendors also provide the ability to replace drives online in conjunction with the mirroring facility.

Consider the following items when creating raw devices: the owner and group of the raw devices should be set to oracle and oinstall, respectively.
http://docs.oracle.com/html/A87435_01/ch2.htm
> > Apply the following patch > > > > --- lib/libxview/attr/attr.c.org Sun Sep 24 22:22:04 2000 > > +++ lib/libxview/attr/attr.c Sun Sep 24 22:21:34 2000 > > @@ -93,7 +93,11 @@ > > */ > > #if (__GLIBC__ > 2) || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 1) > > __va_copy(valist, valist1); > > +#if defined(__powerpc__) > > + avlist = avlist1; > > +#else > > __va_copy(avlist, avlist1); > > +#endif > > #else > > valist = valist1; > > avlist = avlist1; > > > > and it just works (for me, YMMV, yadda). > > > > Sample packages in > > Huh? Your mileage WILL vary. That defeats the entire point of using > __va_copy(). avlist1 is, I assume, a va_list; a va_list is an array > type, and so *avlist1 means avlist1[0], which does not dereference an > uninitialized pointer. This patch is wrong. avlist is _not_ a va_list. Please look up the definition of avlist in lib/libxview/attr/attr.h yourself. avlist is a pointer to some u32 chunk of data, either a pointer to a avlist attribute or value IIRC. As such, __va_copy just so happens to work for other archs by accident. I am well aware of the nature of PPC va_list, and I've had trouble with this specific type before. Same symptoms: doing bad things with va_lists results in segfaults. I'd like to hear from anybody who still gets segfaults from libxview, maybe we can establish just how much YMMV. > A FAQ on this sort of problem has been posted to debian-powerpc about a > dozen times now, and also to debian-arm, IIRC. Feel free to point out a URL (searchable Debian ML archives would be nice to have, sometimes). Michael
https://lists.debian.org/debian-powerpc/2000/09/msg00461.html
Running maximum of numpy array values

I need a fast way to keep a running maximum of a numpy array. For example, if my array was:

x = numpy.array([11,12,13,20,19,18,17,18,23,21])

I'd want:

numpy.array([11,12,13,20,20,20,20,20,23,23])

Obviously I could do this with a little loop:

def running_max(x):
    result = [x[0]]
    for val in x[1:]:  # skip the seed value so len(result) == len(x)
        if val > result[-1]:
            result.append(val)
        else:
            result.append(result[-1])
    return result

But my arrays have hundreds of thousands of entries and I need to call this many times. It seems like there's got to be a numpy trick to remove the loop, but I can't seem to find anything that will work. The alternative would be to write this as a C extension, but it seems like I'd be reinventing the wheel.
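For what it's worth, NumPy does ship this as a single vectorized call: numpy.maximum.accumulate applies the element-wise maximum ufunc cumulatively along the array, which is exactly a running maximum:

```python
import numpy as np

x = np.array([11, 12, 13, 20, 19, 18, 17, 18, 23, 21])

# Cumulative (running) maximum in one vectorized call.
running = np.maximum.accumulate(x)
print(running)  # [11 12 13 20 20 20 20 20 23 23]
```

This runs in C inside NumPy, so there is no need for a Python loop or a custom extension.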
https://prodevsblog.com/questions/149635/running-maximum-of-numpy-array-values/
Limits of the Android Geocoder

Android provides a Geocoder in the standard APIs which is pretty simple to use, as this code excerpt demonstrates:

import android.content.Context;
import android.location.Address;
import android.location.Geocoder;
import it.tidalwave.geo.Coordinate;

Context context = ... ;
Coordinate coordinate = ... ;

Geocoder geocoder = new Geocoder(context);
List<Address> addresses = geocoder.getFromLocation(coordinate.getLatitude(),
                                                   coordinate.getLongitude(),
                                                   100);

Unfortunately, tests on the field proved it to be almost useless, at least in my typical scenarios. The problem is that it seems that a single geocoder service is available (and of course it is the one provided by Google by means of its web services) and the following problems arise:

- I've repeated this a number of times in the past years: the "always connected" thing is a myth. Italy's mobile network coverage is excellent and just an epsilon below 100%. Still, in the countryside, or in a dense wood, it's easy to find places where the radio signal is so weak that the connection doesn't work or takes a very long time to transfer data. Usually it's a matter of moving around to pick a better signal, but this is not the thing you want to do while observing birds. Thus, I need to have a geocoder capable of resolving at least a subset of common locations without requiring a network connection.
- The quality of data provided by the Google geocoder in my country is very variable. Sometimes it's able to pick even the name of the nearest, small fraction of houses around; other times it just resorts to returning "Province of XYZ", which is too vague. Being able to aggregate multiple results from different geocoders (e.g. GeoNames, or Yahoo!) might resolve the issue or at least improve things here.
- I've realized that for my purposes I also need to rely on user-provided location names.
For a birder, the indication "Observatory #5 in the FooBar Wildlife Sanctuary" might be extremely useful, but I don't think it's even possible to have the Google geocoder resolve it (while it would be possible, for instance, with GeoNames, which accepts user-contributed data). Another case for multiple geocoding services, one of which should be local. Indeed, I'm a bit surprised that Android doesn't provide a service to register custom locations in the preferences, as it is possible, for example, to have a database of contacts.

In other words, the Location APIs in Android are too closed and Google-centric. I'll have to import the code used in forceTen, which implements pluggable GeoCoders and already has got a GeoNames provider. Fortunately, the fact I'm able to run at least a subset of the NetBeans Lookup API on Android should make the task not too hard to achieve.

First Screencast of blueBill Mobile

I'm able to provide a first decent screencast of blueBill Mobile, even though I've still to learn and improve the way screencasts of Android applications can be made.
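The pluggable-geocoder idea described above (try a local, offline provider first, fall back to network services) can be sketched in a few lines. This is an illustrative sketch in Python rather than Android Java, and every class and method name here is invented for the example:

```python
# Illustrative sketch (hypothetical names, not the Android API): a chain
# of geocoders tried in order, so a local offline provider can answer
# before any network-based one is consulted.
class LocalGeocoder:
    def __init__(self, places):
        self.places = places  # {(lat, lon): name}, e.g. user-provided spots

    def reverse(self, lat, lon):
        return self.places.get((lat, lon))

class NetworkGeocoder:
    def reverse(self, lat, lon):
        return None  # pretend the radio signal is too weak to answer

def reverse_geocode(chain, lat, lon):
    """Return the first non-None answer from the provider chain."""
    for provider in chain:
        name = provider.reverse(lat, lon)
        if name is not None:
            return name
    return "unknown location"

chain = [LocalGeocoder({(45.0, 9.0): "Observatory #5"}), NetworkGeocoder()]
print(reverse_geocode(chain, 45.0, 9.0))
```

Ordering the chain local-first is what makes the "Observatory #5" kind of answer possible even when no network is available.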
https://dzone.com/articles/android-geocoder-limitations
This lab is an introduction to Fundamental OpenGL Functions

After the lab lecture, you have one week to:

Pixel Format

Function              Description
------------------------------------------------------------------
ChoosePixelFormat()   Obtains a DC's pixel format that's the closest match to a pixel format you've provided.
SetPixelFormat()      Sets a DC's current pixel format to the pixel format index specified.
GetPixelFormat()      Returns the pixel format index of a DC's current pixel format.
DescribePixelFormat() Given a DC and a pixel format index, fills a PIXELFORMATDESCRIPTOR data structure with the pixel format's properties.

The main properties of pixel format include:

The following structure can be found in the Windows include file WINGDI.H:

typedef struct tagPIXELFORMATDESCRIPTOR {
    WORD  nSize;
    WORD  nVersion;
    DWORD dwFlags;
    BYTE  iPixelType;
    BYTE  cColorBits;
    BYTE  cRedBits;
    BYTE  cRedShift;
    BYTE  cGreenBits;
    BYTE  cGreenShift;
    BYTE  cBlueBits;
    BYTE  cBlueShift;
    BYTE  cAlphaBits;
    BYTE  cAlphaShift;
    BYTE  cAccumBits;
    BYTE  cAccumRedBits;
    BYTE  cAccumGreenBits;
    BYTE  cAccumBlueBits;
    BYTE  cAccumAlphaBits;
    BYTE  cDepthBits;
    BYTE  cStencilBits;
    BYTE  cAuxBuffers;
    BYTE  iLayerType;
    BYTE  bReserved;
    DWORD dwLayerMask;
    DWORD dwVisibleMask;
    DWORD dwDamageMask;
} PIXELFORMATDESCRIPTOR, *PPIXELFORMATDESCRIPTOR, FAR *LPPIXELFORMATDESCRIPTOR;

For the first part of the following discussion it may be useful to try out the OpenGL Shapes Demo — instructions.

To draw points, place vertex calls between glBegin and glEnd:

glBegin(GL_POINTS);
    /* glVertex* calls go here */
glEnd();

There are also other types of points. 3D vertices can be drawn similarly. The vertex functions are listed below:

glVertex2d, glVertex2f, glVertex2i, glVertex2s, ...

The postfix specifies the format of parameters used by each function:

Example: using vector type parameters.

double P1[2], P2[2], P3[2];

/* set values to P1, P2, P3. */
P1[0] = 1.5;
P1[1] = -0.3;
......

glBegin(GL_POINTS);
    glVertex2dv(P1);
    glVertex2dv(P2);
    glVertex2dv(&P3[0]); /* This is equivalent. */
glEnd();

Refer to the online manual for details.

Three different line primitives can be created:

GL_LINES draws a line segment for each pair of vertices.

GL_LINE_STRIP draws a connected group of line segments from vertex v0 to vn, connecting a line between each vertex and the next in the order given.
GL_LINE_LOOP is similar to GL_LINE_STRIP, except it closes the line from vn to v0, defining a loop.

glBegin(GL_LINE_LOOP);  /* make it a connected, closed line segment */
    /* vertices */
glEnd();

You may refer to Page 59 of the reference book to view the ten OpenGL primitive types.

The following code constructs a filled-in parallelogram on the x-y plane:

glBegin(GL_POLYGON);
    /* vertices of the parallelogram */
glEnd();

Refer to the online manual. The following draws a rectangle:

glRectf(0.0f, 0.0f, 1.0f, 1.0f);  // x0, y0, x1, y1: two opposite corners
                                  // of the rectangle.

It is the same as:

glBegin(GL_QUADS);
glVertex2f(0.0f, 0.0f);  // note 2D form
glVertex2f(1.0f, 0.0f);
glVertex2f(1.0f, 1.0f);
glVertex2f(0.0f, 1.0f);
glEnd();

Colors are selected with the glColor*() family of functions. For example, the glColor3f() function takes three floating-point values for the red, green and blue color to select. A value of 0 means zero intensity; a value of 1.0 is full intensity, and any value in between is a partial intensity. Refer to the Redbook for the ranges of other types like i, b, s, ui, ub, us.
// pass in three points, and a vector to be filled
void NormalVector(GLdouble p1[3], GLdouble p2[3], GLdouble p3[3], GLdouble n[3])
{
    GLdouble v1[3], v2[3], d;

    // calculate two vectors, using the middle point
    // as the common origin
    v1[0] = p2[0] - p1[0];
    v1[1] = p2[1] - p1[1];
    v1[2] = p2[2] - p1[2];
    v2[0] = p2[0] - p3[0];
    v2[1] = p2[1] - p3[1];
    v2[2] = p2[2] - p3[2];

    // calculate the cross-product of the two vectors
    n[0] = v1[1]*v2[2] - v2[1]*v1[2];
    n[1] = v1[2]*v2[0] - v2[2]*v1[0];
    n[2] = v1[0]*v2[1] - v2[0]*v1[1];

    // normalize the vector
    d = sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    n[0] /= d;
    n[1] /= d;
    n[2] /= d;
} // end of NormalVector

glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClearDepth(1.0f);

// once the clear color and clear depth values have been
// set, both buffers can be cleared; the following command
// is usually issued just before you begin to render a
// scene, usually as the first rendering step to a
// WM_PAINT message

// clear both buffers at the same time
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

It is good to get a feeling for where you can put points on the scene. The following instructions are meant to get you started (based on the program you completed last week). Modify the OpenGLView.cpp code:

glColor3f(1.0f, 0.0f, 0.0f);  // draw in red

Note: these values may vary slightly depending on the machine you are using.

glBegin(GL_LINE_LOOP);
glVertex2f(5.0f, 3.5f);
glVertex2f(5.0f, -3.5f);
glVertex2f(-5.0f, -3.5f);
glVertex2f(-5.0f, 3.5f);
glEnd();

Note: you will be looking directly down the z axis. The positive z axis is pointing towards you.

/3 - Draw a picture that contains at least three of the 10 OpenGL primitives.
/1 - Use at least 3 different colors
/2 - Create examples of dashed and solid lines
/3 - Use anti-aliasing to create smooth GL_POINTS (nice round circles)
/2 - Create and use a stipple pattern for the polygons
/1 - Use 3 different glVertex types (for instance, glVertex2i, glVertex3fv, glVertex3f)
/5 - Artistic impression.
(Lab instructor's discretion)

You can find sample code at:. If you get this error when you compile the sample code:

error C2381: 'exit' : redefinition; __declspec(noreturn) differs

try removing:

#include <stdlib.h>

Pay particular attention to: You might want to look up glPointSize in the Online OpenGL Manual.

Taken from pages 46, 47 and 64 of Fosner's book, OpenGL Programming for Windows 95 and Windows NT.
Learning objectives

Deep learning is an efficient technique to solve complex problems, and the "science" part of data science is all about experimenting with different settings and comparing results. Using Watson Studio, you can easily architect a neural network using a friendly GUI, download the model as code in your favorite framework's settings, and create experiments to compare different hyperparameter optimization settings. In this tutorial, we'll build a model that detects signature fraud by building a deep neural network. You will learn how to use Watson Studio's Neural Network Modeler to quickly prototype a neural network architecture and test it. You will also learn how to download code generated from Neural Network Modeler and modify it to plug in and work with Watson Studio's Experiments Hyperparameter Optimization.

The dataset contains images of signatures; some are genuine and some were simulated (fraud). The original source of the dataset is the ICFHR 2010 Signature Verification Competition [1]. Images are resized to 32×32 pixels and are stored as numpy arrays in a pickled format.

Prerequisites

- An IBM Cloud Account
- A running Cloud Object Storage Service Instance from the IBM Cloud catalog
- A running Machine Learning Service Instance from the IBM Cloud catalog
- A Watson Studio Service Instance from the IBM Cloud catalog
- Download the zip folder containing the assets

Estimated time

It takes approximately 1 hour to read and follow the steps in this tutorial.

Steps

1. Upload the dataset to IBM Cloud Object Storage

Before we start building our neural network, we'll need to upload files containing the data to our Object Storage instance on the cloud. To do that, unzip the assets folder you downloaded as a prerequisite, and make sure you can locate 3 different data files: training_data.pickle, validation_data.pickle and test_data.pickle. Go to your dashboard on IBM Cloud and click on the Cloud Object Storage instance under Services.
Select Create Bucket to store the data. This step will make it easier to find the data when working with Watson Studio's Neural Network Modeler. The bucket's name must be globally unique to IBM Cloud Object Storage. It is suggested to use your name and some sort of identifier for the project. Also, make sure that Cross Region Resiliency is selected, then click Create bucket. Start adding files to your newly created bucket. You can do that by clicking the Upload button in the top right corner of the page and selecting the Files option from the drop-down menu. Select the Standard Upload option and click the Select Files button. Choose the three files named training_data.pickle, validation_data.pickle and test_data.pickle from the unzipped assets folder on your local disk. You should see a dialog asking you to confirm the file selections. Click Upload to proceed with the upload process. Once the upload process is done, you should see the page updated and displaying the files you just uploaded.

2. Build a neural network using Watson Studio Neural Network Modeler

Select Create a project in Watson Studio. Select the Standard option on the following page. Name your project and associate a Cloud Object Storage instance. If you followed the previous step, your Object Storage instance should be detected and selectable from the dropdown. You're now ready to work with Watson Studio. Create a Modeler Flow. You can find this option under Add to project in the top right of the project page. Type a name for your model, select Neural Network Modeler, then click Create. Once the previous step is successful, you'll be presented with the Modeler Canvas. This is where you'll build your neural network, which will be represented in a graphical form instead of code. You'll find a sidebar on the left of the screen containing all possible components of neural networks, named Palette.
The whole idea will be dragging and dropping nodes representing the different layers of a neural network and connecting them to create a Flow. First, we need to provide data for our neural network. To do that, select Image Data from the Input section in the Neural Network Modeler Palette. Drag and drop the Image Data node on to the canvas, then double-click it to modify its properties. Notice that this will trigger another sidebar on the right. To define the data source, create a new connection to your Object Storage instance (COS) by clicking Create a New Connection under the Data section, or select a connection if one already exists. Choose the bucket that contains your data assets (refer to Step 1). Choose training_data.pickle as the Training data file, test_data.pickle as the Test data file and validation_data.pickle as the Validation data file. Now close the Data section and switch to the Settings section in the same right-side panel. Adjust all settings as described here and as shown in the screenshot below:

- Set Image height to 32
- Set Image width to 32
- Set Channels to 1, since the images are grayscale
- Set Tensor dimensionality to channels_last
- Set Classes to 2, since we are trying to classify signature images into 2 classes, genuine and fraud
- Set Data format to Python Pickle
- Set Epochs to 100; this is how many times the neural network will iterate over the data in order to learn more and adjust weights to reach better accuracy
- Set Batch size to 16; this is how many images will enter and go through the neural network at a time

Once you have all these settings in place, click Close to save them and to close the right sidebar. Now let's start building the neural network. The first layer we will add is a 2D convolutional layer. Select the Conv 2D node from the Convolution section in the left sidebar and drag and drop it on to the canvas.
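To get an intuition for the Epochs and Batch size settings, here is a rough back-of-the-envelope calculation. Note that the training-set size used below is made up purely for illustration; it is not the actual size of the signature dataset.

```python
import math

# hypothetical training-set size, for illustration only
num_train_images = 1600

batch_size = 16   # images per forward/backward pass, as set above
epochs = 100      # full passes over the training data, as set above

# each epoch processes every image once, in batches
steps_per_epoch = math.ceil(num_train_images / batch_size)
total_weight_updates = steps_per_epoch * epochs

print(steps_per_epoch, total_weight_updates)  # 100 10000
```

A smaller batch size means more (noisier) weight updates per epoch; a larger one means fewer, smoother updates but more memory per step.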
Note: This is a sample architecture, please feel free to try different or more advanced ones.

Connect the two nodes, then double-click the Conv 2D node to edit its properties. In the right sidebar, change the settings to the following:

- Set Number of filters to 32; this is the number of feature maps we want to detect in a given image
- Set Kernel row to 3; this is the width of the filter (think of a window) that will slide across an image and perform feature detection
- Set Kernel col to 3; this is the height of the filter
- Set Stride row to 1; this is the amount by which the filter will slide horizontally
- Set Stride col to 1; this is the amount by which the filter will slide vertically

Continue editing the Conv 2D node properties:

- Set Weight LR multiplier to 10; this is a value multiplied by the learning rate (which we will define later in the neural network hyperparameters). It is introduced to modify the learning rate value for each layer separately
- Set Weight decay multiplier to 1; this is a value multiplied by the decay (which we will define later in the neural network hyperparameters). It is introduced to modify the weight decay rate for each layer separately
- Set Bias LR multiplier to 10
- Set Bias decay multiplier to 1

We only edited the required parameters here. There are other optional parameters that have default settings, such as Initialization, which sets the initial weight values. You can set an initial Bias value and set whether it's trainable or not. You can choose a regularization method to minimize overfitting and enhance model generalization. This is a way to penalize large weights and focus on learning small ones, as they are lower in complexity and provide a better explanation for the data; thus, better generalization for the model. Once you have all these settings in place, click Close to save them and to close the right sidebar. Next, we'll add the third node, which is an activation layer.
We’ll choose ReLU (Rectified Linear Unit) as the activation function in our architecture. ReLU gives good results generally and is widely used in Convolutional Neural Networks. Drag and drop the ReLU node, you can find it under the Activation section in the left sidebar. Then we’ll add another Convolutional layer, drag and drop a Conv2D node found in the Convolution section in the left sidebar. Make sure you connect the nodes after dropping them on to the canvas. Double click on the second Conv2D node to trigger the right sidebar so we can edit its properties. Change the settings to the following: - Set Number of filters to 64 - Set Kernel row to 3 - Set Kernel col to 3 - Set Stride row to 1 - Set Stride col to 1 Continue editing the Conv2D node properties: - Set Weight LR multiplier to 1 - Set Weight decay multiplier to 1 - Set Bias LR multiplier to 1 - Set Bias decay multiplier to 1 Once you have all these settings in place, click Close to save them and to close the right sidebar. Add another Activation layer, drag and drop a ReLU node from the Activation section in the left sidebar. Now, we’ll add a Max Pooling layer, with the purpose of down-sampling or dimensionality reduction of the features extracted from the previous convolutional layer. This is achieved through taking the maximum value within specific regions (windows) that will slide across the previous layer’s output. This step helps aggregate many low-level features, extracting only the most dominant ones thus reducing the amount of data to be processed. Drag and drop a Pool 2D node from the Convolutional section in the left sidebar. Double-click Pool 2D node to edit its properties. Change the settings to the following: - Set Kernel row to 2 - Set Kernel col to 2 - Set Stride row to 1 - Set Stride col to 1 Once you have all these settings in place, click Close to save them and to close the right sidebar. Next, we’ll add a Dropout layer. 
This layer’s purpose is to help reduce overfitting, mainly by dropping or ignoring some neurons in the network randomly. Drag and drop a Dropout node from the Core section in the left sidebar. Double click the Dropout node and change its Probability to 0.25, then click Close. Now let’s move on to the fully connected layers. To do that, we need to flatten the output we have up till now into a 1D matrix. Drag and drop a Flatten node from Core section. Drag and drop a Dense node from Core section in the left sidebar. Double click the Dense node and change the number of nodes in the settings to 128 and click Close. Add another Dropout node. Change the Probability of this layer to 0.5 and click Close. Add a final Dense node. This will represent the output classes, two in this case. Double click the node and change the number of nodes in the settings to 2 and click Close. Now, let’s add an activation layer at the end of our architecture. We’ll use Softmax here, it’s commonly used in the last layer of a Neural Network. It returns an output in the range of [0, 1] to represent true and false values for each node in the final layer. Drag and drop a Softmax node to the canvas and connect it to previous nodes. Next, we’ll need to add a means to calculate the performance of the model. This is represented in the form of a cost function, which calculates the error of the model’s predicted outputs in comparison to the actual labels in the dataset. Our goal is to minimize the loss as much as possible. One of the functions that can be used for calculating the loss for a classification model is Cross-Entropy. Drag and drop a Cross-Entropy node from the Loss section in the left sidebar. We’ll add another node to calculate the accuracy of our model’s predctions. Drag and drop the Accuracy node from the Metrics section in the left sidebar. Connect both, the Cross-Entropy and the Accuracy nodes to the Softmax node. Both will perform calculations on the model’s output. 
Finally, we’ll add an optimization algorithm, which defines how the model will fine tune its parameters to minimize loss. There are many optimization algorithms, in this example, we will use and Adam optimizer. It is a generally well-functioning optimization algorithm and reaches the best results in less time. Drag and drop the Adam node from Optimizer section in the left sidebar. Double click on the Adam node and change its settings to the following: - Set the Learning rate to 0.001. - Set the Decay to 0. Decay changes the learning rate during training iterations, making it smaller as the model nears a convergence or the ground truth value. In Adam optimizer, we don’t need to set this value as Adam already changes the learning rate during training, i.e. it’s not a fixed value as in other optimization algorithms such as SGD. The reason behind the use of other parameters (Beta_1 and Beta_2) is that the Adam algorithm updates exponential moving averages of the gradient and its square, where these parameters control the exponential decay rates of these moving averages. The moving averages themselves are estimates of the 1st moment (the mean) and the 2nd raw moment (the uncentered variance) of the gradient [2]. The parameters Beta_1 and Beta_2 are already set to default values 0.9 and 0.999 which are good for computer vision problems. 3. Publishing a model from Neural Network Modeler and training it using Experiments Now that we have the full architecture of our Neural Network, let’s start training it and see how it performs on our dataset. You can do that directly from Watson Studio’s Neural Network Modeler’s interface. In the top toolbar, select the Publish training definition tab and click it. Name your model so you can identify it later. You will need to have a Watson Machine Learning Service (from IBM Cloud Catalog) associated to your project. If you don’t have one associated, you’ll be prompted to do that on the fly. In the given prompt, click the Settings link. 
You will be redirected to the project settings page. Here you can manage all services and settings related to your current project. Scroll down to the Associated Services section, click Add service and select Watson from the dropdown. You'll then be presented with all the available Watson services. Choose Machine Learning and click Add. If you don't already have a running Machine Learning service instance, choose the New option to create one on the fly. If you followed the prerequisites and already have a Machine Learning service instance, choose the Existing option. Select your service from the dropdown list. Now you'll be redirected back to your project settings page. Click Assets in the top bar to return to your main dashboard on Watson Studio. Scroll down to the Modeler flows section and choose the flow we have been working on, according to the name you gave it. Once your flow loads, click again on the Publish training definition tab in the top toolbar. Make sure you named your model, and that the Machine Learning service is detected and selected, then click Publish. Once publishing the training definition is done, you'll find a notification at the top of the screen showing you a link to train the model in Experiments; click on it. Now we'll start creating a new Experiment. Start by giving it a name, then select the Machine Learning service. Lastly, we need to define the source of our data and where we will store the training results. Choose the bucket containing your dataset by clicking Select. Since you have been following along, you should have an existing connection to Object Storage; select that from the dropdown list. Choose the bucket containing the dataset by choosing the Existing radio button and selecting the name of the bucket where you stored your data as in Step 1 in this guide. Then click Select at the bottom of the page. You'll be redirected back to the new Experiment details page to choose where to store your training results; click Select.
As in the previous step, choose Existing connection, select your Object Storage connection from the dropdown list, and select a bucket to store the results by choosing New bucket, with a name that is globally unique to IBM Cloud Object Storage. Finally, click Select at the bottom of the page.

Note: It's advisable to have two different buckets, one for storing your datasets and one for storing the training results.

Now, back on the new Experiment details page, click the Add training definition link on the right side of the page. Since we already published a training definition from Neural Network Modeler, we will select the Existing training definition option. Select the training definition from the dropdown list according to the name you gave it previously. Then click Select at the bottom of the page. Let's select the hardware option that will be used to train the model. If you're on a free-tier (Lite) account, you'll have access to the first option in the Compute Plan dropdown list. For the Hyperparameter Optimization Method, let's select None for now; we will be getting back to that in the next step when we create a training definition using Keras code instead of Neural Network Modeler. Finally, click Select at the bottom of the page. You'll be redirected to the new Experiment details page for the final time. Take a look at all the options and make sure everything is set, then click Create and run at the bottom of the page. Now training your model will start. You'll be presented with a view that gives you details about the training process and a means to monitor your model. Once your model is trained, it will be listed in the Completed section at the bottom of the page, with details about how it performed. If you're satisfied with your model, you can put it into production and start actually using it and scoring on new images! The first step for doing so is clicking the three vertical dots under the Actions tab and selecting Save Model.
You can find your saved model later in the Models section in the main dashboard of your Watson Studio project. From there you can deploy it as a REST API.

4. For Experts: Code your own model, train it in Experiments and leverage Hyperparameter Optimization

If you would like maximum control over your architectures, models and hyperparameters, you have the option to import your own code files into Experiments and train them there. Watson Studio Experiments provides the following benefits:

- Leverage powerful hardware on the cloud without any hardware-software configuration setup needed. Watson Studio provides you with Deep Learning as a Service (DLaaS).
- Run multiple experiments in parallel.
- Use hyperparameter optimization by giving Watson Studio a range of values for different hyperparameters. Watson Studio will create multiple variants of your model using different hyperparameter settings; it will package each variant of your model into a container and deploy it on a Kubernetes cluster. All the model variants will run in parallel, and Kubernetes will monitor the performance of each in real time. At the end of the training process, you can read the details about the performance of all the different runs, with details about which hyperparameters were used for each.
- Train with Distributed Deep Learning for compute-intensive training workloads and exponentially accelerate your training by splitting the workload across multiple servers. Note: This functionality is not presented in this guide, but you can read more about it in Watson Studio's documentation.

Let's start working with code. You can write your model from scratch, using supported deep learning frameworks such as TensorFlow, Keras, PyTorch and Caffe. Another option, which we will be doing here, is to let Watson Studio's Neural Network Modeler automatically generate code for you in your favorite framework and modify that further.
Open your Neural Network Flow we created earlier, and in the top toolbar, click the Download tab and select a framework from the dropdown list; in this guide, we will be using Keras Code. The code will be downloaded to your local machine inside a zip folder. You'll find two files, keras_helper.py and keras_driver.py.

Note: the names of the zip folder and the files contained inside may differ.

The code responsible for the model can be found in the keras_driver.py file, so open that in your favorite code editor. We'll make some modifications to the code to make it ready for plugging into Watson Studio's Experiments HPO. First, we'll import some libraries. In the imports section of your code, add the following lines:

import json
from os import environ
from emetrics import EMetrics

The emetrics import refers to a Python script with the same name; it's available in the assets folder provided with this guide and downloaded in the prerequisites. Emetrics is responsible for writing the model's results into the logs and formatting them in the way that the Watson Studio interface expects. Next, add the following block of code just beneath the imports section:

###############################################################################
# Set up working directories for data, model and logs.
###############################################################################

model_filename = "SignatureFraud.h5"

# writing the train model and getting input data
if environ.get('RESULT_DIR') is not None:
    output_model_folder = os.path.join(os.environ["RESULT_DIR"], "model")
    output_model_path = os.path.join(output_model_folder, model_filename)
else:
    output_model_folder = "model"
    output_model_path = os.path.join("model", model_filename)

os.makedirs(output_model_folder, exist_ok=True)

###############################################################################
# Set up HPO.
###############################################################################

config_file = "config.json"

if os.path.exists(config_file):
    with open(config_file, 'r') as f:
        json_obj = json.load(f)
    if "initial_learning_rate" in json_obj:
        learning_rate = json_obj["initial_learning_rate"]
    else:
        learning_rate = 0.001000
    if "batch_size" in json_obj:
        batch_size = json_obj["batch_size"]
    else:
        batch_size = 16
    if "num_epochs" in json_obj:
        num_epochs = json_obj["num_epochs"]
    else:
        num_epochs = 100
    if "decay" in json_obj:
        decay = json_obj["decay"]
    else:
        decay = 0.100000
    if "beta_1" in json_obj:
        beta_1 = json_obj["beta_1"]
    else:
        beta_1 = 0.900000
    if "beta_2" in json_obj:
        beta_2 = json_obj["beta_2"]
    else:
        beta_2 = 0.999000
else:
    learning_rate = 0.001000
    batch_size = 16
    num_epochs = 100
    decay = 0.100000
    beta_1 = 0.900000
    beta_2 = 0.999000

def getCurrentSubID():
    if "SUBID" in os.environ:
        return os.environ["SUBID"]
    else:
        return None

class HPOMetrics(keras.callbacks.Callback):
    def __init__(self):
        self.emetrics = EMetrics.open(getCurrentSubID())

    def on_epoch_end(self, epoch, logs={}):
        train_results = {}
        test_results = {}
        for key, value in logs.items():
            if 'val_' in key:
                test_results.update({key: value})
            else:
                train_results.update({key: value})
        #print('EPOCH ' + str(epoch))
        self.emetrics.record("train", epoch, train_results)
        self.emetrics.record(EMetrics.TEST_GROUP, epoch, test_results)

    def close(self):
        self.emetrics.close()

# Perform data pre-processing
defined_metrics = []
defined_loss = []

The main function of this block of code is to define a destination folder to store the model's results, save the trained model and write the logs. It is also responsible for grabbing data provided from Watson Studio's Experiments HPO interface. The hyperparameters provided in the interface are stored in a config.json file, which we use to extract hyperparameter values and store them in variables that will be accessed by the model later.
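The defaults-with-overrides pattern above can be expressed more compactly. The helper below is a hedged illustration of the same idea, not part of the generated keras_driver.py; the dictionary-based approach is just an alternative to the chain of if/else blocks:

```python
import json
import os

# fallback values, used when HPO does not supply an override
DEFAULTS = {"initial_learning_rate": 0.001, "batch_size": 16, "num_epochs": 100,
            "decay": 0.1, "beta_1": 0.9, "beta_2": 0.999}

def load_hyperparameters(config_file="config.json"):
    # start from the defaults, then overlay whatever HPO wrote into config.json
    params = dict(DEFAULTS)
    if os.path.exists(config_file):
        with open(config_file, "r") as f:
            overrides = json.load(f)
        params.update({k: v for k, v in overrides.items() if k in DEFAULTS})
    return params
```

Each HPO run receives a different config.json, so every variant of the model trains with its own hyperparameter combination while missing keys quietly fall back to the defaults.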
Important Note: Please make sure that you have no conflicting lines of code after this point redefining the hyperparameter variables or assigning them new values, as this will cause HPO to not work properly.

Find this block of code:

model_inputs = [ImageData_1]
model_outputs = [Softmax_12]
model = Model(inputs=model_inputs, outputs=model_outputs)

and replace all following code with this block:

# Starting Hyperparameter Optimization
hpo = HPOMetrics()

# Define optimizer
optim = Adam(lr=learning_rate, beta_1=beta_1, beta_2=beta_2, decay=decay)

# Perform training and other misc. final steps
model.compile(loss=defined_loss, optimizer=optim, metrics=defined_metrics)

if len(model_outputs) > 1:
    train_y = [train_y] * len(model_outputs)
    if len(val_x) > 0:
        val_y = [val_y] * len(model_outputs)
    if len(test_x) > 0:
        test_y = [test_y] * len(model_outputs)

# Writing metrics
log_dir = os.environ.get("LOG_DIR")
sub_id_dir = os.environ.get("SUBID")
static_path_train = os.path.join("logs", "tb", "train")
static_path_test = os.path.join("logs", "tb", "test")
if log_dir is not None and sub_id_dir is not None:
    tb_directory_train = os.path.join(log_dir, sub_id_dir, static_path_train)
    tb_directory_test = os.path.join(log_dir, sub_id_dir, static_path_test)
    tensorboard_train = TensorBoard(log_dir=tb_directory_train)
    tensorboard_test = TensorBoard(log_dir=tb_directory_test)
else:
    tb_directory_train = static_path_train
    tb_directory_test = static_path_test
    tensorboard_train = TensorBoard(log_dir=tb_directory_train)
    tensorboard_test = TensorBoard(log_dir=tb_directory_test)

if (len(val_x) > 0):
    history = model.fit(
        train_x,
        train_y,
        batch_size=batch_size,
        epochs=num_epochs,
        verbose=1,
        validation_data=(val_x, val_y),
        shuffle=True,
        callbacks=[tensorboard_train, tensorboard_test, hpo])
else:
    history = model.fit(
        train_x,
        train_y,
        batch_size=batch_size,
        epochs=num_epochs,
        verbose=1,
        shuffle=True,
        callbacks=[tensorboard_train, tensorboard_test, hpo])

hpo.close()
#print("Training history:" + str(history.history))

if (len(test_x) > 0):
    test_scores = model.evaluate(test_x, test_y, verbose=1)
    #print(test_scores)
    print('Test loss:', test_scores[0])
    print('Test accuracy:', test_scores[1])

model.save(output_model_path)
print("Model saved in file: %s" % output_model_path)

This should be the end of your code. What we have just introduced is a means of plugging the emetrics helper module into our model, so it can read the model's training history, write it to the logs and store it. It will also write a val_dict.json file that contains the accuracy at each step of the model training as a means to measure its performance. This val_dict.json file is used by Watson Studio's interface to display the performance of different training runs with different hyperparameters so you can compare them. If you want to check the full code in keras_driver.py for reference, you can find it here: keras_driver.py

Now, to start uploading your code to Watson Studio Experiments, you'll need to compress it into a zipped folder. You'll need the following files inside the folder:

- keras_driver.py (the recently modified file)
- keras_helper.py
- emetrics.py (provided in the assets for this guide)
- training_data.pickle (provided)
- validation_data.pickle (provided)
- test_data.pickle (provided)

Once you have all the files in place in a zipped folder, we can move on to Watson Studio to start the training process. In Watson Studio's main dashboard, scroll down to the Experiments section and click New experiment. Name your experiment, make sure that a Machine Learning service is selected and choose the buckets for your dataset and the training results as we did previously in Step 3. Click Add training definition on the right side of the screen. Choose the New training definition option, give it a name, and drag and drop the zip folder we prepared earlier with our code files and dataset.
Choose the framework to be used for training. In our case, we wrote our code for Keras, which runs on top of TensorFlow, so we will choose the TensorFlow option from the dropdown list. Write the following line of code as the execution command; it's similar to how you would execute the code from your terminal locally:

python3 keras-code/keras_driver.py

Choose the Compute plan, then for Hyperparameter optimization method choose RBFOpt, choose 100 for the Number of optimizer steps, choose accuracy for the Objective and choose maximize in the Maximize or minimize field. Finally, click Add hyperparameter. Let's add our first hyperparameter and give it a name.

Note: make sure the name matches exactly what was defined in config.json; you can refer to the block of code in keras_driver.py where we extracted the hyperparameter values.

Type in initial_learning_rate as the name of the hyperparameter. Choose the Distinct value(s) radio button, as we will be adding different values here. Finally, type in the values you want the model to pick from during training. Here I used the following values: 0.0001,0.0003,0.001. When you're done, click Add and Create another at the bottom of the screen. On to the next hyperparameter: type batch_size in the Name field. This time we will add a range of values with predefined steps, so choose the Range radio button. Type 8 for the Lower bound and 32 for the Upper bound. Choose Step for the method of Traverse and choose 8 for the Step value. Finally, click Add and Create another. Let's add the last hyperparameter: type num_epochs in the Name field, choose the Distinct value(s) radio button and type in 100,200 in the Value(s) field. Finally, click Add at the bottom of the page. If you want to add other hyperparameters (like decay, beta_1, beta_2, dropout_1, dropout_2) you can click Add and Create another as we did previously.
Back to the training definition details screen, take a look at all settings and make sure you provided all needed details, then click Create at the bottom of the page. You can always add more training definitions, and all these will run in parallel. You'll be redirected back to the new Experiment details page; confirm that all settings are in place and click Create and run at the bottom of the page. Watson Studio will create different training runs by using the different values you provided for hyperparameters for each training run. They will run in parallel and the progress will be shown in real time. Once the training is completed, all runs will be listed with details on how they performed. By heading to Compare Runs in the top bar, you can get an overview of how each model performed and what hyperparameters were used for that model. You can also view a graph of each model's history and performance across all training iterations. If you're satisfied with a certain model's performance, you can save it as we saw previously so it can be ready for deployment and scoring.

Summary

In this tutorial, you learned about the powerful deep learning tools available to you on Watson Studio. You learned how to quickly prototype a neural network architecture using Neural Network Modeler. You learned how to publish the Neural Network Flow and train it using Experiments. You also learned how to import your own code, train it, optimize it and monitor its performance using Experiments and HPO.

References

Marcus Liwicki, Muhammad Imran Malik, Linda Alewijnse, Elisa van den Heuvel, Bryan Found. "ICFHR2012 Competition on Automatic Forensic Signature Verification (4NsigComp 2012)", Proc. 13th Int. Conference on Frontiers in Handwriting Recognition, 2012.

Diederik P. Kingma, Jimmy Ba. "Adam: A Method for Stochastic Optimization", 3rd International Conference for Learning Representations, San Diego, 2015.
IBM Watson Studio Documentation on Deep Learning IBM Watson Studio: Coding guidelines for deep learning programs
https://developer.ibm.com/tutorials/create-and-experiment-with-dl-models-using-nn-modeler/
(This is obsolete: there is now a "Check All" button above the bookmark list.) Are you: If you said, "YES!" to any or all of these questions, then just copy the code below to your computer (make sure you have Python), fill in the values for your homenode id and login cookie where indicated, and experience freedom! I make it sound easy but it's really convoluted. Go to your homenode. Examine the "(edit user information)" link, either by mouse-over and observation of the status bar, or by right-click -> Properties -> Address. Whichever way you do it, you want to find something that looks like "". The portion I have emphasized (the node id) is what you want to assign to YOUR_HOMENODE_ID in the script. This varies between browsers, but the idea is to find a list of the cookies on your machine, and then locate the "userpass" cookie set by everything2.com. With Firefox 3, this can be found under the Tools menu -> Options... -> Privacy (tab) -> Show Cookies... (button). Then find everything2.com in the list and click the little plus sign. Then click on the line reading: "everything2.com ... userpass" and at the bottom of the window you should see something like "Content: yourname%cryptjunk"—this is what you should put into LOGIN_COOKIE. 
import urllib2, re

### FILL THESE VALUES IN ###
YOUR_HOMENODE_ID = 12345
LOGIN_COOKIE = 'yourname%cryptjunk'

# get all the bookmarks to remove
url = '' % YOUR_HOMENODE_ID
headers = {'Cookie': 'userpass=' + LOGIN_COOKIE}
req = urllib2.Request(url, None, headers)
html = urllib2.urlopen(req).read()
# file('removeBookmarks.get.htm', 'w').write(html)  # for debugging

nodes_to_unbookmark = map(int, re.findall(r'"unbookmark_(\d+)"', html))
print 'found', len(nodes_to_unbookmark), 'nodes to unbookmark'

# create and submit the unbookmark request
url = ''
post_data = 'displaytype=edit&node_id=%i' % YOUR_HOMENODE_ID
for id in nodes_to_unbookmark:
    post_data += '&unbookmark_%i=1' % id
req = urllib2.Request(url, post_data, headers)
html = urllib2.urlopen(req).read()
# file('removeBookmarks.set.htm', 'w').write(html)  # for debugging
print 'done!'
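The two core steps of the script, scraping the node ids out of the edit page and building the unbookmark POST body, can be exercised in isolation. Here is a small Python 3 sketch; the sample HTML below is illustrative, not real Everything2 markup:

```python
import re

# Illustrative HTML standing in for the homenode edit page.
html = '<input id="unbookmark_101"> <input id="unbookmark_202">'

# Same extraction the script performs: collect every unbookmark_<id>.
nodes_to_unbookmark = [int(n) for n in re.findall(r'"unbookmark_(\d+)"', html)]

# Same POST body construction: one unbookmark_<id>=1 pair per node.
post_data = 'displaytype=edit&node_id=%i' % 12345
for node_id in nodes_to_unbookmark:
    post_data += '&unbookmark_%i=1' % node_id

print(nodes_to_unbookmark)  # → [101, 202]
print(post_data)
```

The regex depends on the ids appearing inside double quotes, which is why it is written with the quotes as part of the pattern.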
https://everything2.com/title/How+to+Remove+Your+Bookmarks+%2528all+of+them%252C+and+with+Python%2529
Python auto-indent

- Michael Vincent last edited by Michael Vincent

A search turns up a lot on this, but I'm looking for a definitive answer. I didn't code in Python much at all and do now, so I don't remember if this ever "worked", or if it broke, or I broke it with some configuration.

if test==true:  # press enter now
next line goes here

I would "expect":

if test==true:  # press enter now
    next line goes here

Notice the indent, based on my indent settings, which for Python are replace tabs with spaces equal to "4". It's the "auto indent" that doesn't seem to be working in Python, which does work in Perl, C, C++, etc. (when adding the curly braces { }). I see there is a Python Indent plugin. Is that what Python programmers use in N++, or should this be handled "natively"? Cheers.

- Alan Kilborn last edited by

Well, not an answer, but merely my 2c: I do a LOT of Python. With Notepad++. And I don't use the Python Indent plugin. And to be brutally honest: I had to go and actually check what my Notepad++ does in that situation. And it doesn't auto-indent for me. And apparently that has never bothered me. I just hit Tab and move on with my life. :-)

- Ekopalypse last edited by

I can just confirm what @Alan-Kilborn said; it doesn't work for Python. I'm using a modify callback to make it happen, something like: if the previous line ends with a colon but is not a comment, indent += 4 based on the previous line. Not sure it works 100% but good enough for me. Let me know if you want to have a look. I can't post it right now, as I would have to refactor it since I use it in a bigger IDE-like_features_class with linting etc.

- Michael Vincent last edited by

@Ekopalypse Thanks. I tried the Python Indent plugin and it works fine, produces the expected results. I can live with that plugin working; the source code is simple enough. Cheers.

- Alan Kilborn last edited by

@Ekopalypse said in Python auto-indent:

Let me know if you want to have a look.

Sure I do!
:)

- Ekopalypse last edited by

Note, I do have autoindent disabled in order to make this work. Because I know you are doing Python, I haven't refactored anything, but I guess it's mostly getting rid of self to make this work in a non-class way. Concerning the todo, no, I haven't looked at it since then.

def __init__(self):
    self.excluded_styles = [1, 3, 4, 6, 7, 12, 16, 17, 18, 19]

def on_modified(self, args):
    # TODO: text == \r even if \r\n is assumed, but length is 2, is this a bug?? Or am I doing something wrong?
    if args['modificationType'] & 0x100000 == 0x100000 and args['text'] in ['\r', '\n']:
        text = '\r\n' if args['length'] == 2 else args['text']
        self._indent(args['position'], text)

def _indent(self, position, text):
    if self.is_python and self.auto_indent:
        indent = editor.getLineIndentation(editor.lineFromPosition(position))
        if (editor.getCharAt(position-1) == 58 and  # 58 == :
                editor.getStyleAt(position-1) not in self.excluded_styles):
            tabwidth = editor.getTabWidth()
            text += ' '*(indent//tabwidth+1)*tabwidth
        else:
            text += ' '*indent
        editor.changeInsertion(text)
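Ekopalypse's rule (previous line ends with a colon and is not a comment, so indent one level deeper) can be sketched in plain Python, outside the Notepad++/Scintilla API. The function name and the naive '#' stripping below are illustrative assumptions; the comment stripping would misfire on a '#' inside a string literal, which is what the style check in the real callback avoids:

```python
def next_indent(prev_line, tab_width=4):
    """Sketch of the auto-indent rule discussed above: if the previous
    line ends with ':' (ignoring trailing comments and whitespace),
    indent one level deeper; otherwise keep the previous indentation."""
    indent = len(prev_line) - len(prev_line.lstrip(' '))
    code = prev_line.split('#', 1)[0].rstrip()  # naive comment stripping
    if code.endswith(':'):
        return indent + tab_width
    return indent

print(next_indent('if test == True:'))    # → 4
print(next_indent('    x = 1'))           # → 4
print(next_indent('# just a comment:'))   # → 0
```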
https://community.notepad-plus-plus.org/topic/18845/python-auto-indent
April 28, 2020

2020 TCO Algorithm Round 1B Editorials

The second round of TCO 2020 was, by intention, quite a bit easier than a usual round (even for Division 2). Indeed, with the top 250 people receiving a bye and another 750 already qualified, the problem set was adjusted for the people who had still not qualified, providing an opportunity to solve 2 or even 3 problems in the 75 minutes of the competition. With the number of total registrants for the official match being actually less than 750, this seemed like a good call, as basically anyone with a positive score would qualify. On the other hand, the nearly 200 people in the parallel round were going to face a *very* easy problem set, which may lead to unrealistic rating changes, but c'est la vie, I guess. Let's see if anyone manages to complete it in less than 10 minutes!

EllysCandies

This was a fun problem. In the end it turns out that the initial values of the boxes don't matter – the winner is always the person who takes the last one. Why is this true? Whenever you take a box, all of the candies in it are added to *each* remaining box. Thus, the next box someone takes contains strictly more than the one we just took. In fact, if the game has progressed already and each of the players has taken several boxes, the candies of *all* of them are contained in *each* of the remaining ones. Thus, each box contains more candy than the total score of each player (even the sum of their scores). So, the person who takes the last box, in that single turn, actually takes more candy than all the candy taken in the entire game so far. So, it only matters who takes the last box. So, the order in which the girls take the boxes doesn't matter, so *any* greedy solution would work! Fun, right? Of course, the most trivial solution would be something along the lines of:

public class EllysCandies {
    public String getWinner(int[] boxes) {
        return boxes.length % 2 == 1 ? "Elly" : "Kris";
    }
}

One final touch – the constraints of 20 could actually deceive some of the better contestants into thinking that the problem needs to be solved by bitmask DP (which it can be, due to the fact that almost any solution solves it). Still, this was mostly because the sum of the scores otherwise overflows 32-bit integers, which we didn't want.

EllysWhatDidYouGet

This problem was inspired by a recent running joke – where would you travel to next. The list contains several countries, but at number 9 is "Stay at home". Anyhow, for this problem we didn't have to do anything fancy. The constraints were low enough to brute-force all triplets (X, Y, Z) and count which of them work for all initial numbers in [1, N]. Initially the constraints for X, Y, and Z were up to 100, which required adding a break in the innermost cycle so it doesn't time out, but with the current constraints even this is not necessary. Implementation-wise, the solution could look like this:

public class WhatDidYouGet {
    private int eval(int num, int X, int Y, int Z) {
        int sum = 0;
        for (num = ((num * X) + Y) * Z; num > 0; num /= 10) {
            sum += num % 10;
        }
        return sum;
    }

    public int getCount(int N) {
        int ans = 0;
        for (int X = 1; X <= 50; X++) {
            for (int Y = 1; Y <= 50; Y++) {
                for (int Z = 1; Z <= 50; Z++) {
                    boolean okay = true;
                    int last = eval(1, X, Y, Z);
                    for (int i = 2; i <= N; i++) {
                        if (last != eval(i, X, Y, Z)) {
                            okay = false;
                            break;
                        }
                    }
                    if (okay) {
                        ans++;
                    }
                }
            }
        }
        return ans;
    }
}

This was supposed to be the 250 of the set, but due to the consideration of having more people with positive scores we decided to move it as a Medium.

EllysDifferentPrimes

There are two main things to this problem – one is to check if a number consists only of different digits, and one is to check if a number is prime.
The first one can be done in various ways efficiently, but here's an example implementation:

private boolean isDifferent(int num) {
    if (num < 0)
        return false;
    for (int mask = 0; num > 0; num /= 10) {
        if ((mask & (1 << (num % 10))) != 0)
            return false;
        mask |= (1 << (num % 10));
    }
    return true;
}

The second one was to check if a number is prime or not. For this we need a somewhat efficient implementation – the standard one going up to the square root of the number is good enough, as it turns out. There exist even faster ones (either Eratosthenes' sieve or probabilistic ones), but for this problem they weren't needed. So, we can use the following function:

private boolean isPrime(int num) {
    if (num < 2)
        return false;
    for (int i = 2; i * i <= num; i++)
        if (num % i == 0)
            return false;
    return true;
}

Finally, since the constraints were low enough, we could actually brute-force the answer – for each candidate (both lower and larger than N) we test whether it is both different and prime, and if it is, directly return it:

public int getClosest(int N) {
    for (int cand1 = N, cand2 = N; ; cand1--, cand2++) {
        if (isDifferent(cand1) && isPrime(cand1)) return cand1;
        if (isDifferent(cand2) && isPrime(cand2)) return cand2;
    }
}

Since we start from N, we guarantee that we have found the closest one. Since we check the lower candidate first, we guarantee we handle the case with two numbers at equal distance. With the cycles above we get to the first numbers with different digits in about 1,000,000 iterations (one of the bad cases is, for example, N = 44,500,000) and after that we find a prime such number relatively quickly (the chance of a number around N being prime is roughly 1 / ln(N)). This problem was initially set with constraints for N up to 10^12. In that case we needed to generate the candidates in a smarter way – we can do it with a DFS and sort them, or with a BFS and avoid the sorting.
The solution with DFS actually was timing out, thus the initial solution was much harder than this one. After we have a sorted list of numbers with different digits, we needed to find the approximate position of N and go in both directions to test if each of the numbers are prime. Since prime numbers are relatively common, this solution was finding the answers quickly enough. One tricky test case here is that the answer for N = 50,000,000 is 50,123,467 – a number over 50,000,000. So, if the contestants generated the numbers only up to 50,000,000 they would fail. Let’s see if many people fall for this! espr1t Guest Blogger
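A quick way to sanity-check the editorial is a direct Python port of the three Java functions above; running get_closest(50000000) should reproduce the 50,123,467 mentioned in the tricky test case (that expected value is the editorial's claim, not independently verified here):

```python
def is_different(num):
    # True if no decimal digit repeats (port of isDifferent above).
    seen = 0
    while num > 0:
        d = num % 10
        if seen & (1 << d):
            return False
        seen |= (1 << d)
        num //= 10
    return True

def is_prime(num):
    # Trial division up to sqrt(num) (port of isPrime above).
    if num < 2:
        return False
    i = 2
    while i * i <= num:
        if num % i == 0:
            return False
        i += 1
    return True

def get_closest(n):
    # Scan outward from n, checking the lower candidate first so that
    # ties resolve to the smaller number (port of getClosest above).
    lo = hi = n
    while True:
        if is_different(lo) and is_prime(lo):
            return lo
        if is_different(hi) and is_prime(hi):
            return hi
        lo -= 1
        hi += 1

print(get_closest(10))  # → 7
print(get_closest(50000000))
```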
https://www.topcoder.com/2020-tco-algorithm-round-1b-editorials/
Using IPFS with your website

Though it is not required, it is strongly recommended that websites hosted on IPFS use only relative links, unless linking to a different domain. This is because data can be accessed in many different (but ultimately equivalent) ways:

- From your custom domain:
- From a gateway:
- By immutable hash:

Using only relative links within a web application supports all of these at once, and gives the most flexibility to the user. The exact method for switching to relative links, if you do not use them already, depends on the framework you use.

Angular, React, Vue

These popular JavaScript frameworks are covered in a blog post from Pinata. They can be fixed with minor config changes.

Gatsby

Gatsby is a JavaScript framework based on React. There is a plugin for it that ensures links are relative.

Jekyll

Add a file _includes/base.html with the contents:

{% assign base = '' %}
{% assign depth = page.url | split: '/' | size | minus: 1 %}
{% if depth <= 1 %}{% assign base = '.' %}
{% elsif depth == 2 %}{% assign base = '..' %}
{% elsif depth == 3 %}{% assign base = '../..' %}
{% elsif depth == 4 %}{% assign base = '../../..' %}{% endif %}

This snippet computes the relative path back to the root of the website from the current page. Update any pages that need to link to the root by adding this at the top:

{%- include base.html -%}

Then prefix any links with {{base}}. So, for example, we would change href="/css/main.css" to href="{{base}}/css/main.css".

Generic

For other frameworks, or if a framework was not used, there's a script called make-relative that will parse the HTML of a website and automatically rewrite links and images to be relative.
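The depth arithmetic in the Jekyll snippet can be mirrored in a few lines of Python, which makes the URL-to-prefix mapping easy to check. The function name and examples are illustrative; note that Liquid's split drops trailing empty segments, so this sketch only matches page URLs that end in a filename rather than a trailing slash:

```python
def base_prefix(page_url):
    # Mirrors the Liquid logic above: depth = (segments of page.url) - 1,
    # then emit depth-1 ".." components ("." at the site root).
    depth = len(page_url.split('/')) - 1
    if depth <= 1:
        return '.'
    return '/'.join(['..'] * (depth - 1))

print(base_prefix('/index.html'))      # → .
print(base_prefix('/blog/post.html'))  # → ..
print(base_prefix('/a/b/c.html'))      # → ../..
```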
https://developers.cloudflare.com/web3/ipfs-gateway/reference/updating-for-ipfs/
Out of curiosity, what did you think about the license of OOPS? While I can understand the author's intent, the license is a bit, er, cumbersome. Actually, let me be blunt: the license is bloody awful and there is simply no way I would ever use that software. For example, why are "Internet Journals" excluded from the license? Does that mean they don't have to abide by it, or does it mean that the author doesn't want them using the code? Also, I can't use it if I've applied for a software patent, unless such application is a "defensive move." Well, fine. I'll simply announce that all of my patents (if I had any) are "defensive moves." And while we're at it, what's this bit about "Entities that share a 5% or more common ownership interest with an excluded entity are also excluded from this license."? So if I own a mutual fund and the fund manager temporarily bumps me over the threshold, I'm not allowed to use the code? Or does "excluded from the license" mean I don't have to abide by it? One of the core principles of economics (something that many economists seem to forget) is that there is a tradeoff between efficiency and fairness. Trying to go too far to either extreme tends to be a bad idea. In this case, while I laud the author's intent, the implementation of the license is terrible and, I'm sure, unenforceable. However, it's bizarre and restrictive enough that I simply can't use that code. Cheers, Ovid

New address of my CGI Course.

In reply to Re^2: "OOPS" should describe its license, not its namespace by Ovid in thread OO concepts and relational databases by dragonchild
http://www.perlmonks.org/?parent=379365;node_id=3333
I am looking for a tool to be able to deploy Adobe Reader 9 across the network. Thanks.

4 Replies

Sep 30, 2008 at 7:23 UTC
If you're running AD, it can be deployed through a GPO, though it's more painful than most programs to deploy. Take a look here: http:/

Oct 1, 2008 at 7:19 UTC
Check this link out: lots of good deployment info at that site – I have used it extensively. I use patch update software (PatchAuthority by ScriptLogic), so I no longer deploy Adobe Reader through a GPO, though it did work when I did (after lots of trial and error!)

Oct 2, 2008 at 4:01 UTC
If using group policy and you have branch offices, I would recommend testing a branch machine (via remote desktop if you have to) just to time it. If it takes more than 3 minutes for the user to finally get logged in, they'll probably let you know how they feel about it... This is especially true if it's trying to copy the installer across a WAN...

Oct 2, 2008 at 4:07 UTC
In WAN cases you'll want to implement DFS Namespaces and Replication. That way in your software deployment policy you can specify \\domain\dfsnamespace\share, and the client computer will use AD sites to decide the closest server to get the installation from. Let me know if you need more info on it; Win2k3 R2 and Win2k8 have some great capabilities with replicating file shares.
https://community.spiceworks.com/topic/24381-need-tool-to-deploy-adobe-acrobat-reader-9
TakeScreenshot

From Unify Community Wiki

Description

Captures sequentially numbered screenshots when a function key is pressed. Existing screenshots are not overwritten.

Usage

Just attach this script to an empty game object.

C# - TakeScreenshot.cs

// TODO:
// By default, screenshot files are placed next to the executable bundle -- we don't want this in a
// shipping game, as it will fail if the user doesn't have write access to the Applications folder.
// Instead we should place the screenshots on the user's desktop. However, the ~/ notation doesn't
// work, and Unity doesn't have a mechanism to return special paths. Therefore, the correct way to
// solve this is probably with a plug-in to return OS specific special paths.
// Mono/.NET has functions to get special paths... see discussion page. --Aarku

using UnityEngine;
using System.Collections;

public class TakeScreenshot : MonoBehaviour
{
    private int screenshotCount = 0;

    // Check for screenshot key each frame
    void Update()
    {
        // take screenshot on up->down transition of F9 key
        if (Input.GetKeyDown("f9"))
        {
            string screenshotFilename;
            do
            {
                screenshotCount++;
                screenshotFilename = "screenshot" + screenshotCount + ".png";
            } while (System.IO.File.Exists(screenshotFilename));

            Application.CaptureScreenshot(screenshotFilename);
        }
    }
}
http://wiki.unity3d.com/index.php?title=TakeScreenshot&amp;diff=prev&amp;oldid=2597
Last Updated on August 28, 2020.

Kick-start your project with my new book Deep Learning for Time Series Forecasting, including step-by-step tutorials and the Python source code files for all examples. Let's get started.

- Update May/2017: Fixed bug in invert_scale() function, thanks Max.
- Updated Apr/2019: Updated the link to the dataset.

Download the dataset to your current working directory with the name "shampoo-sales.csv".

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

The example also prints the RMSE of all forecasts. The model shows an RMSE of 71.721 monthly shampoo sales, which is better than the persistence model that achieved an RMSE of 136.761 shampoo sales.

This is all fine and well, but how do you do a true step-ahead forecast BEYOND the dataset? Every blog I see just shows comparison graphs of actual and predicted. How do you use this model to simply look ahead x number of periods past the data, with no graphs? Thanks!

Hi Shawn, perhaps the following may help clear up some confusion with a slightly different approach.

I am wondering, if I have predictions (not good predictions) and refit without validating with the test set, wouldn't it give false predictions again?
It may; design an experiment to test the approach.

Hi Jason, thanks for this blog. Can you please share an example of a multi-step-ahead forecast using the recursive prediction technique with an LSTM? Thanks, Shifu

Thanks for the suggestion. I have an example of a direct model for multi-step forecasting here:

Hello Peter, you might want to check out the X-11 method to separate the trend, seasonal, and random change in your sequence. Then apply an algorithm to each part. You can look at the following article: Study of the Long-term Performance Prediction Methods Using the Spacecraft Telemetry Data, from Hongzeng Fang (sorry, but I can't find a free download page anymore).

Thanks for the tip.

Hey Jason, in the case of taking the last observation in the dataset in order to predict the next future timestep (for which we don't have current values), does the model use only that 1 observation to predict the next? Or, because it's an LSTM, does it use both the memory from all the preceding observations during training plus the last observation in order to predict the next timestep? If it's the latter, it would make more sense to me. Otherwise I can't see the sense in using nothing but 1 observation to predict one step ahead and guarantee any type of accuracy.

The way the model is defined, an input sequence is provided in order to predict one time step. You can define the mapping any way you wish for your problem, though.

Hi Jason. It is really a nice example of using an LSTM. I'm working on it. I also have the same question: are you going to give an example on multivariate time series forecasting? I'm struggling with it.

I have an example here:

Hi Jason: in yhat = model.predict(X), does X mean the current values, and yhat the predicted value?

Yes, X is the input required to make one or more predictions and yhat are the predictions.

Hi Jason, just got one question related to prediction.
Since in your example you only did prediction on test data (known), you have the raw data to invert the difference. What about the prediction for future unknown data; how can we do the difference inversion?

Yes, inverting the difference for a prediction only requires the last known observation.

Hi again Jason! Brilliant tutorial once again. I believe many people are asking about model.predict() because it's not really working as expected. The first problem is that doing:

yhat = model.predict(X)

with the code example previously given returns:

NameError: name 'model' is not defined

As I understand, this is because the model is created under the name "lstm_model" instead of "model", so using:

yhat = lstm_model.predict(X)

works, but returns:

ValueError: Error when checking : expected lstm_1_input to have 3 dimensions, but got array with shape (1, 1)

So, personally, what I have done is use the "forecast_lstm" function, this way:

yhat = forecast_lstm(lstm_model, 1, X)
print(yhat)
0.28453988

which actually returns a value. Now the next problem is that the X is nothing else than the last X of the example, as I never redefine it. I found that the number of functions and filters applied to the training data is quite big, hence I need to replicate them to make the shape match. This is my original training data.
series = read_sql_query(seleccion, conn, parse_dates=['creacion'], index_col=['creacion'])
print(series)
sys.exit()

                     menores
creacion
2018-06-17 03:56:11      0.0
2018-06-17 03:54:03      2.0
2018-06-17 03:52:11      4.0
2018-06-17 03:50:05      6.0
2018-06-17 03:48:17      4.0
2018-06-17 03:46:04      4.0
2018-06-17 03:44:01      4.0
2018-06-17 03:43:05      1.0
2018-06-17 03:40:12      2.0
2018-06-17 03:38:12      0.0
2018-06-17 03:36:21      4.0
2018-06-17 03:34:32      4.0
2018-06-17 03:32:05      3.0
2018-06-17 03:30:01      2.0
2018-06-17 03:28:23      1.0
2018-06-17 03:26:17      3.0
2018-06-17 03:24:04      0.0
2018-06-17 03:22:34      4.0
2018-06-17 03:20:04      2.0
2018-06-17 03:18:18      2.0
2018-06-17 03:16:00      3.0
2018-06-17 03:14:06      6.0
2018-06-17 03:12:06      4.0
2018-06-17 03:10:04      2.0
2018-06-17 03:08:02      0.0
2018-06-17 03:06:02      4.0
2018-06-17 03:04:02      4.0
2018-06-17 03:02:10      3.0
2018-06-17 03:00:22      4.0
2018-06-17 02:59:13      3.0
...

[7161 rows x 1 columns]

Then, this process is applied to "series":

# transform data to be stationary
raw_values = series.values
diff_values = difference(raw_values, 1)

# transform data to be supervised learning
supervised = timeseries_to_supervised(diff_values, 1)
supervised_values = supervised.values

If you print "supervised_values" at that point, the original data has been transformed to:

[[0 array([4.])]
 [array([4.]) array([-3.])]
 [array([-3.]) array([1.])]
 ...
 [array([2.]) array([4.])]
 [array([4.]) array([-3.])]
 [array([-3.]) array([-1.])]]

which is clearly less and more condensed information. Therefore, if I try to apply yhat = forecast_lstm(lstm_model, 1, X) after the new data has been loaded:

predecir = read_sql_query(seleccion, conn, parse_dates=['creacion'], index_col=['creacion'])
# clear the cache
conn.commit()
print(predecir)
print(X)
yhat = forecast_lstm(lstm_model, 1, X)
# ynew = ynew[0]
print(yhat)

I get the following error:

AttributeError: 'DataFrame' object has no attribute 'reshape'

So I'm kinda lost on how to actually apply the same structure to the new data before being able to make the new prediction!
I'll paste my source code in case someone needs it. You'll see that I'm loading the data directly from MySQL, and also splitting the training and test data with a different approach from a previous example given in the blog! I'm not sure either about how to make the next prediction! Thanks a lot for this blog once again… I wouldn't even be trying this if it wasn't for your help and explanations!!! My best wishes, Chris

Thanks for sharing.

I have solved this the same way as suggested in this Stack Overflow post: I am no longer directly invoking the reshape method on the pandas training and test DataFrames, but indirectly through their values:

train = train.values.reshape(train.shape[0], train.shape[1])
test = test.values.reshape(test.shape[0], test.shape[1])

I'm happy to hear that.

Hi Jason, I also started with machine learning and have a similar kind of doubt. In one-step forecasting we can only get one future time step observation, correct? And to get the prediction I have provided the last input observation, and then the value obtained from model.predict(X) has to be scaled and inverted again, correct? PFB the code:

X = test_scaled[3, -1:]  # my last observation
yhat = forecast_lstm(lstm_model, 1, X)
yhat = invert_scale(scaler, X, yhat)
yhat = inverse_difference(raw_values, yhat, 1)
print(yhat)

Can you please guide me if I am going the right way?

Yes, one-step forecasting involves predicting the next time step. You can make multi-step forecasts; learn more in this post:

Yes, to make use of the prediction you will need to invert any data transforms performed, such as scaling and differencing.

Thank you Jason.
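The point that inverting the difference only needs the last known raw observation is easy to see in a toy sketch of the differencing step discussed above (simplified signatures, not the tutorial's exact helpers):

```python
def difference(series, interval=1):
    # Remove trend by subtracting the observation `interval` steps back.
    return [series[i] - series[i - interval] for i in range(interval, len(series))]

def inverse_difference(last_ob, diff_value):
    # A single differenced prediction is inverted with just the last raw value.
    return last_ob + diff_value

raw = [10, 12, 15, 14]
print(difference(raw))                 # → [2, 3, -1]
print(inverse_difference(raw[-1], 4))  # → 18
```

So forecasting one step beyond the dataset only requires keeping the final raw observation around, not the whole series.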
Hi Jason, that being said, I have another clarification. When I forecast the next time step using this model, using the code below:

X = test_scaled[3, -1:]  # my last observation
yhat = forecast_lstm(lstm_model, 1, X)
yhat = invert_scale(scaler, X, yhat)
yhat = inverse_difference(raw_values, yhat, 1)
print(yhat)

let yhat be the prediction of the future time step. Can I use the result of yhat as input to the same model to predict one more step ahead in the future? Is this what we call the recursive multi-step forecast?

Yes. This is recursive.

Hi Jason, can I use the code above with the recursive multi-step forecast, e.g. the yhat value can be used as an input to the same model again to get the next future step, and so on?

Hi Jason, in predictive analytics using this ML technique, how many future steps should we be able to predict? Is there any ideal forecasting range into the future? E.g. if I have data for the last 10 days or so and I want to forecast the future, the fewer future time steps that are set, the better the result, as the error will be minimal, right? Can I use the same code for predicting time series data in production for network traffic for the next 3 days? The requirement given to me was to predict the network bandwidth for the next entire week, given the data for the past 1 year. Your comments and suggestions are always welcome 🙂 Regards, Arun

It depends on the problem and the model. Generally, the further in the future you want to predict, the worse the performance of the model.
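The recursive multi-step strategy described in this exchange, predict one step and then feed the prediction back in as the next input, can be sketched generically. The toy one-step model below is a stand-in for forecast_lstm plus the inverse transforms, not the tutorial's actual model:

```python
def recursive_forecast(predict_one, last_obs, n_steps):
    """Sketch of the recursive multi-step strategy discussed above:
    each one-step prediction becomes the input for the next step."""
    preds = []
    x = last_obs
    for _ in range(n_steps):
        yhat = predict_one(x)
        preds.append(yhat)
        x = yhat  # feed the prediction back in
    return preds

# Toy stand-in "model": next value = current value + 1.
print(recursive_forecast(lambda v: v + 1, 10, 3))  # → [11, 12, 13]
```

This also illustrates why recursive forecasts degrade with horizon: any error in yhat is carried into every subsequent input.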
The issue was with the text in the footer, 'Sales of shampoo over a three year period'. Either delete the footer OR change the line:

series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)

to:

series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser, skipfooter=2)

I think that error is caused by the line of descriptive text at the bottom of the shampoo sales data file. If you see this comment at the bottom of the data file, remove it:

Sales of shampoo over a three year period

The data file also has the dates enclosed in double quotes. A couple of times that bit me too. The date_parser callback is a little fickle.

I'm also having the same question as Logan mentioned above. Line 101, lstm_model.predict(train_reshaped, batch_size=1): do we need this line? I commented this line out, and I see no difference. Could you explain a bit more why we need this line? Thank you for the great tutorial. Best, Long

It is supposed to seed the state on the training data. If it does not add value, remove it.

I had to change the limits of the range() function to get the loop to work for lag values > 1.

def timeseries_to_supervised(data, lag=1):
    df = DataFrame(data)
    columns = [df.shift(i) for i in range(lag, 0, -1)]
    columns.append(df)
    df = concat(columns, axis=1)
    df.fillna(0, inplace=True)
    return df

Nice!

Hey, did you ever get this working? I have a similar problem I would like to address and would value your input!

After reading the links, what I understood was:

– It's pretty hard (impossible) to always get the same result due to a lot of randomness that can come from a lot of things.
– Only by repeating the operation a lot (like you did in the last example, with 30 repeats, though doing it 100 times would give a better idea?) can you determine how much the results can differ and find a range of "acceptable" ones.
– Seeding the random number generator can help.

So to continue my project I thought about seeding the random number generator like you explained.
Like here, repeat the process 30 times (since I have a server, I can let it run overnight and go for 100) and determine the average and decide on a range. Does it seem OK to begin with? Am I missing something? Thank you for your attention and for the help you provide to everyone. See this post on how to estimate the number of repeats: I would not recommend fixing the random number seed for experimental work. Your results will be misleading. Hi Jason, Thank you so much for the tutorial. I have a few doubts though. 1. I am working on a problem where the autocorrelation plot of the detrended data shows me that the value at time t is significantly correlated with around the past 100 values in the series. Is it ideal to take a batch size of 100 to model the series? 2. You mentioned that fewer than 5 memory units are sufficient for this example. Can you please give me some idea of how to choose the number of memory units for a particular problem like the above? What other factors does this number depend on? Kindly clarify. Thanks Try passing in 100-200 time steps and see how you go. Systematically test a suite of different memory units to see what works best. Thank you Jason. You're welcome. Hey Jason! Thanks a lot for this tutorial, it really helped. I used this tutorial as-is for predicting the cost of an item, which is in the range of a dollar and a few cents. My dataset has 262 rows, i.e. 0 to 261. When I run the model, the graph captures even the most intricate trends beautifully, BUT there seems to be a lag of 1 time step in the predicted data. The predicted values of this month almost exactly match the expected values of the previous month. And this trend is followed throughout.
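This "one step behind" pattern is what a persistence-like model produces: the network learns to output roughly its last input, so the predicted curve is the actual curve shifted by one step. A minimal sketch in plain Python (the toy series here is hypothetical, not the tutorial's model or data):

```python
# A model that has collapsed to persistence effectively predicts
# yhat(t) = y(t-1), so each prediction matches the previous actual value.
actuals = [20.0, 40.0, 10.0, 50.0, 30.0]

# One-step "forecasts" from a persistence-like model:
predictions = [actuals[i - 1] for i in range(1, len(actuals))]

# Every prediction equals the actual value one step earlier,
# which is exactly the lag seen in the plots described above.
for i, yhat in enumerate(predictions, start=1):
    assert yhat == actuals[i - 1]
```

Plotting `predictions` against `actuals[1:]` reproduces the lagged-by-one look reported in these comments.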
The indexing is the only thing I've changed, to: train, test = supervised_values[0:200], supervised_values[200:] rmse = sqrt(mean_squared_error(raw_values[200:], predictions)) pyplot.plot(raw_values[200:]) are the only lines of code I've really changed. Consider starting with a simpler linear model: I do not believe LSTMs are the best or first choice for autoregression models: I think I'm facing the same issue. Were you able to solve it? Hi Jason, Doesn't the LSTM have a squashing gate at the input, with outputs in (-1, 1)? Then why do we need to prepare the input data to be between (-1, 1) if the first input gate will do this for us? Am I missing something? Yes, you can rescale input to [0,1] Considering unseenPredict = lstm_model.predict(X) … how do we structure X to get a one-step-forward prediction on unseen data? Or can we change some offsets in the script "Complete LSTM Example" to get the same effect, and if so, how? Hi Jason, Thanks for the excellent tutorial. I tried to modify the above code to include multiple timesteps and multiple lags in the model. I give these parameters as input and run the script in parallel for different configurations to select the most accurate model. What do you think of the modifications I have made to the following functions? I am especially concerned about the time_steps included in the model; is that correct? def timeseries_to_supervised(data, lag=1, time_steps=0): df = DataFrame(data) columns = [df.shift(i) for i in range(1, lag+1)] # considers the number of time_steps, t, involved and # adds the next t x columns next to each x column # question?? if time_steps = 3, does that mean y should start from y_4 and # we trim the last 3 values from the dataset?
if time_steps > 0: columns_df = concat(columns, axis=1) # note that I have multiplied i by -1 to perform a left shift rather than a right shift timestep_columns = [columns_df.shift(i*-1) for i in range(1, time_steps+1)] timestep_columns_df = concat(timestep_columns, axis=1) columns.append(timestep_columns_df) columns.append(df) df = concat(columns, axis=1) df.fillna(0, inplace=True) return df def fit_lstm(train, batch_size, nb_epoch, neurons, lag, time_steps): X, y = train[:, 0:-1], train[:, -1] X = X.reshape(X.shape[0], time_steps+1, lag) model = Sequential() model.add(LSTM(neurons, batch_input_shape=(batch_size, X.shape[1], X.shape[2]), stateful=True, return_sequences=True)) model.add(LSTM(neurons, stateful=True)) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.summary() for i in range(nb_epoch): model.fit(X, y, epochs=1, batch_size=batch_size, verbose=1, shuffle=False) model.reset_states() return model def forecast_lstm(model, batch_size, X, lag, time_steps): X = X.reshape(1, time_steps+1, lag) pad = np.zeros(shape=(batch_size-1, time_steps+1, lag)) padded = np.vstack((X, pad)) yhat = model.predict(padded, batch_size=batch_size) return yhat[0,0] Great work! I want to extend this idea to several features x the lag values for each x time observations. Does it seem reasonable to give MinMaxScaler that 3D object? How does the y truth fit into what I give MinMaxScaler, since it's only 2D? No, I would recommend scaling each series separately. Above you seem to scale y along with X. But with multiple features, the rest of which are not just a time-shifted copy of y, I assume we could fit y_train by itself before transforming y_train and y_val? Then that's actually the only scaler object we need to save for later inversion? I would recommend scaling each series before any shifting of the series to make it a supervised learning problem. I hope that answers your question; please shout if I misunderstood.
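Scaling one series at a time, and keeping the coefficients so predictions can be inverted later, can be sketched without sklearn. The helper names below are hypothetical (and not the same as the tutorial's invert_scale, which takes a fitted scaler object); this just illustrates the min/max rescale to [-1, 1] and its inverse:

```python
import numpy as np

def fit_scale(series, lo=-1.0, hi=1.0):
    # Fit on one series only; return the coefficients needed to invert later.
    mn, mx = float(series.min()), float(series.max())
    scaled = (series - mn) / (mx - mn) * (hi - lo) + lo
    return (mn, mx, lo, hi), scaled

def unscale(coeffs, value):
    # Reverse the min/max rescaling for a single predicted value.
    mn, mx, lo, hi = coeffs
    return (value - lo) / (hi - lo) * (mx - mn) + mn

series = np.array([10.0, 20.0, 30.0, 40.0])
coeffs, scaled = fit_scale(series)
restored = unscale(coeffs, scaled[1])  # recovers 20.0
```

The point is exactly what the answer above says: the min/max (or mean/stdev) coefficients per series are what you keep, whether as raw numbers or wrapped in a scaler object.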
I think the words "each series separately" threw me. I assume we can still scale all incoming values (all X's and the time series that will later become y and, shifted, one of the X's) using a single scaler object. Then we create the lag values from the scaled values. Finally, that single scaler object is used to invert y predictions. Do I have that right? Each series being a different "feature" or "column" or "time series" or whatever we call it. The coefficients (min/max or mean/stdev) used in the scaling of each series will need to be kept to reverse the operation on predictions and apply the operation to input data later. You can save the coefficients or the objects that wrap them. Jason, great tutorial, thank you! A question: why do you loop through epochs instead of just setting an appropriate number of epochs within the fit() function? Wouldn't it give the same result and be neater? So that I can manually manage the resetting of the internal state of the network. Ah, I see, it has to do with the "stateful" definition being different in Keras; it's well explained here: Hi Dr. Jason, thank you very much for this tutorial. I want to have multiple time steps, but I don't know how to modify the function "timeseries_to_supervised()". I found another post of yours talking about this, but there you use the function "create_dataset()". I modified this function as follows: def create_dataset(dataset, look_back=1): dataset = np.insert(dataset, [0]*look_back, 0) dataX, dataY = [], [] for i in range(len(dataset)-look_back): a = dataset[i:(i+look_back)] dataX.append(a) dataY.append(dataset[i + look_back]) dataY = np.array(dataY) dataY = np.reshape(dataY, (dataY.shape[0], 1)) dataset = np.concatenate((dataX, dataY), axis=1) return dataset Please check my modification; is it right or not? See this post: Hi Jason, Sorry if it seems stupid, but there is a part that I don't understand.
To predict you use: "yhat = model.predict(X, batch_size=batch_size)" But as we see, X is: train, test = supervised_values[0:-12], supervised_values[-12:] scaler, train_scaled, test_scaled = scale(train, test) yhat = forecast_lstm(lstm_model, 1, X) So X is the 12 values (after being passed through the scale function) that we want to predict. Why are we using them, since in the normal case we wouldn't know their values? Once again, really, thank you for your tutorial; it really helped me in my machine learning training. In the general case, you can pass in whatever you like in order to make a prediction. For example, if your model was defined by taking 6 days of values and predicting the next day, and you want to predict tomorrow, pass in the values for today and the 5 days prior. Does that help? I think it does help me. In my case I have values for each minute and I have to predict the next week (so more or less 10K predictions). I have data from the last year, so there isn't any problem with my training; I just wondered what I should do at the prediction part (can I just, instead of feeding it test_scaled, send it my training set again?) Thank you for your help and quick reply! Yes, if your model is set up to predict given some fixed set of lags, you must provide those lags to predict beyond the end of the dataset. These may be part of your training dataset. I don't think it's in my training dataset; for this part I'm pretty much doing it like you (the lag appearing when turning a sequence into a supervised learning problem). I'm pretty much feeding my model like you do. The problem for me is knowing what to feed it when calling the predict command. Sending it "train_scaled" was a bad idea (I got poor results, predicting huge values when it should predict low, and predicting low instead of predicting high). I'm working on it, but every hint is accepted. Once again thank you, and sorry for being a bit slow at learning/understanding.
The argument to the predict() function is the same as the argument to the fit() function. The data must be scaled the same and have the same shape, although the number of samples may vary, e.g. you may have just one sample. Obviously, you don't need to make predictions for the training data, so the data passed to predict will be the input required to make the prediction you require. This really depends on the framing of your problem/model. Does that help? Thank you for your quick reply! I think I'm understanding it better, but there is a part I have trouble understanding. The input is X, right? If I follow your tutorial on input/output () and take my case as an example (the database records a value every 3 minutes, and we want to predict the next value (so to begin, for example, it would be 12:24:00)): Date,congestion 2016-07-08 12:12:00,92 2016-07-08 12:15:00,80 2016-07-08 12:18:00,92 2016-07-08 12:21:00,86 This is (part of) my training data; when turning it into supervised training data (and shifting) I get: X, y ?, 92 92, 80 80, 92 92, 86 86, ? The problem is that I don't know X for predicting; I only know the X I use for my training (train_scaled) and the one used to compare my results (test_scaled). What is the input I should feed it? I can't feed it my test_scaled, since in a real situation I would have no idea what it would be. Sorry if my question seems stupid, and thank you for taking the time to explain it. It depends how you have framed your model. If the input (X) is the observation at t-1 to predict t, then you input the last observation to predict the next time step. It is; each of my inputs X is the observation at t-1 (pretty similar to the shampoo case used in the tutorial). Thank you for your answer, you answered my question, I shouldn't have any problem now! Thanks for the tutorials too, they really helped me! Glad to hear that.
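The recursive use of one-step forecasts discussed in this exchange can be sketched with a stand-in for the trained model. Here predict_one is a hypothetical toy function, not Keras code; in practice each yhat would also be scaled/reshaped before being fed back in:

```python
# Recursive multi-step forecasting: each one-step prediction becomes
# the input for the next step.
def predict_one(x):
    # Stand-in for the trained one-step model (toy dynamics for illustration).
    return 0.9 * x

last_observation = 86.0  # the last known value (T)
horizon = 3              # predict T+1, T+2, T+3
forecasts = []
x = last_observation
for _ in range(horizon):
    yhat = predict_one(x)
    forecasts.append(yhat)
    x = yhat  # feed the prediction back in as the next input
```

Note that errors compound: each step is predicted from a prediction, which is why skill usually degrades quickly over long horizons.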
Just to be sure that I didn't make any mistake in my reasoning, if I take my example from before: X, y ?, 92 / T-3 92, 80 / T-2 80, 92 / T-1 92, 86 / T 86, ? / T+1 to predict the next step (T+1) I have to use "yhat = model.predict(X, batch_size=batch_size)" where X is 86 (after scaling/reshaping). Right? Then I'll get the value predicted for T+1 (which I have to send to invert_scale and difference to get a readable value). If I want to predict farther, then I continue (sending the scaled/reshaped value predicted at T+1 to get T+2, and so on until I have predicted as far as wanted). Thanks for your time and answers! Correct. Thank you Jason, with your help I managed to predict the value of each minute of the next week. I had 2 questions though: First: I removed the code for the testing sets (since I wouldn't have them in reality); the only thing I have are the testing values in the Excel file (the last 10000 lines). When using this code (to train): # transform data to be stationary raw_values = data.values diff_values = difference(raw_values, 1) # transform data to be supervised learning supervised = timeseries_to_supervised(diff_values, 1) supervised_values = supervised.values # split data into train and test-sets train = supervised_values[:-10000] # transform the scale of the data scaler, train_scaled = scale(train) and this one (to predict): # forecast the entire training dataset to build up state for forecasting train_reshaped = train_scaled[:, 0].reshape(len(train_scaled), 1, 1) lstm_model.predict(train_reshaped) # walk-forward validation on the test data predictions = list() predictionFeeder = list() # Used to feed the model with the value of T-1 X, y = train_scaled[0, 0:-1], train_scaled[0, -1] predictionFeeder.append(X) # Give the last value of training for i in range(0, 10000): # make one-step forecast yhat = forecast_lstm2(lstm_model, predictionFeeder[i]) predictionFeeder.append(yhat) # invert scaling yhat2 = invert_scale(scaler, testa[i + 1], yhat) yhat3 =
inverse_difference(raw_values, yhat2, 10000 + 1 - i) predictions.append(yhat3) and train a model (25 epochs), then predict, the results I get are way too good (RMSE of 2 or less and predictions that have less than 5% error). Being used to things going wrong for no reason, I decided to remove the testing data from the Excel file (even though it shouldn't change anything, since I'm not using it (I even set the variable to None at first)). Then when I do that, the predictions are much worse and have some lag (though, if you remove the lag, you still have a good result, just much worse than before). Why is that? My 2nd question is about lag: we can see on the prediction that while the shapes of both charts (prediction and reality) look alike, the prediction rises/falls before the reality. Do you have any idea how to fix it? Do you think changing the lag or timestep would help? Once again thank you for your help; I don't think I would have achieved this much without your tutorials. Sorry, I cannot debug your code for you. Perhaps you are accidentally fitting the model on the training and test data, then evaluating it on the test set (e.g. on data it has seen before). I would encourage you to evaluate different lag values to see what works best for your problem. Don't worry, I wouldn't ask you to debug. Maybe; I don't know. I did remove the variable to be sure to never have affected the testing set, but since I'm human I may have made errors. So changing the lag could help with these gaps between reality and prediction. Thank you, I'll do that. Thanks for your answer! Can I buy your books physically (not ebook)? Thanks Sorry, now I have read that it is not possible. Thanks anyway. Your explanations and tutorials are amazing. Congratulations! Thanks Josep. Hi, Thanks for the very good tutorial. I have one question/doubt.
In the following part of the code: # invert differencing yhat = inverse_difference(raw_values, yhat, len(test_scaled)+1-i) should we not rely on predicted values instead of the already-known raw_values? In your example, for validation we always refer to the (known) test value while calling inverse difference. But in reality we will have only the predicted values (used as X) and, of course, the known starting point (t=0). Or did I miss something? My proposal: # invert differencing - (starting from the 2nd loop cycle (the 1st would be the starting point (raw_values[-1]))) yhat = inverse_difference(predictions, yhat, 1) Thanks in advance Pawel We could, but in this case the known observations are available and not in the future, and it is reasonable to use them for the rescaling (inverting). Hi, Thanks for the explanation, got it now. Because I trained the model using e.g. May data (15-second samples) and then used that model to predict the whole of June. Afterwards I compared the predicted data vs. the data that I got from June, and I have to say that the model does not work; after a few predictions there is a huge "off sync". In the validation phase as described in your case I got an RMSE of 0.11, so not bad, but in reality, when you use the predicted value (t-1) to predict the next value (t), there is a problem. Do you know how to improve the model? Should I use multi-step forecasts, or lag features, or input time steps? Thanks a lot. Pawel I would recommend brainstorming, then trying everything you can think of. I have a good list here: Also, compare results to well-tuned MLPs with a window. Hey Jason, I am not following one point in your post. You wanted to train a stateful LSTM, but reset_states() is executed after every epoch. That means the states from the previous batch are not used in the current batch. How does this make the network stateful? Thanks State is maintained between the samples within one epoch. Hi Jason, thanks for the great tutorial. I have one question.
Wouldn't it be better to use a callback for resetting the states? This would enable you to also use, for example, an EarlyStopping monitor while training. Here is what I changed: class resetStates(Callback): def on_epoch_end(self, epoch, logs=None): self.model.reset_states() model.fit(X, y, epochs=nb_epoch, batch_size=batch_size, verbose=1, shuffle=False, callbacks=[resetStates(), EarlyStopping(monitor='loss', patience=5, verbose=1, mode='min')]) Yes, that is a cleaner implementation for those problems that need to reset state at the end of each epoch. Hello, can we extend this to anomaly detection techniques? Perhaps. I have not used LSTMs for anomaly detection, so I cannot give you good advice. Perhaps you would frame it as sequence classification? Hi Jason, In the persistence model plot there is a one-time-interval lag. When making single-step predictions, is it possible to overcome this issue? What is causing this to happen? It seems like the model places a huge weight on time interval x[t-1]. Here is an example from the dataset I am analyzing: iteration: 969 Month=970, Predicted=-7.344685, Expected=280.000000 iteration: 970 Month=971, Predicted=73.259611, Expected=212.000000 iteration: 971 Month=972, Predicted=137.053028, Expected=0.000000 Expected should be 280 and 212 (high magnitudes), and the model captures this more or less with 73 and 137, but this is one time interval behind. Thanks! LSTMs are not great at autoregression problems and often converge to a persistence-type model. OK, thanks. What model would be a good alternative to capture this issue? I ran into the same problem with ARIMA. It could just be a difficult dataset to predict. I recommend starting with a well-tuned MLP + window and seeing if anything can do better.
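Before tuning, it can help to quantify the persistence baseline that any model must beat. A minimal RMSE computation on the first few observations of the shampoo sales series used in this tutorial (plain Python, no model required):

```python
from math import sqrt

# First five observations of the shampoo sales series.
series = [266.0, 145.9, 183.1, 119.3, 180.3]

# Persistence forecasts: yhat(t) = y(t-1).
predictions = series[:-1]
actuals = series[1:]

# Root mean squared error of the naive forecast.
rmse = sqrt(sum((p - a) ** 2 for p, a in zip(predictions, actuals)) / len(actuals))
```

If a tuned LSTM (or ARIMA, or MLP) cannot beat this number on the same split, it has effectively converged to persistence.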
Hi Jason, Thanks to you I managed to get a working LSTM network that seems to have good accuracy (and so a low RMSE). But I've got a problem: do you know what could be the cause of an extreme delay between the real values and predictions (my predictions have the same shape as the reality but increase/decrease way before reality)? Best regards, and please continue what you are doing, it's really useful. Hi Eric, it might be a bug in the way you are evaluating the model. It might be the case; I had to make some changes to the prediction system to use the last predicted value, and I may have missed something at that point. (Also, I have to use TFLearn instead of TensorFlow, but it shouldn't be a problem since TFLearn is a more transparent way to use TensorFlow.) Thank you for your answer! Hang in there Eric! Thank you! Well… I have a gap of 151 (reference to Pokemon?). Just to try, I removed these 151 values from my training set; I now have no gap in values (and frankly, the accuracy seems good for 15 epochs of training). I know that this is in no way a fix, but it makes me wonder where I failed. Could it be that while my training set is 400K values, my prediction starts 151 values before the end of the training set (so predicting the value for 399849), which is strange, since the information from training tells me that I'm training on the 400K of data? It would mean that my machine is trying to predict some points in time used for training. Or it would mean that the last 151 data points weren't used at all for training (I tried to reduce the number of data points, but it's the same problem). The algorithm is trained sample by sample, batch by batch, epoch by epoch. The last sample is what is key. Thanks for the reply. When I think about it, my prediction is in advance compared to the reality.
So my model starts its prediction not where I finished my training but after (which is strange, since my training ends where my testing begins, and each input is a value (and each line corresponds to a minute, like the lines of the shampoo dataset correspond to months)). I must have made a mistake somewhere. Thanks for your answer, I think I'm onto something! Hang in there! One thing I don't understand in these models is how I can predict the future. Say I have trained my model, and let's say it's August 8, 2017. Now model.predict() seems to require some test samples to predict anything, in this case values from test_scaled. So what do I give it when I want to get a prediction for August 9, 2017, which we don't know yet? So I want to know yhat(t+1). Can I ask the model to give me forecasts for the next 5 days? I may be wrong, but in my case I trained my model on the data I had (let's say one year, until August 8, 2017), and to predict the value at T you send the value at T-1 (so in this case the value of August 8, 2017); you'll get the value of August 9. Then if you want August 10 (T+1), you send the value of August 9 (T). But this code here shows a one-step forecast implementation. Maybe if you want to predict more you should look at multi-step forecasting? I think there are some examples of it on this website. Jason would give you a better answer, I think. Nice one Eric! Yes, I'll add: if the model was trained to predict one day from the prior day as input, then, when finalized, the model will need the observation from the prior day in order to predict the next day. It all comes down to how you frame the prediction problem, which determines what inputs you need. If you would like to predict one day, the model needs to be trained to predict one day. You have to decide what the model will take as input in order to make that prediction, e.g. the last month or year of data. This is how the model is trained.
Then, when it's trained, you can use it to predict future values, using perhaps the last few values from the training data. You can use a model trained to predict one day to predict many days in a recursive manner, where predictions become inputs for subsequent predictions. More about that here: You can also make multi-step forecasts directly; here is an example: Does that make things clearer? Thanks Eric and Jason. That's exactly how I've done it, but it seems to me that the prediction that I get is not one timestep forward (T+1), but a prediction of T, which doesn't make sense. I'm using the close price of a stock as my data. I'll have to check again if it really is making a prediction for the future, as you insist, and I will check Jason's links. Anyway, thanks Jason for the excellent tutorials! 🙂 Here's a sample of the close prices: 2017-05-30 5.660 2017-05-31 5.645 2017-06-01 5.795 2017-06-02 5.830 As a matter of fact, it seems to me that the predicted values lag one time step behind the expected ones. Day=1, Predicted=5.705567, Expected=5.660000 Day=1, Predicted=5.671651, Expected=5.645000 Day=1, Predicted=5.657278, Expected=5.795000 Day=1, Predicted=5.805318, Expected=5.830000 Here I'm going forward one day at a time, and as you see, the predicted values are closer to what was expected the day before. Anyway, let's examine the second line: Day=1, Predicted=5.671651, Expected=5.645000 The expected price (which is the actual price on 2017-05-31) is what I give the model on 2017-05-31 after the trading day is closed. What I expect the model to predict is closer to 5.79 than 5.67 (which is closer to the actual price the day before!). See the problem? Have I missed something? I haven't changed anything in the framework except the data. That is a persistence model, and the algorithm will converge to that (e.g. predict the input as the output) if it cannot do better. Perhaps explore tuning the algorithm.
Also, I believe security prices are a random walk and persistence is the best we can do anyway: Please do some tutorials on RNNs/NNs with more categorical values in the dataset? Please, I could not find many resources on using categorical values. Thanks I have a few tutorials on the blog – for example, text data input or output are categorical variables. What are you looking for exactly? Is it possible to write RNN regression code? What activation function do I need to give for a regression RNN, and how does it differ from classification to regression? Yes. The output would be 'linear' for regression. Hi Dr. Brownlee, I am trying to understand this code. I am a beginner working with time series and LSTMs. My questions about your code: 1. What does "transform data to be stationary" mean? 2. Why do you create diff? What does it mean? 3. If the raw_values are 36 data points, why does diff have only 35? I appreciate your response in advance, Maria Never mind, I already understand. 🙂 Glad to hear it. Consider using the blog search. More about stationary data here: More about differencing here: I hope that helps. Thanks for the awesome blogs. But I have a doubt: for time series forecasting, which is the better approach, classical methods like autoregression and ARIMA, or this machine learning approach using an LSTM RNN model? Both are good on their own, so what should be our first choice while forecasting any dataset? How should we choose between them? It depends on your problem. Try a few methods and see what works best. I recommend starting with linear methods like ARIMA, then trying some ML methods like trees and such, then an MLP, then perhaps an RNN. When I changed the train/test ratio to 0.8/0.2 in the LSTM code, it took half an hour to predict, and with other data that contains around 4000 records, it's taking hours (I still couldn't get a result, and I don't know how long it'll take). Can you please suggest how I should change the settings for very long sequences?
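One common adjustment for very long sequences is to split them into fixed-length subsequences before reshaping into the [samples, timesteps, features] structure the LSTM expects, so each training sample is short. A small numpy sketch (toy numbers, not tied to the 4000-record dataset mentioned above):

```python
import numpy as np

seq = np.arange(12, dtype=float)  # stand-in for a long series
timesteps = 4                      # length of each subsequence
n_samples = len(seq) // timesteps

# Drop any remainder, then reshape to (samples, timesteps, features).
X = seq[:n_samples * timesteps].reshape(n_samples, timesteps, 1)
```

Shorter subsequences also bound how far backpropagation through time has to unroll, which is usually where the training time goes.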
Thanks I have some ideas here: Hi Jason, for long sequences like 4000 records, can you please help me understand the changes from a code point of view as per your current example? The above link is for classification, and I am looking at a sequence prediction problem. You must split your sequence into subsequences; for example, see this post for ideas: Thank you Jason, can you please help us understand with a complete code example? The above link just has the idea but not a code implementation; I want to understand it from a code point of view. Regards, Arun Is the evaluation method in this post really "walk-forward"? I've read your post about time-series backtesting. Walk-forward corresponds to cross-validation for non-time-series data. In both, we build many models, so these methods are not well suited to deep learning, including LSTM, the theme of this post. Yes, we are using walk-forward validation here. Walk-forward validation is required for sequence prediction like time series forecasting. See this post for more about the method and when to use it: Thank you for your reply. Sorry, that's not what I meant. I've read the post you pasted above, in which you split data into train/test 2820 times and build 2820 models. But in this LSTM post, you split the data and build an LSTM network only once, so I suggest the test in this article is not walk-forward. An ordered (not shuffled) hold-out test? Sorry, please delete this comment Above, the model is evaluated 30 times. Within the evaluation of one model, we use walk-forward validation, and samples are not shuffled. I recommend re-reading the tutorial and code above. Thank you for your reply! Oh, repeats=30! Sorry, I misunderstood. But I think in the back-test article the size of the training data changes, while in this LSTM article the size of the training data is fixed, right? Sorry for so many questions. Thanks for your work, it is seriously awesome.
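The walk-forward scheme being discussed can be sketched independently of any particular model. Here a persistence function stands in for the one-step forecaster (an illustrative skeleton, not the tutorial's code):

```python
# Walk-forward validation: predict one step, then append the TRUE
# observation (not the prediction) to history before the next step.
series = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
n_test = 3
history = list(series[:-n_test])
predictions = []
for t in range(n_test):
    yhat = history[-1]  # stand-in one-step model (persistence)
    predictions.append(yhat)
    # Add the real observation so the next forecast uses true data.
    history.append(series[len(series) - n_test + t])
```

This is why one fitted model can still be "walk-forward" evaluated: the split point advances one step at a time, even if the model is not refit at every step.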
I wanted to see the chart comparing training loss vs. validation loss per epoch; see the chart here: As you mentioned in your other post: "From the plot of loss, we can see that the model has comparable performance on both train and validation datasets (labeled test). If these parallel plots start to depart consistently, it might be a sign to stop training at an earlier epoch." However, in this case the test loss never really gets lower, and both lines are not parallel at all. For this problem, and in general, does that mean that no meaningful test-set pattern is learned from the training, as seems to be corroborated by the RMSE being close on average to the persistence/dummy method? Is this the end of the road? Have NNs failed in their task? If not, where to go next? The main thing I changed in the code was the end of the fit_lstm function: for i in range(nb_epoch): e = model.fit(X, y, epochs=10, batch_size=batch_size, verbose=0, validation_split=0.33, shuffle=False) model.reset_states() return model, e Thanks Gauthier. It may mean that the model is under-provisioned for the problem. OK… So in this case would you suggest including more neurons, for example, or trying another ML algorithm? Both would be a safe bet. Hi @Gauthier, may I know how you edited your code to get the loss function? Jason, Thanks for your blogs and tutorials. Being a research scholar, when I was in a dilemma about how to learn ML, your tutorials gave me an excellent start to carry out my work in machine learning and deep learning. I tried to work with the example on this page, but I couldn't download the shampoo_sales dataset from datamarket.com. Could you please put the link to the datasets in the blog? If there is any procedure to download them, please let me know. Thanks, I'm glad to hear that.
Here is the full dataset: Hi Jason, I am getting NaN on the 12th month: Month=1, Predicted=440.400000, Expected=440.400000 Month=2, Predicted=315.900000, Expected=315.900000 Month=3, Predicted=439.300000, Expected=439.300000 Month=4, Predicted=401.300000, Expected=401.300000 Month=5, Predicted=437.400000, Expected=437.400000 Month=6, Predicted=575.500000, Expected=575.500000 Month=7, Predicted=407.600000, Expected=407.600000 Month=8, Predicted=682.000000, Expected=682.000000 Month=9, Predicted=475.300000, Expected=475.300000 Month=10, Predicted=581.300000, Expected=581.300000 Month=11, Predicted=646.900000, Expected=646.900000 Month=12, Predicted=646.900000, Expected=nan from the code below: predictions = list() for i in range(len(test_scaled)): # make one-step forecast X, y = test_scaled[i, 0:-1], test_scaled[i, -1] # yhat = forecast_lstm(lstm_model, 1, X) yhat = y # invert scaling yhat = invert_scale(scaler, X, yhat) # invert differencing yhat = inverse_difference(raw_values, yhat, len(test_scaled)+1-i) # store forecast predictions.append(yhat) expected = raw_values[len(train) + i + 1] print('Month=%d, Predicted=%f, Expected=%f' % (i+1, yhat, expected)) I do not understand why this is happening. Thanks for the feedback. Interesting. Does this happen every time? I wonder if it is platform specific. Are you on a 32-bit machine? What version of Python? It happened to me too, because after downloading, the end of the CSV: "3-10",475.3 "3-11",581.3 "3-12",646.9 contained "Sales of shampoo over a three year period". It worked when removed. Nice! Dear Jason, I had a question: what are we supposed to send as the first parameter of inverse_difference if we go with the idea that we don't know the future? Regardless of what I do, if I give it the testing set, then it has perfect accuracy and values. If I send the training set, it will look like the last len(test_scaled) training values.
Best regards You would pass a list of past observations so that you can reverse the difference operation. Dear Mr. Jason, Sorry, I have trouble understanding: what do you mean by "list of past observations"? Would that be what we predict (the predictions list)? Would it then follow what "Pawel" asked and said previously? "My proposal: # invert differencing - (starting from the 2nd loop cycle (the 1st would be the starting point (raw_values[-1]))) yhat = inverse_difference(predictions, yhat, 1)" Best regards, Eric Sorry, I mean the true or real values observed in the domain at prior time steps, e.g. not predictions. Dear Mr. Jason, Thank you, so let's imagine a 500,000-entry file. My training would be the first 490,000 lines. I want to predict the last 10,000 lines. For predicting, I send the last line of the training set (then for the next prediction I send my first prediction, without having unscaled or reversed the difference). To get the true values, do I send as the argument of "inverse_difference" my training set, or more specifically, my last 10,000 lines? Best regards, Eric Godard. I would not recommend predicting 10K time steps. I expect skill would be very poor. Nevertheless, observation 490000 would be used to invert observation 490001, then the 490001 transformed prediction would be used to decode 490002, and so on. Does that help? Sorry, I may have worded that badly. My dataset has one entry per minute; one day is 1440 entries, and I have years' worth of data in my dataset. I want to predict the next day. Should I send the last day of the training set (trained_scaled[-1440:]) as the argument of inverse_difference? Best regards, Perhaps this post will help you better understand the different transforms: I kept getting inconsistent RMSE results every time I reran the code, so I added np.random.seed(1) # for consistent results to the very top of my code, so now every run produces a consistent RMSE.
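Seeding as described above can be sketched as follows. Note that fixing np.random.seed alone may not make Keras/TensorFlow runs fully reproducible (backend and hardware randomness are separate sources), which is why averaging over repeated runs is generally the preferred way to report results:

```python
import random
import numpy as np

# Fix the seeds at the very top of the script, before any model is built.
np.random.seed(1)
random.seed(1)

a = np.random.rand(3)

# Re-seeding reproduces the same draws, which is the effect the
# commenter is relying on for a consistent RMSE across runs.
np.random.seed(1)
b = np.random.rand(3)
```

Here `a` and `b` are identical arrays, demonstrating that the numpy generator is deterministic once seeded.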
It is hard to get reproducible results; see this post for more information.

Brian: When I try to load the data after executing the following code line with Python 3.6, I get the following error:

series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)

TypeError: ufunc 'add' did not contain a loop with signature matching types dtype('<U32') dtype('<U32') dtype('<U32')

Can you help me, please?

Ouch, I have not seen that error before. Perhaps confirm that the data is CSV and that you have removed the footer. Perhaps ensure that your environment was installed correctly. If that does not help, perhaps try posting to Stack Overflow?

Hello Jason, Thanks for sharing this post — very helpful, as I am currently working on a time series project. I have one query: how can we save the model with the best RMSE and use it for further prediction? Thanks

This post shows you how to save your model and use it to make predictions:

Thanks Jason for the great tutorial! Can you please clarify this statement for me: "The batch_size must be set to 1. This is because it must be a factor of the size of the training and test datasets." I don't understand how 1 in this case is a factor of the training and test datasets. Further, for time series data, can we have a batch size greater than 1? If not, what was the relevance of the above statement? Thank you

Yes, it is a constraint for the chosen framing of the problem so that we can make one-step forecasts with a stateful LSTM. It is not a constraint generally.

Great post Jason, a very granularly explained LSTM example. Thanks a lot. I am trying to train a model to predict scientific-notation data like "9.543301358961403E-9" — could you suggest a good way to rescale this data to best fit an LSTM?

For input data, rescale values to the range 0-1.

Hi, I have gone through many of your posts and am still confused about when BPTT is applied. Is BPTT applied in this example?
Or is it only applied when we take more than one previous input to predict the current output? And in this example, the parameters are updated at each epoch (batch gradient descent), in complete contrast to stochastic gradient descent (online learning) — is this right?

This post provides an intro to BPTT:

BPTT is applied when you have more than one time step in your input sequence data.

Hi Jason, Thank you for the great tutorial. Please let me know the difference between the two below:

1) model.fit(trainX, trainY, epochs=3000, batch_size=1, verbose=0)

2) for i in range(3000):
       model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
       model.reset_states()

Not much.

Hi Jason, You have reshaped the value of X and then predicted the value of yhat; after that, invert scaling of yhat is done and then this value is added to the raw value of the previous month. My question is: what if I add the reshaped value of yhat and the reshaped value of the previous month, then call the invert_scale function and not call the inverse_difference function at all — will both give the same result? Best Regards

I believe so. We need to group yhat with anything, even 0 values, to ensure that we meet the shape requirements when inverting the transform.

I ran your code multiple times with different parameters. There is one more question: when we plot both pyplot.plot(raw_values[-12:]) and pyplot.plot(predictions), I expected both lines to coincide at certain points (like in your example graph, where the values for months 1 & 3 are almost close) or at least show a similar trend. But why, in most of the results, does the orange line for time (t) always seem to be more closely associated with the blue line for time (t-1)? Initially, I thought it was because we must be plotting raw values of months 24-36 and predictions for months 25-37, but that's not the case — we are plotting both the prediction values (predicted for months 23-35) and the raw values (for months 24-36). Could you please clarify why that is the case?
I am definitely missing something, but I am not able to figure it out! Thanks again.

This suggests a persistence model (e.g. no skill).

I'm using lag > 1. Shouldn't the function timeseries_to_supervised reverse the range range(1, lag+1)?

columns = [df.shift(i) for i in reversed(range(1, lag+1))]

Hi Jason, Thanks so much for the in-depth tutorial. I have a question about this block of code in fit_lstm():

for i in range(nb_epoch):
    model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
    model.reset_states()

Can it simply be done by this line?

model.fit(X, y, epochs=nb_epoch, batch_size=batch_size, verbose=0, shuffle=False)

Thanks for your clarification

Yes, but I wanted to explicitly reset states at the end of each epoch rather than at the end of each batch.

What's the difference between the two approaches?

You can learn more about LSTM internal states here:

Hi Jason, I am a college student. I am particularly interested in machine learning and am working hard on it. But my English is not very good, so my questions may be a little rough.

1. The RMSE in this article is somewhat large — do you have other articles that describe how to adjust the parameters to reduce the RMSE, and the reasoning behind the tuning?

2. Do you have a better article about using LSTMs to predict time series problems? (PS: for univariate time series.) Because I did the experiment myself: if I use the model in this article, my experiment accuracy is 85% to 90%, and I want to improve it.

3. Is there any article on solving the multivariate time series forecasting problem? If so, where is the latest?

4. I have been running the model several times on my own MacBook Pro, which is too slow. If I want to be more efficient, can I buy some GPU cloud services, like Amazon's? But it seems that these cloud services are especially expensive. Do you have any good solutions?

Thank you so much! I want to say that you are my machine learning primer teacher.
This post has a list of things to try to lift model skill:

Generally, I would not recommend LSTMs for autoregression problems; see this post:

Here is a multivariate time series example:

In practice, AWS is very cheap:

Hello, Jason. Thank you for your last reply. I have improved the accuracy by 4% through various methods; I used bias_regularizer=L1L2(0.01, 0.01) to reduce the RMSE value (based on my own dataset). But I would like to ask about the Tutorial Extensions — I want to know how to complete a multi-step forecast. For example, my training set is 201601 to 201712, 24 sets of data, and I would like to predict the six sets of data from 20171-201706. These six sets of data have no expected value and are empty. I tried 10 or more methods but failed — maybe my Python is too bad, or my understanding of LSTMs is not enough? I really can't figure out a solution, and I was very depressed. I hope you can give me some suggestions. Thank you. Or do you know of any article that I can use as a reference?

Here is an example:

Hello, I have an issue with your terminology regarding dynamic forecasting, which you defined as follows: "re-fit the model or update the model each time step of the test data as new observations from the test data are made available (we'll call this the dynamic approach)." That's not what's called dynamic forecasting in time series and econometric analysis. Dynamic forecasting involves using the previous forecast value as input into the next forecast. In your case, you are using the lagged value of the dependent variable as a regressor in the estimation of the model: X(t) = Y(t-1). For instance, see this paper. So, a dynamic multi-step forecast would involve something like this: Yhat(t) = function(Yhat(t-1)). It doesn't need to re-estimate the model at each step to be dynamic. All it needs is to use the previous forecast value to come up with the next.
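The recursive scheme the commenter describes can be sketched independently of any particular model. Here model_fn stands in for a fitted one-step forecaster (e.g. a call into forecast_lstm); the names are illustrative, not from the tutorial:

```python
def recursive_forecast(model_fn, last_obs, n_steps):
    """Dynamic (recursive) multi-step forecast: each prediction
    is fed back as the input for the next step, so no true test
    observations are used along the way."""
    preds = []
    x = last_obs
    for _ in range(n_steps):
        yhat = model_fn(x)  # one-step-ahead prediction
        preds.append(yhat)
        x = yhat            # feed the forecast back in
    return preds

# Toy stand-in model that just adds 1 each step:
print(recursive_forecast(lambda x: x + 1, 10, 3))  # [11, 12, 13]
```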
In particular, this step would change to use yhat as X:

X, y = test_scaled[i, 0:-1], test_scaled[i, -1]
yhat = forecast_lstm(lstm_model, 1, X)

Thanks for the clarification.

Hi, I'd like to input the time difference between timesteps into an LSTM to model a decay. I saw someone manually put this term into the weight, so the weight now has the form W(t). I wonder if we can include the time difference in the input, X. Do you have any advice on this?

Sorry, I have not done this — perhaps talk to the person who raised this idea with you?

My observation is that the model tries to predict the previous value, so that it is able to achieve a moderate precision. (I used another dataset.)

Hi, Jason: Using the shampoo sales data and my own test data, the results are good. But when the time series has seasonal trends, the LSTM result is very bad. So I want to ask: is there any way to handle forecasting for seasonal time series? Thank you.

You can seasonally adjust your data:

While the visuals and numbers in this tutorial are compelling, there are several things seriously wrong that aren't apparent until you go over it with a very fine-toothed comb. Among all the arrays, lists, and indices flying all over the place, there are errors that make the results non-reproducible, for one. At the detail level you've got this prediction output:

Month=1, Predicted=339.700000, Expected=339.700000
Month=2, Predicted=440.400000, Expected=440.400000
Month=3, Predicted=315.900000, Expected=315.900000
…

However, running the code block that is supposed to produce that results in an IndexError, because "Expected" is supposed to come from:

expected = raw_values[len(train) + i + 1]

Index i starts out at 0, so raw_values[len(train) + i + 1] will first be raw_values[len(train) + 0 + 1], which is raw_values[25] — 440.40 (Month 2), not 339.7 (Month 1).
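The off-by-one described above is easy to reproduce with stand-in numbers (these are not the shampoo values, just 36 distinct integers with the last 12 held out as test):

```python
raw_values = list(range(100, 136))  # 36 monthly stand-in values
train = raw_values[:24]             # last 12 held out for testing

i = 0  # first pass through the test loop
expected = raw_values[len(train) + i + 1]

# The first iteration reads index 25 — the *second* test value —
# so the first test value (index 24) is never used as "Expected".
assert expected == raw_values[25]
assert expected != raw_values[24]
```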
This kind of thing seems minor, but it's actually very frustrating to readers who are trying to get some reproducible output; instead, they end up having to track down a bug in the code from the start.

At a more general level, the purpose of building a machine learning model is not to produce some metrics on test data and leave it at that. Instead, it is to build a model that performs well on out-of-sample data that you haven't encountered yet. From the beginning, you apply a difference function to the data that is supposed to remove the trend, then you forecast the residuals, and then add the trend back in with an inverse difference function. Right there is a major violation, as you are literally imputing known information into what is supposed to be the forecast. The inverse difference function takes raw_values as a parameter; however, in a real out-of-sample scenario you wouldn't have access to those raw_values, so you couldn't simply adjust the forecasted residuals to reflect the trend in that manner. How did no one catch this? I'm doing a research project on LSTMs for time-series data and trying to extract something useful from this tutorial that I can apply to this problem, but I feel there are too many land-mines hidden throughout.

Hi Jason! I have a dataset of 200 thousand time series, each with 150 timestamps. Let's say they are sales of the same product but for different stores. What is the best way to design the problem? I can't figure out what the ideal way to structure it would be: just feed each sequence of 149 values to predict the 150th, or should I do a rolling window? If I give the whole sequence, I'm giving it a lot of context to work on, but I'm afraid the series is too big and training will be hard. Any thoughts? There is a lack of literature on how to design time-series problems with LSTMs — you are the only one talking about it. Thanks!

I would recommend exploring many different framings of the problem and seeing what works best.
200K series is a lot. Perhaps you can group series, use sub-models, and combine results in an ensemble of some sort.

Hi Jason, I would like to request your help in updating the above code to a dynamic model which updates itself after every observation and accordingly predicts the future points. Could you suggest how to modify this code for that purpose?

See this post:

Also, if I need to create a lookback of a number other than 1, say 7, what do I need to update in the code above to get it running with a lookback of 7?

Hi Jason, Thanks for the useful tutorial. I have a question about

X, y = test_scaled[i, 0:-1], test_scaled[i, -1]
yhat = forecast_lstm(lstm_model, 1, X)

(line 110): I didn't understand how the prediction uses timestamps to predict on the test dataset. As far as I know, we eventually want to predict based on future time, so we should test based on timestamps (not values) and compare the prediction results with the real test values to see the accuracy of the prediction. But in the code, it seems that you are predicting based on the values and comparing again with those values! Would you please explain this to me? Maybe I misunderstood some part. Best Regards, Bahar

We are predicting t+1. Perhaps this post will help you better understand Python array slicing and ranges:

Jason, awesome tutorial which helps in getting folks started. I had 2 quick questions. 1. Is there a good example of adding other "features" apart from time lags — for example, whether a particular hour is rush hour or not, or whether a particular day is a weekday or not? 2. I have hourly data for 3 years and wish to forecast hourly for the next 3 months. So I have approximately 14,600 training rows and have to forecast approximately 2,600 test rows. Multi-step seems to be the right approach — but will I be able to train a model to forecast so many steps in advance?

This post shows you how to use multiple input variables:

Multi-step for that many observations may result in poor results.
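For the first question above, calendar flags can be added as extra input columns alongside the lagged values. A hedged pandas sketch (the rush-hour set here is arbitrary and would need tuning for real data):

```python
import pandas as pd

# A few hourly timestamps; in practice this is your own datetime index.
idx = pd.to_datetime(['2021-01-01 06:00', '2021-01-01 07:00',
                      '2021-01-01 12:00', '2021-01-02 08:00'])
df = pd.DataFrame({'load': [10, 25, 18, 7]}, index=idx)

df['hour'] = df.index.hour
df['is_weekday'] = (df.index.dayofweek < 5).astype(int)  # Sat=5, Sun=6
df['is_rush_hour'] = df['hour'].isin([7, 8, 9, 16, 17, 18]).astype(int)

print(df[['is_weekday', 'is_rush_hour']])
```

These columns would then be supplied to the network together with the lag features when framing the series as a supervised problem.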
I'd recommend testing a suite of approaches to see which works best for your data:

Awesome, thanks Jason. I will go through the suggested links. Are there any other references you can suggest for multi-step forecasting?

See this post:

Hi Jason, I have a quarterly sales dataset (below) and tried to use the same code on this post:

Month, Sales
5-09,11
5-12,20
6-03,66
6-06,50
6-09,65
6-12,63
7-06,25
7-12,34

I trained the model with data from all preceding quarters to predict the last quarter:

last quarter: predicted=-1.329951, actual=34.000000

The sales number can't be negative, but the model has predicted a negative value. Is there a way to prevent the model from predicting negative values?

I would recommend starting with some simple linear models to better understand your data and how time series works:

Thanks Jason. I tried linear regression but it didn't seem to give me the expected results. Then I tuned the LSTM in your code to predict positive values by:

* changing the scaler range from (-1,1) to (0,1)
* adding the activation function 'relu' to the model
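Besides the (0,1) scaler range and the 'relu' activation mentioned above, a blunt but effective safeguard is to clip the inverted predictions at zero as a post-processing step — a sketch:

```python
import numpy as np

def clip_nonnegative(predictions):
    # Sales cannot be negative, so floor the model output at zero.
    # (A ReLU output activation enforces this inside the network;
    # this does the same thing after prediction.)
    return np.maximum(np.asarray(predictions, dtype=float), 0.0)

preds = clip_nonnegative([-1.329951, 34.0, 12.5])
assert preds.min() >= 0.0  # the negative forecast becomes 0.0
```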
https://machinelearningmastery.com/time-series-forecasting-long-short-term-memory-network-python/
Enabling The Console and Shell for Blinky

This tutorial shows you how to add the Console and Shell to the Blinky application and interact with it over a serial line connection.

Prerequisites

- Work through one of the Blinky Tutorials to create and build a Blinky application for one of the boards.
- Have a serial setup.

Use an Existing Project

Since all we're doing is adding the shell and console capability to blinky, we assume that you have worked through at least some of the other tutorials, and have an existing project. For this example, we'll be modifying the blinky on nRF52 project to enable the shell and console connectivity. You can use blinky on a different board.

Modify the Dependencies and Configuration

Modify the package dependencies in your application target's pkg.yml file as follows:

- Add the shell package: @apache-mynewt-core/sys/shell.
- Replace the @apache-mynewt-core/sys/console/stub package with the @apache-mynewt-core/sys/console/full package.

Note: If you are using version 1.1 or lower of blinky, the @apache-mynewt-core/sys/console/full package may already be listed as a dependency.

The updated pkg.yml file should have the following two lines:

pkg.deps:
    - "@apache-mynewt-core/sys/console/full"
    - "@apache-mynewt-core/sys/shell"

This lets the newt system know that it needs to pull in the code for the console and the shell.

Modify the system configuration settings to enable the Shell task, Console ticks, and the prompt. Add the following to your application target's syscfg.yml file:

syscfg.vals:
    # Enable the shell task.
    SHELL_TASK: 1
    SHELL_PROMPT_MODULE: 1

Use the OS Default Event Queue to Process Blinky Timer and Shell Events

Mynewt creates a main task that executes the application main() function. It also creates an OS default event queue that packages can use to queue their events. Shell uses the OS default event queue for Shell events, and main() can process the events in the context of the main task.

Blinky's main.c is very simple.
It only has a main() function that executes an infinite loop to toggle the LED and sleep for one second. We will modify blinky:

- To use os_callout to generate a timer event every one second instead of sleeping. The timer events are added to the OS default event queue.
- To process events from the OS default event queue inside the infinite loop in main(). This allows the main task to process both Shell events and the timer events to toggle the LED from the OS default event queue.

Modify main.c

Initialize an os_callout timer and move the toggle code from the while loop in main() to the event callback function. Add the following code above the main() function:

/* The timer callout */
static struct os_callout blinky_callout;

/*
 * Event callback function for timer events. It toggles the led pin.
 */
static void
timer_ev_cb(struct os_event *ev)
{
    assert(ev != NULL);

    ++g_task1_loops;
    hal_gpio_toggle(g_led_pin);
    os_callout_reset(&blinky_callout, OS_TICKS_PER_SEC);
}

static void
init_timer(void)
{
    /*
     * Initialize the callout for a timer event.
     */
    os_callout_init(&blinky_callout, os_eventq_dflt_get(),
                    timer_ev_cb, NULL);
    os_callout_reset(&blinky_callout, OS_TICKS_PER_SEC);
}

In main(), add the call to the init_timer() function before the while loop, and modify the while loop to process events from the OS default event queue:

int
main(int argc, char **argv)
{
    int rc;

#ifdef ARCH_sim
    mcu_sim_parse_args(argc, argv);
#endif

    sysinit();

    g_led_pin = LED_BLINK_PIN;
    hal_gpio_init_out(g_led_pin, 1);

    init_timer();

    while (1) {
        os_eventq_run(os_eventq_dflt_get());
    }
    assert(0);

    return rc;
}

Build, Run, and Upload the Blinky Application Target

We're not going to build the bootloader here since we are assuming that you have already built and loaded it during previous tutorials. We will use the newt run command to build and deploy our improved blinky image.
The run command performs the following tasks for us:

- Builds a binary Mynewt executable
- Wraps the executable in an image header and footer, turning it into a Mynewt image.
- Uploads the image to the target hardware.
- Starts a gdb process to remotely debug the Mynewt device.

Run the newt run nrf52_blinky 0 command. The 0 is the version number that should be written to the image header. Any version will do, so we choose 0.

$ newt run nrf52_blinky 0
...
Archiving util_mem.a
Linking /home/me/dev/myproj/bin/targets/nrf52_blinky/app/apps/blinky/blinky.elf
App image succesfully generated: /home/me/dev/myproj/bin/targets/nrf52_blinky/app/apps/blinky/blinky.elf
Loading app image into slot 1
[/home/me/dev/myproj/repos/apache-mynewt-core/hw/bsp/nrf52dk/nrf52dk_debug.sh /home/me/dev/myproj/repos/apache-mynewt-core/hw/bsp/nrf52dk /home/me/dev/myproj/bin/targets/nrf52_blinky/app/apps/blinky]
Debugging /home/me/dev/myproj/bin/targets/nrf52_blinky/app/apps/blinky/blinky.elf

Set Up a Serial Connection

You'll need a serial connection to see the output of your program. You can reference the Using the Serial Port with Mynewt OS Tutorial for more information on setting up your serial communication.

Communicate with the Application

Once you have a connection set up, you can connect to your device, where <N> is a number. On Windows, you must map the port name to a Windows COM port: /dev/ttyS<N> maps to COM<N+1>. For example, /dev/ttyS2 maps to COM3. You can also use the Windows Device Manager to locate the COM port.

To test and make sure that the Shell is running, first just hit return:

004543 shell>

You can try some commands:

003005 shell> help
003137 Available modules:
003137 os
003138 prompt
003138 To select a module, enter 'select <module name>'.
003140 shell> prompt
003827 help
003827 ticks                         shell ticks command
004811 shell> prompt ticks off
005770 Console Ticks off
shell> prompt ticks on
006404 Console Ticks on
006404 shell>
http://mynewt.apache.org/latest/tutorials/blinky/blinky_console.html
Few keywords are as simple yet amazingly powerful as virtual in C# (overridable in VB.NET). When you mark a method as virtual you allow an inheriting class to override the behavior. Without this functionality, inheritance and polymorphism wouldn't be of much use. A simple example, slightly modified from Programming Ruby (ISBN: 978-0-9745140-5-5), has a KaraokeSong override a Song's to_s (ToString) function:

class Song
  def to_s
    return sprintf("Song: %s, %s (%d)", @name, @artist, @duration)
  end
end

class KaraokeSong < Song
  def to_s
    return super + " - " + @lyrics
  end
end

The above code shows how the KaraokeSong is able to build on top of the behavior of its base class. Specialization isn't just about data, it's also about behavior! Even if your Ruby is a little rusty, you might have picked up that the base to_s method isn't marked as virtual. That's because many languages, including Java, make methods virtual by default. This represents a fundamental differing of opinion between the Java language designers and the C#/VB.NET language designers. In C# methods are final by default and developers must explicitly allow overriding (via the virtual keyword). In Java, methods are virtual by default and developers must explicitly disallow overriding (via the final keyword).

Typically virtual methods are discussed with respect to inheritance of domain models. That is, a KaraokeSong which inherits from a Song, or a Dog which inherits from a Pet. That's a very important concept, but it's already well documented and well understood. Therefore, we'll examine virtual methods for a more technical purpose: proxies.

Proxy Domain Pattern

A proxy is something acting as something else. In legal terms, a proxy is someone given authority to vote or act on behalf of someone else. Such a proxy has the same rights and behaves pretty much like the person being proxied. In the hardware world, a proxy server sits between you and a server you're accessing.
The proxy server transparently behaves just like the actual server, but with additional functionality – be it caching, logging or filtering. In software, the proxy design pattern is a class that behaves like another class. For example, if we were building a task tracking system, we might decide to use a proxy to transparently apply authorization on top of a task object:

public class Task
{
    public static Task FindById(int id)
    {
        return TaskRepository.Create().FindById(id);
    }

    public virtual void Delete()
    {
        TaskRepository.Create().Delete(this);
    }
}

public class TaskProxy : Task
{
    public override void Delete()
    {
        if (User.Current.CanDeleteTask())
        {
            base.Delete();
        }
        else
        {
            throw new PermissionException(…);
        }
    }
}

Thanks to polymorphism, FindById can return either a Task or a TaskProxy. The calling client doesn't have to know which was returned – it doesn't even have to know that a TaskProxy exists. It just programs against the Task's public API. Since a proxy is just a subclass that implements additional behavior, you might be wondering if a Dog is a proxy to a Pet. Proxies tend to implement more technical system functions (logging, caching, authorization, remoting, etc) in a transparent way. In other words, you wouldn't declare a variable as TaskProxy – but you'd likely declare a Dog variable. Because of this, a proxy wouldn't add members (since you aren't programming against its API), whereas a Dog might add a Bark method.

Interception

The reason we're exploring a more technical side of inheritance is because two of the tools we've looked at so far, RhinoMocks and NHibernate, make extensive use of proxies – even though you might not have noticed. RhinoMocks uses proxies to support its core record/playback functionality. NHibernate relies on proxies for its optional lazy-loading capabilities. We'll only look at NHibernate, since it's easier to understand what's going on behind the covers, but the same high level pattern applies to RhinoMocks.
(A side note about NHibernate. It's considered a frictionless or transparent O/R mapper because it doesn't require you to modify your domain classes in order to work. However, if you want to enable lazy loading, all members must be virtual. This is still considered frictionless/transparent since you aren't adding NHibernate-specific elements to your classes – such as inheriting from an NHibernate base class or sprinkling NHibernate attributes everywhere.)

Using NHibernate there are two distinct opportunities to leverage lazy loading. The first, and most obvious, is when loading child collections. For example, you may not want to load all of a Model's Upgrades until they are actually needed. Here's what your mapping file might look like:

<class name="Model" table="Models">
  <id name="Id" column="Id" type="int">
    <generator class="native" />
  </id>
  …
  <bag name="Upgrades" table="Upgrades" lazy="true" >
    <key column="ModelId" />
    <one-to-many class="Upgrade" />
  </bag>
</class>

By setting the lazy attribute to true on our bag element, we are telling NHibernate to lazily load the Upgrades collection. NHibernate can easily do this since it returns its own collection types (which all implement standard interfaces, such as IList, so to you, it's transparent).

The second, and far more interesting, usage of lazy loading is for individual domain objects. The general idea is that sometimes you'll want whole objects to be lazily initialized. Why? Well, say that a sale has just been made. Sales are associated with both a sales person and a car model:

Sale sale = new Sale();
sale.SalesPerson = session.Get<SalesPerson>(1);
sale.Model = session.Get<Model>(2);
sale.Price = 25000;
session.Save(sale);

Unfortunately, we've had to go to the database twice to load the appropriate SalesPerson and Model – even though we aren't really using them. The truth is all we need is their ID (since that's what gets inserted into our database), which we already have.
By creating a proxy, NHibernate lets us fully lazy-load an object for just this type of circumstance. The first thing to do is change our mapping and enable lazy loading of both Models and SalesPeople:

<class name="Model" table="Models" lazy="true" proxy="Model">…</class>
<class name="SalesPerson" table="SalesPeople" lazy="true" proxy="SalesPerson">…</class>

The proxy attribute tells NHibernate what type should be proxied. This will either be the actual class you are mapping to, or an interface implemented by the class. Since we are using the actual class as our proxy interface, we need to make sure all members are virtual – if we miss any, NHibernate will throw a helpful exception with a list of non-virtual methods. Now we're good to go:

Sale sale = new Sale();
sale.SalesPerson = session.Load<SalesPerson>(1);
sale.Model = session.Load<Model>(2);
sale.Price = 25000;
session.Save(sale);

Notice that we're using Load instead of Get. The difference between the two is that if you're retrieving a class that supports lazy loading, Load will get the proxy, while Get will get the actual object. With this code in place we're no longer hitting the database just to load IDs. Instead, calling Session.Load<Model>(2) returns a proxy – dynamically generated by NHibernate. The proxy will have an id of 2, since we supplied the value, and all other properties will be uninitialized. Any call to another member of our proxy, such as sale.Model.Name, will be transparently intercepted and the object will be just-in-time loaded from the database.

Just a note: NHibernate's lazy-load behavior can be hard to spot when debugging code in Visual Studio. That's because VS.NET's watch/local/tooltip actually inspects the object, causing the load to happen right away. The best way to examine what's going on is to add a couple of breakpoints around your code and check out the database activity either through NHibernate's log, or SQL Profiler.
Hopefully you can imagine how proxies are used by RhinoMocks for recording, replaying and verifying interactions. When you create a partial you're really creating a proxy to your actual object. This proxy intercepts all calls, and depending on which state you are in, does its own thing. Of course, for this to work, you must either mock an interface, or virtual members of a class.

In This Chapter

In chapter 6 we briefly covered NHibernate's lazy loading capabilities. In this chapter we expanded on that discussion by looking more deeply at the actual implementation. The use of proxies is common enough that you'll not only frequently run into them, but will also likely have good reason to implement some yourself. I still find myself impressed at the rich functionality provided by RhinoMocks and NHibernate thanks to the proxy design pattern. Of course, everything hinges on you allowing them to override or insert their behavior over your classes. Hopefully this chapter will also make you think about which of your methods should and which shouldn't be virtual. I strongly recommend that you take a look at the following articles/posts to better understand the virtual-by-default vs final-by-default points of view:

- Anders Hejlsberg's explanation of why C# is final by default
- Two posts by Eric Gunnerson from Microsoft about the topic (make sure to read the comments!) – post 1 and post 2
- And, representing the virtual-by-default camp, Michael Feathers' point of view

@Fregas, It's a limitation/feature of .NET itself, and nothing to do with reflection. The dynamic proxy is creating a subclass of the concrete type on the fly. Any member that is marked as lazy loaded has to be virtual so that the dynamic proxy class can override that property to set up the lazy loading.

Karl, why is virtual necessary for NHibernate? I assume that they are using reflection to create a proxy that inherits from the real class. Is this some limitation of reflection?
craig

Hey, thanks for the articles, much appreciated! Just a note: the links to the posts by Eric Gunnerson are the same, so we only got one post.
http://codebetter.com/karlseguin/2008/06/18/foundations-of-programming-pt-9-proxy-this-and-proxy-that/
Vector of logical diffs describing changes to a JSON column.

#include <json_diff.h>

Vector of logical diffs describing changes to a JSON column.

Type of the allocator for the underlying vector.
Type of iterator over the underlying vector.
Type of iterator over the underlying vector.
Type of the underlying vector.

Constructor.

Append a new diff at the end of this vector when operation == REMOVE.
Append a new diff at the end of this vector.
Return the element at the given position.

Return the length of the binary representation of this Json_diff_vector. The binary format has this form:

+--------+--------+--------+     +--------+
| length | diff_1 | diff_2 | ... | diff_N |
+--------+--------+--------+     +--------+

This function returns the length of only the diffs if include_metadata == false. It returns the length of the 'length' field plus the length of the diffs if include_metadata == true. The value of the 'length' field is exactly the return value from this function when include_metadata == false.

Clear the vector.
De-serialize Json_diff objects from the given String into this Json_diff_vector.
Return the number of elements in the vector.
Serialize this Json_diff_vector into the given String.
An empty diff vector (having no diffs).
The length of the field where the total length is encoded.
Length in bytes of the binary representation, not counting the 4-byte length field.
https://dev.mysql.com/doc/dev/mysql-server/latest/classJson__diff__vector.html
CC-MAIN-2022-27
refinedweb
208
61.63
Pandas Data Series: Find the index of the first occurrence of the smallest and largest value of a given series

Pandas: Data Series Exercise-39 with Solution

Write a Pandas program to find the index of the first occurrence of the smallest and largest value of a given series.

Sample Solution:

Python Code:

import pandas as pd

nums = pd.Series([1, 3, 7, 12, 88, 23, 3, 1, 9, 0])
print("Original Series:")
print(nums)
print("Index of the first occurrence of the smallest and largest value of the said series:")
print(nums.idxmin())
print(nums.idxmax())

Sample Output:

Original Series:
0     1
1     3
2     7
3    12
4    88
5    23
6     3
7     1
8     9
9     0
dtype: int64
Index of the first occurrence of the smallest and largest value of the said series:
9
4

Previous: Write a Pandas program to check the equality of two given series.
Next: Write a Pandas program to check inequality over the index axis of a given dataframe.
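The "first occurrence" wording matters when the extreme value is repeated. A quick check using nothing beyond the exercise's own import:

```python
import pandas as pd

# Both the smallest value (1) and the largest value (9) appear twice;
# idxmin/idxmax report the index label of the first occurrence only.
s = pd.Series([9, 1, 3, 1, 9])
print(s.idxmin())  # 1, not 3
print(s.idxmax())  # 0, not 4
```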
https://www.w3resource.com/python-exercises/pandas/python-pandas-data-series-exercise-39.php
CC-MAIN-2021-21
refinedweb
186
54.46
int memcmp( void *s1, void *s2, size_t count )
void *s1;      /* address of first memory buffer  */
void *s2;      /* address of second memory buffer */
size_t count;  /* size in bytes of comparison     */

Synopsis

#include "string.h"

The memcmp function compares the count bytes starting at s1 with the count bytes beginning at s2. Parameters s1 and s2 are addresses of memory buffers at least count bytes in length.

Return Value

If the bytes at s1 are lexicographically less than those at s2, then memcmp returns a negative number; if the bytes at s1 are lexicographically greater than those at s2, then memcmp returns a positive number; otherwise they are equal, and memcmp returns 0.

See Also

strcmp
http://silverscreen.com/idh_silverc_compiler_reference_memcmp.htm
CC-MAIN-2021-21
refinedweb
116
56.59
Adventures in .NET

As (a very small) some of you may know, I have been developing a dynamic UI rendering control for SQL Server Reporting Services in ASP .NET. Given a report, the control will obtain a list of report parameters, and dynamically render UI and validation elements based on datatype and other properties of Report Parameters. This control basically functions like a table, where each cell contains all the UI bits for each parameter. Each collection of UI bits (a Value webcontrol, a Required Field validator, a Datatype validator, and a Set-Value-To-Null checkbox) is itself wrapped up into a self-contained server control that loads itself from a parameter. It's called RSValidatedInputControl. RSValidatedInputControl inherits from another server control I created called ValidatedInputControl.

For the most part, this system of objects works beautifully. However, I've noticed something very strange. If ValidatedInputControl has a list of valid values, it renders a dropdownlist as its primary value control. Since it's a dropdownlist, required field and datatype validation are unnecessary, so those two controls are disabled. The ValidValues list is stored as a ListItemCollection internal to the control. The aspx markup is as follows:

<sdp:validatedinputcontrol id="ValidatedInputControl1">
  <asp:ListItem>Text1</asp:ListItem>
  <asp:ListItem>Text2</asp:ListItem>
</sdp:validatedinputcontrol>

I have a page containing only this control and a button. When I click the button, I test Page.IsValid to determine if the validation has passed. Since all the validators are disabled by default, this should return "True." It doesn't. I checked the validator controls--they are disabled. They are also marked as valid. Why would the page be marked as invalid? I've even stepped through the built-in javascript provided with the validator controls.

25 points to the first person with the correct answer (that was an obvious bid to encourage others to help me answer this question : ) ).
The only thing that I can find that might be the start of an explanation is this. When I look at the aspx markup in HTML view, I get a tooltip to the effect that the page parser doesn't like the <asp: listitem> tags. “The active schema does not support the element '<asp:listitem>'”. Whaddya' think? The only thing I notice is that the control is prefixed as <sdp: and the listitems are pre-fixed as <asp: . I take it this is a custom control where you are inheriting from another control. I haven't tested, but shouldn't the listitems be prefaced by <sdp: ? That's possible. It isn't an inherited control actually--it's a composite. Normally, it just validates what the user enters into a textbox for required field and datatype--but if the developer enters a list of items instead, it renders a dropdown. Since the ListItemCollection already exists for the dropdown, I thought it'd be fairly simple to just create one for my control, and populate the dropdown list with it if necessary. It didn't occur to me that there might be namespace issues. I may have to create an SDPListItemCollection instead. I'll try your suggestion first thing in the morning. Whether it works or not, thanks for the advice. Chris You've been Taken Out! Thanks for the good post.
http://weblogs.asp.net/taganov/archive/2004/03/09/86846.aspx
crawl-002
refinedweb
544
56.76
An abstract representation of a 2D image pyramid.

#include <json_diff.h>

2D image pyramid containers are created by calling vpiPyramidCreate to allocate and initialize an empty (zeroed) VPIPyramid object. The memory for the image pyramid data is allocated and managed by VPI. Image formats match the ones supported by the image container. The pyramid is not necessarily dyadic; the scale between levels is defined in the constructor. Parameters such as levels, scale, width, height and image format are immutable and specified at construction time. The internal memory layout is also backend-specific. More importantly, efficient exchange of image pyramid.

The set of vpiPyramidLock / vpiPyramidUnlock calls allows the user to read from/write to the image data from the host. These functions are non-blocking and oblivious to the device command queue, so it's up to the user to make sure that all pending operations using this image pyramid as input or output are finished. Also, depending on the enabled backends, the lock/unlock operation might be time-consuming and, for example, involve copying data over the PCIe bus for dGPUs.

Stores the pyramid contents. Every level is represented by an entire VPIImageData. There are numLevels levels, and they can be accessed from levels[0] to levels[numLevels-1]. Definition at line 110 of file Pyramid.h.

#include <vpi/Pyramid.h> Creates an image that wraps one pyramid level. The created image doesn't own its contents. Destroying the pyramid while there are images wrapping its levels leads to undefined behavior. If the image wraps the base pyramid level, locking the pyramid will also lock the image. Once the image isn't needed anymore, call vpiImageDestroy to free resources.

#include <vpi/Pyramid.h> Create an empty image pyramid instance with the specified flags. Pyramid data is zeroed.

#include <vpi/Pyramid.h> Destroy an image pyramid instance as well as all resources it owns.
The remaining accessors are all declared in <vpi/Pyramid.h>:

- Returns the flags associated with the pyramid.
- Returns the image format of the pyramid levels.
- Get the image pyramid level count.
- Get the image width and height in pixels (for all levels at once).
- Acquires the lock on a pyramid object and returns pointers to each level of the pyramid. Depending on the internal image representation, as well as the actual location in memory, this function might have a significant performance overhead (format conversion, layout conversion, device-to-host memory copy).
- Releases the lock on an image pyramid object. This function might have a significant performance overhead (format conversion, layout conversion, host-to-device memory copy).
https://docs.nvidia.com/vpi/group__VPI__Pyramid.html
CC-MAIN-2021-21
refinedweb
446
50.33
Now it's time to use the ATL object you just created. For this step, you are going to create a simple dialog in an MFC project. With your ATLAttributes project still open, click New, New Project from the File menu. Select the Visual C++ project type and click the MFC Application template on the right side. Give your project the name MFCClient. Before you close this dialog, click the Add to Solution radio button underneath the Location field. Close the dialog by clicking OK. In the MFC Application Wizard dialog that is displayed, click the Application Type option on the left and select the Dialog radio button for the Application Type setting. Leave the other settings as they are and click Finish to create the project.

For this project, you will use the static text field already provided for you. First of all, click the static text field and set its ID property to IDC_OUTPUT using the Property Viewer. Next, remove the two buttons and add a different button. Set the new button's Caption property to Hello and its ID property to IDC_HELLO using the Property Viewer. Arrange the controls in the dialog and resize the dialog so that it looks similar to Figure 15.5.

To set up a message handler for the button, double-click it to create the OnBnClickedHello function. This function will first create the ATL object you created earlier. It will then ask for the interface, IAttributeTest, that contains the HelloWorld interface method. It will then finish by calling that method and displaying the results in the static text box. The OnBnClickedHello function appears in Listing 15.7.
void CMFCClientDlg::OnBnClickedHello()
{
    CComPtr<IUnknown> spUnknown;
    HRESULT hr = spUnknown.CoCreateInstance(__uuidof(CAttributeTest));
    CComPtr<IAttributeTest> pI;
    spUnknown.QueryInterface(&pI);

    TCHAR sTest[ 256 ];
    pI->HelloWorld( sTest );

    SetDlgItemText( IDC_OUTPUT, sTest );
}

Upon close examination of the OnBnClickedHello function, you may have noticed the use of the data types CAttributeTest and IAttributeTest without having specified where these declarations come from. At the top of the same file, you will need to import the type library that is created from your ATL project. Add the following code snippet to the top of this file after the initial preprocessing instructions:

#import "..\ATLAttributes\_ATLAttributes.tlb" no_namespace
using namespace std;

The last step for your MFC client is to initialize the COM library so that your application doesn't fail when trying to create COM objects. In the OnInitDialog function in the MFCClientDlg.cpp file, add the following code snippet:

CoInitialize(NULL);

You can now build your solution and run MFCClient.exe. Clicking the Hello button should then call the HelloWorld function in the IAttributeTest interface and set the static text box accordingly, as shown in Figure 15.6.
https://flylib.com/books/en/3.121.1.126/1/
CC-MAIN-2019-39
refinedweb
465
63.8
Hi,

It's all been working well with quasar dev. I have done a quasar build and fired up an HTTP server for my backend api (feathers) and one for the dist folder, and that was working fine. Now, to secure things, I started running an HTTPS server for both the backend api and my quasar app. The issue is that now the quasar app can not connect to the backend api (my other nodejs clients don't have an issue, and visiting the backend url in a browser gives me a "secure https page" backend-alive reply).

Is there something else I need to set in the webpack production stuff or elsewhere? I just changed the proxy table and my feathers client api.js. Am I missing something here about production vs dev? What do I need to do with the settings for quasar build in order to talk to my api backend correctly?

my config/index.js

var path = require('path')

module.exports = {
  // Webpack aliases
  aliases: {
    quasar: path.resolve(__dirname, '../node_modules/quasar-framework/'),
    src: path.resolve(__dirname, '../src'),
    assets: path.resolve(__dirname, '../src/assets'),
    '@': path.resolve(__dirname, '../src/components'),
    variables: path.resolve(__dirname, '../src/themes/quasar.variables.styl')
  },
  // Progress Bar Webpack plugin format
  // progressFormat: ' [:bar] ' + ':percent'.bold + ' (:msg)',
  // Default theme to build with ('ios' or 'mat')
  defaultTheme: 'mat',
  build: {
    env: require('./prod.env'),
    publicPath: '',
    productionSourceMap: false,
    // Remove unused CSS
    // Disable it if it has side-effects for your specific app
    purifyCSS: true
  },
  dev: {
    env: require('./dev.env'),
    cssSourceMap: true,
    // auto open browser or not
    openBrowser: true,
    publicPath: '/',
    port: 8090,
    // If for example you are using Quasar Play
    // to generate a QR code then on each dev (re)compilation
    // you need to avoid clearing out the console, so set this
    // to "false", otherwise you can set it to "true" to always
    // have only the messages regarding your last (re)compilation.
    clearConsoleOnRebuild: false,
    // Proxy your API if using any.
    // Also see /build/script.dev.js and search for "proxy api requests"
    proxyTable: {
      '/api': {
        target: '', // on switches rpi
        changeOrigin: true
      }
    }
  }
}

my feathers client api.js

import feathers from 'feathers'
import hooks from 'feathers-hooks'
import socketio from 'feathers-socketio'
import auth from 'feathers-authentication-client'
import io from 'socket.io-client'

const socket = io('', { transports: ['websocket'] })

const api = feathers()
  .configure(hooks())
  .configure(socketio(socket))
  .configure(auth({ storage: window.localStorage }))

export default api

Ok, so I didn't do anything more than I have listed above, and it's now fine and then not. Looks like maybe the app gets ahead of the connection, as in the connection now takes a bit longer to establish with the ssl handshake and the app is asking for data before the connection has been made. I can't see where in the code the api proxy requests a connection. If I could, I could async delay asking for data until I know the connection has been established. At the least I could set a timeout. Where is the code that uses the proxyTable key in config/index.js?
https://forum.quasar-framework.org/topic/1148/can-t-connect-to-api-backend-socket-after-build-using-https
CC-MAIN-2021-39
refinedweb
495
63.39