Hello,
we are in the process of writing functional tests for our REST API. The authentication type we use is Bearer Authentication. The token is obtained from a dedicated "/login" endpoint, which must be called with a username and password; the token is then returned in the response payload.
A typical setup for a test would therefore look like this:
1. Make the login request.
Request:
POST /login
{
  "user": "admin",
  "password": "password"
}
Response:
{ "token": "asdf123fasdf123" }
2. Parse the token out of the response.
3. Inject the token into the Authorization header:
Authorization: Bearer <token>
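For illustration, steps 2 and 3 can be sketched in Python. The endpoint, payload, and token value follow the example above; the actual HTTP call for step 1 is omitted and simulated with a canned response:

```python
import json

def extract_token(response_body: str) -> str:
    # Step 2: parse the token out of the /login response payload.
    return json.loads(response_body)["token"]

def bearer_header(token: str) -> dict:
    # Step 3: build the auth header for subsequent requests.
    return {"Authorization": "Bearer " + token}

# Simulated response from step 1 (POST /login with user/password).
login_response = '{"token": "asdf123fasdf123"}'

headers = bearer_header(extract_token(login_response))
print(headers["Authorization"])  # → Bearer asdf123fasdf123
```

In a real test, `login_response` would come from the POST /login call, and `headers` would be merged into every authenticated request.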
For now our test setup for the test cases which need authentication looks like this:
This is probably not the best way to do it. So, is there any way to use the built-in Auth Manager? So far we haven't found a way to make a login request with a username and password.
Thanks in advance!
@sorkfa
1) We can run a "Helper" test case from a test-suite-level setup script, and
2) inside the helper test case, add a property transfer that stores the Bearer token in the test suite's custom properties, so we can utilize it across the test suite (in all test cases).
Note: keep the Helper test case disabled, so it won't run twice when you run all test cases from the test suite level.
Setup script at the test suite level:
def testCase = testSuite.testCases["Helper"]
def prop = new com.eviware.soapui.support.types.StringToObjectMap()
runner = testCase.run(prop, true)
Okay, yes, this is also the way we are doing it right now. Still, I am wondering whether I can use the Auth Manager.
Win32::GUI - Perl Win32 Graphical User Interface Extension
use Win32::GUI();
What Win32::GUI is, and how to get it installed.
Release notes. Essential reading for old hands who want to know what has changed between versions.
A short welcome.
An introduction to the basic Windows GUI concepts and how they relate to Win32::GUI.
A Tutorial. Essential reading for beginners: Creating your first Win32::GUI window and all the basics that you will need.
A (currently somewhat out of date) set of Frequently asked questions and answers.
Per package documentation for Win32::GUI.
All the events that are common to every window.
All the methods in the Win32::GUI package (and inherited by the other packages).
Options common to most package constructors.
There is a set of sample applications installed with Win32::GUI, which should be found in the .../Win32/GUI/demos/ directory under your library installation root (by default, for ActiveState Perl, this is C:/Perl/site/lib/Win32/GUI/demos). There is a viewer/launcher application installed as well; type win32-gui-demos at a command prompt.
Releases of Win32::GUI up to and including v1.03 export a large list of constants into the caller's namespace by default. This behaviour changed in releases from v1.04 onwards.
From v1.04, support for constants is provided by Win32::GUI::Constants, and the preferred way of getting constants exported into your namespace is:
use Win32::GUI();                    # Empty export list to prevent default exports
use Win32::GUI::Constants qw( ... ); # explicitly list wanted constants
although, for backwards compatibility the following are also supported:
use Win32::GUI;
will continue to export the same list of constants as earlier versions, but will generate a warning (under the use warnings; pragma or with the -w command-line option to perl). In the future (v1__
by Krishna Srinivasan
11/02/2007. In my next article I will write about the advanced features of script programming in Java. Follow these steps to run your first script program in Java 6.0. First, save the following function in a file named helloworld.js:
function testMessage(msg){
print("Printing : " + msg);
}
The new package javax.script was added in Java 6.0. It provides the APIs needed to work with script engines. Sun's implementation of JDK 6 is co-bundled with the Mozilla Rhino based JavaScript script engine.
import javax.script.*;

class HelloWorld {
    public static void main(String args[]) throws Exception {
        ScriptEngineManager factory = new ScriptEngineManager();
        ScriptEngine engine = factory.getEngineByName("JavaScript");
        engine.eval(new java.io.FileReader("helloworld.js"));
        Invocable inv = (Invocable) engine;
        inv.invokeFunction("testMessage", "Hello World!!!");
    }
}
Type at the command prompt:
>> javac HelloWorld.java
>> java HelloWorld
The output will be:
Printing : Hello World!!! | http://www.javabeat.net/javabeat/java6/articles/scripting_in_java_6_0_part1.php | crawl-002 | refinedweb | 135 | 53.47 |
C++ Programming
C++ programs to add two numbers
In this C++ programming article we will look at some C++ programs that add two numbers. First we will take two integers as input from the user and then print the result of their addition. We can also subtract one number from another using the same logic. See our addition and subtraction in C article to learn how these operations work in C.
Now, let's go ahead with the code to add the two numbers given by the user.
Add two numbers program in C++
In the program below we will take two integers from the user and add them. After printing the result of the addition, we will also subtract them and print that result.
// C++ program to add and subtract two numbers
#include <iostream>
using namespace std;

int main() {
    int f, s, result, sub;
    cout << "Enter first integer here : ";
    cin >> f;
    cout << "Enter second integer here : ";
    cin >> s;
    result = f + s;
    cout << "\nSum of " << f << " and " << s << " is = " << result << endl;
    sub = f - s;
    cout << "\nSubtraction of " << f << " and " << s << " is = " << sub << endl;
    return 0;
}
Output of add numbers program
Add two numbers using a class in C++
The program below will add and subtract the given numbers using a class named class_to_add. See the program code below.
// C++ program to add two numbers using a class
#include <iostream>
using namespace std;

class class_to_add { // creating a class to add numbers
    int f, s;
public:
    void take_input() {
        cout << "Enter two integers here to add them : \n";
        cin >> f >> s;
    }
    void funct_to_add() {
        cout << "\nSum is = " << f + s << endl;
        cout << "Subtraction is = " << f - s << endl;
    }
};

int main() {
    class_to_add x; // Creating an object here
    x.take_input();
    x.funct_to_add();
    return 0;
}
Output of this program:
Enter two integers here to add them :
45 25
Sum is = 70
Subtraction is = 20
Qt for Python Development Notes 2018
From Qt Wiki
20. July 2017
- coin
  - license test passing
  - setup issues with Python 3, and msvc 2008 vs 2015 issues => need to install newer Python 3 binaries => otherwise it looks like enforced testing can be enabled on the 5.6 branch
  - 5.9 still having clang header issues
- 2D array support work in progress
- PYSIDE-331 -> waiting for PYSIDE-510 dependency
- PYSIDE-510 -> issues with Python 2.x remain
- need to get an overview of which functions are not supported by pyside due to exclusions or missing support in shiboken
13. July 2017
- coin
  - Production coin was updated today and all fixes are finally executed
  - 5.6
    - macOS 10.11 machines have Python 2.6, which causes the test script to fail due to its use of a module added in 2.7. Investigation in progress on the best way to fix this.
    - build failure with Windows 8.1 (msvc2013-x86)
  - 5.9
    - macOS 10.12 missing virtual env ()
    - the rest of the Linux/macOS configs are failing with various build issues
  - dev is still pending the qt5 merge from 5.9
- PYSIDE-510 development completed, test added, needs review
- PYSIDE-331 ongoing work
- PYSIDE-550 investigated; needs a working solution to allow proper building on distros that separate Qt private headers into separate packages
- PYSIDE-354 ongoing work on C/C++ array support in PySide; some questions remain on how to deal with the XML syntax and multiple dimensions
29. June 2017
- coin
  - Qt 5.6: all coin changes for Pyside are done but waiting for coin production update (after the Qt 5.9.1 release)
  - Pyside/Qt 5.6 on new OpenNebula infrastructure confirmed to run
  - Qt 5.9 based setup still misses clang libs (merge pending)
  - Pyside/Qt 5.9 test failure fixes on macOS
- worked on array support (passing arrays between Pyside and Qt API) - work in progress (PYSIDE-354)
- continued work on debug builds across the platforms
- embedded Python example under review (further testing across the platforms)
- PYSIDE-510 finished (not yet merged) -> revealed a few more problems in PYSIDE-331 & PYSIDE-308
- added a way to dynamically detect the available Qt modules in Pyside
22. June 2017
- building pyside on Win with debug and non-debug
- QSSL* classes do not build on macOS & Windows (patch in progress)
- dll name handling work finished (build system issues were addressed)
- PYSIDE-531 discussions
- coin
  - followup from 18 May (windows issues in coin) - test runner changes being merged should address this - new regression on macOS targets though
  - 5.9 issues due to clang continue to exist (will be addressed after 5.6 is enforced)
15. June 2017
- dll name handling fixed across the platforms (e.g. pkgconfig handling)
- PYSIDE-500 fixed
- PYSIDE-331 is still not done (in parts a module-by-module fix is required)
- PYSIDE-510 ongoing
- no further progress on coin
- Getting started guide has been updated, dealing with Qt 5.6 and Qt 5.9 based Pyside builds
- scriptable Pyside example has been developed to demonstrate SDK-like usage of Pyside
- lots of small infrastructure changes/fixes
8. June 2017
- Proposed solution for PYSIDE-500
- Further work on example for PYSIDE-526, work on shiboken2 command line handling to ease usage with qmake
- Updated Getting started Wiki
- Introspection work continued (PYSIDE-510)
1. June 2017
- PYSIDE-500 - needs a serious cleanup - 5.6 vs 5.9 differences not explained - specify exact version matching conditions
- pyside repo merge completed - Coin changes required to catch up -> recent CI issues have caused delay
- Introspection work continued (PYSIDE-510) -> not yet complete
- qtcharts and related examples have been ported (5.9 only)
- license header checks of Pyside for Coin able to distinguish branches
- Array support for Pyside missing (numpy_array support?)
18. May 2017
- Coin
  - Win nodes didn't run the cleanup functions -> test failures due to artifacts -> no solution yet
  - Clang issues persist and this must be addressed in the broader scope of Qt/qdoc depending on libclang too -> until this point in time the setup is a manual one for pyside/5.9+
- merge of pyside repositories -> no objections against the plan in the community -> merge will happen in the next few days (pyside.git, pyside-setup.git and shiboken.git become one repo) -> examples, wiki and tools will remain as they are
- QOpenGL* porting continuing
- qttext2speech ported to Pyside2
- failing QML tests in Qt 5.9 based Pyside
- PYSIDE-510 - introspection solution in the works
- Qt World Summit Berlin 2017 (talk for Pyside being handed in)
12. May 2017
- looked into the Windows test failures; the reason might be related to how we copy around build artifacts, leading to an incorrect folder structure. Coin team to look at it.
- Fixed a small regression in 5.9 with building on macOS after some cmake changes that happened in 5.6
- Implemented, tested, reviewed and committed the multimedia widgets examples
- documentation from Pyside1 days cannot be generated anymore -> new approach required, using existing Qt5 doc tools
- QOpenGL* port to Pyside has started -> missing support for arrays in shiboken - more thorough design discussion needed
- Coin - provisioning for clang in Qt CI ongoing:
04. May 2017
- Coin
  - libclang 4.0 provisioning available (code reviews ongoing -> enabler for Pyside and qdoc testing)
  - 5.6 coin issues resolved
  - macOS related CI/cmake failures were resolved
  - some CI sync issues among pyside submodules (always the submodules' HEAD is tested) => may have to change once we come closer to the final release => fine as long as we are in heavy development mode
  - discussion around Ubuntu and OpenSuse configurations
    - minimal cmake requirements not met on all distros
    - 5.6 can only be built using RedHat 6.6
    - 5.9 builds have updated Linux distros -> may be able to obtain a new cmake version without hacking the standard toolchain
- QtMultimedia and QRaster* API bindings further completed
- added tests and examples for the above bindings/APIs
- PYSIDE-510 (required for simplified and more generic testing of bindings)
- PYSIDE-504 work is ongoing
- PYSIDE-507 -> avoid hardcoded includes when certain Qt modules are not available
- brief discussion whether to merge the shiboken and pyside repos -> advantages from a CI perspective, and reduces pyside setup complexity
26. April 2017
- Coin
  - provisioning for 5.9+ branches -> we would like to use libclang 4.0
  - test still not yet executed due to branch mismatch which pulls in dev branches
- missing bindings in QtCore/QtGui/QtMultimedia - work in progress
- clang support done (except for a Mac issue) - the Pyside clang version requires Qt 5.7 or later
- PYSIDE-500 done - work on PYSIDE-500, 502, 504, 497 will continue
- qdoc work started, trying to recover whatever documentation is left from Pyside 1.x
20. April 2017
- Fix for PYSIDE-331 broke the Windows build; under investigation
- Refactoring, fixed build warnings
- Looked at debug builds on Windows
13. April 2017
- discussion around merge policies between the 5.9 and 5.6 branches
- Linux debug library issues fixed
- PYSIDE-488 fixed via workaround
- fixed a unit test
- worked on qdoc/html generation in Pyside (failed so far - waiting for qdoc maintainer feedback)
- PYSIDE-331 patches merged -> more bugs found in the meantime
6. April 2017
- 5.6 branch created
- 5.9 branch to be created as well (easier setup for Qt 5.9 testing in Coin)
- Coin
  - not-passing build platforms have been disabled
  - otherwise COIN passing on 5.6 (except for 2 failing tests) -> soon to be enforced
  - license checker passing in dev
- clang changes merged to dev branch (after reviews)
- clang provisioning on Coin still missing
- for now shiboken is not a generic C++ bindings generator (targeting Qt use cases only)
- TODO: enabling the generation of documentation from the repos
- fixed mixed usage of debug and release builds (no debug builds for windows)
- [PYSIDE-331] work continuing - fixed, tests are missing
30. March 2017
- Coin
  - some platforms passing and will be enabled by default
  - MinGW, some OSX and cross compile targets remain out
- will branch pyside dev branches to 5.6, new dev becomes clang branch
- regular merges from 5.6 -> dev will start happening
- Pyside 5.6 branch will continue to work against Qt 5.6
- need reviews for PYSIDE-323 to merge clang changes
- Qt 5.10, 5.9 (working on the main desktop platforms)
- for now the Pyside dev development will be based on Qt 5.9 (to be bumped up later on)
- [PYSIDE-331] - in progress
- [PYSIDE-156] -
23. March 2017
- Coin - progressing
- More tests fixed
- wip/clang branch created, will receive Clang parser port with instructions
- Refactoring of shiboken, replaced QtXml classes by QXmlStreamReader allowing for stricter error checking
16. March 2017
- Coin - the fixes are still integrating or are under review (no further progress until this is done)
- clang parser replacement PYSIDE-323
  - most test failures related to clang are fixed
  - tests ran pretty much through on Qt 5.9 (Windows and Linux - Mac not yet verified)
  - code cleanups in shiboken
  - will create feature branch on all pyside repos to get the clang patch series under CI control
  - merges from regular pyside branches into clang branch will commence
  - readme required that explains how clang is to be built (if not provided by platform)
- [PYSIDE-331] - in progress
09. March 2017
- Coin
  - lots of changes in Gerrit for Pyside and coin - the outcome has to be checked once everything is merged
  - upcoming clang dependencies in shiboken introduce new requirements for Coin
  - clang changes to be dealt with after pyside branching
- clang
  - first complete bindings generation with Qt 5.6 and Qt 5.9
  - some failing unit tests which have to be looked at individually
  - actual merge depends on pyside branching, which in turn depends on recent Coin changes
- smart pointer support
  - patches generally done, gerrit review ongoing - rather large patch
- created list of missing bindings in Pyside (Missing Bindings)
  - mostly class level view; global functions et al. not covered
- Jira cleanup
  - PYSIDE-464, PYSIDE-217, PYSIDE-224 fixed
  - most bugs reviewed now and ~100 valid bugs were identified
02. March 2017
- Coin
  - Coin runs tests on Windows and OSX (some failures in the tests - to be investigated)
  - Windows provisioning somewhat blocked due to CI issues
  - no progress on Linux
  - OSX 10.8 ran tests with expected results
  - OSX 10.9 has issues due to Pyside not supporting namespace builds of Qt
- clang
  - preprocessor has a remaining issue -> with the clang compiler, but gcc seems OK -> QtCore wrapper close to compiling (QHash issues remaining)
- smart pointer work continues - some use cases work, other use cases might remain
- PYSIDE-364 was fixed (patch pending)
- General error review on bugreports.qt.io - 50% done (prioritized bugs are handled/checked bugs - "done" bugs)
- ran address and leak sanitizers over Pyside, which resulted in some worrying cases (need to be addressed going forward)
22. February 2017
- PYSIDE-462 Improved solution approved
- PYSIDE-205 Hard-to-detect memory leak plugged
  - slow memory leak found in shiboken with thousands of false positives
  - used differential analysis of valgrind output
- Clang: Fixed Preprocessor handling (set defines, include paths), got minimal binding tests to pass, now adapting MetaBuilder and PySide to what Clang finds when parsing Qt. Good news: Very little need to set Q_DOC or other magic defines
- Coin: Slow progress, currently struggling with provisioning
- extensive bug triaging ongoing in Pyside Jira project
16. February 2017
- Coin (no update - other priorities in the release/CI team)
- Clang (progressing on pre-processor work)
- PYSIDE-315 closed
- PYSIDE-462 more review needed
- Shared pointer API very slowly progressing (lots of different fixes needed)
- QMimeDatabase support added
- QUrlQuery support blocked on QDOC defines in Qt sources (shiboken sets the wrong define)
- chasing memory leaks (PYSIDE-205)
- extensive bug triaging ongoing in Pyside Jira project
  - confirming and testing the reported bugs (70 of 200 processed)
  - some smaller bugs were fixed in the process
- quite a few bugs were discovered this week while working on the above items, e.g.:
  - deployment of pyside applications partly broken due to hard-coded install paths in Qt libs
  - Qt events are swallowed when a Pyside error/exception occurs
9. February 2017
- Clang progressed
  - pure C++ tests for API extraction are passing (C++ parsing based on clang)
  - even works with cmake
  - old C++ preprocessor still to be replaced
  - working through the old preprocessor and checking what magic it did and how clang might be able to replace it
- PYSIDE-315 & PYSIDE-462 fixes pending for review
- COIN
  - continued progress on getting more platforms to pass (unblocking one step at a time)
  - Windows (32bit) and mac are building, Win 64bit still not building
  - OSX and Windows are stopping in the test runner phase
  - missing cmake update still on Linux (provisioning update required)
26 January 2017
- PYSIDE-315 - fix merged in principle, but some minor improvements still pending
- PYSIDE-462 - essentially a feature request, as C++ default parameters are not yet supported by Pyside - discussion ongoing on how to address the problem
- COIN
  - 10.11, 10.9 & 10.8 builds pass - 10.10 still failing during build
  - Progress on Windows, running into new problems further down the path
  - Redhat failed due to general brokenness of the platform in CI
  - Ubuntu no further progress
- clang parser work progressing
  - completeness of the work is currently measured by passing unit tests - there is a long way to go
  - see PYSIDE-323 and associated patches for progress monitoring
- Fixed regression which prevented Pyside from compiling with Qt 5.5.1
- shared pointer support - no update until 2nd February 2017 (next week)
19 January 2017
- PYSIDE-315
  - a possible fix is pending; some minor performance improvements are still possible
  - caused by different signal/slot ordering in Qt4 and Qt 5
- COIN
  - issues on the 10.8 and 10.9 platforms should pass now - 10.11 & 10.10 still have issues
  - Windows timeout problem fixed in COIN (but no COIN update until 5.8.0 is released)
  - Linux builds are failing
    - Ubuntu 14.04 fails due to a cmake issue
    - Redhat 6.6 & OpenSuse 13.01 deferred
  - current COIN freeze for the 5.8.0 release is affecting patching of COIN for Pyside
  - next steps in priority -> run tests on 10.8/10.9 and get Linux running
- PYSIDE-79 regression fixed as well (some interaction between PYSIDE-315 & PYSIDE-79)
- PYSIDE-462 to be looked at next
- clang parser work progressing
12 January 2017
- PYSIDE-315
  - sorting of slot/signal connections changed on the Qt side and the Pyside side has not caught up
  - had similar connection issues on the QML side; need to investigate whether there is a connection
- PYSIDE-79 - caused a regression (not yet investigated)
- fixed OpenGL types not being recognized on MacOS (partly fixed)
- APIs with shared/smart pointers in Qt don't work - has potential long term effects, and investigation has started into the reasons
- Refactoring shiboken in preparation for clang
- COIN currently runs tests with namespaced Qt
  - short term fix to exclude namespaced Qt builds with Pyside (broken on MacOS 10.9)
  - other failures: missing libraries on MacOS (10.11), missing provisioning on Windows, cmake issues on 10.10
  - unknown state on Linux (current Redhat too old)
- Prioritization after status round:
  - PYSIDE-315 to be investigated based on recent signal/slot patches for QML
  - Smart pointer issues reduced in priority to provide space for PYSIDE-315 & the PYSIDE-79 regression
  - verify that COIN runs the testrunner (not just building Pyside) -> not yet verified since we are still failing builds in COIN
5 January 2017
- COIN update
  - COIN changes merged (no further patches pending)
  - need to run an integration test - issues related to different build platforms still to be expected
- Continue with clang
  - backtracking a bit (reusing some node APIs during parsing but otherwise using clang to populate the tree)
- test blacklisting reviewed (some removed - mostly signal related ones, one new regression)
- regressions in QtQuick were worked around (caused by recent Qt Quick changes)
- issues with macOS framework-style includes in Qt
- PYSIDE-315 debugging ongoing, very hard to track down
  - was it a regression from a previous Pyside release (e.g. Qt 5.4)?
  - hard-copied Qt 5.4 based headers still in existence - update needed, but it would shut Qt 5.4 users out
2016
22 December 2016
- lots of discussions around the COIN patches
- source archive setup under review
- Qt 5.6 provisioning patches merged
- CI uses Python 2.7 at this stage
- clang C++ parsing continues
- fixed some Pyside unit tests (now have a clean slate again)
- finished QtQuick port
- some overflow problems have been fixed in shiboken
15 December 2016
- COIN changes pending for testing infrastructure -> (Pyside change) -> (Coin change) -> (Provisioning changes) -> (Provisioning changes)
- waiting for the COIN development team to review/accept the pending changes
- no update on PYSIDE-315 (under investigation)
- Flushed out a couple of shiboken, QML, and QML example bugs
- Some bugs related to the parser delayed until the clang parser task is done ()
- most basic shiboken API extractor test passes (global enumeration test)
- some trouble with int size data types -> Python 2 & 3 are different and the relevant C++ data types are yet again different from platform to platform
8 December 2016
- COIN patches pending approval
  - Windows provisioning reviewed
  - Linux, Pyside, COIN itself
- Clang changes progressing, comparing AST tree from old parser and clang
- PYSIDE-79 done
- PYSIDE-315 under investigation
- QtQuick patches taking shape (some template magic and function pointer features in Qt cannot be parsed by shiboken)
1 December 2016
- COIN
  - PYSIDE-79: there seems to be a final workaround () - fixing a few tests in the process
- clang update - dumping the AST, identifying the required info - work in progress...
- problems with global static QObjects on the Qt side calling back into Python during app exit - may require some changes on the Qt side
24 November 2016
- automatic COIN triggering for submodules work in progress - several discussions on this topic this week
- Clang investigation (what library to use)
- fixing bugs on the Qt side for Pyside
- Qt QML support almost done, work will continue with Qt Quick
- PYSIDE-79 work story continues - reference counting not quite accurate, but the point of deallocation of the ref count is not identifiable
17 November 2016
- Pyside team suffering from sickness -> not much progress this week
- PYSIDE-79 fix had a lot of negative side effects (breaking existing tests)
  - internal object reference counting is the predominant issue at hand
  - continue to work on the bug (no resolution yet, as delayed due to sickness)
- COIN: some progress, but still open discussions on branching policy required
11 November 2016
- PYSIDE-79 being fixed
- PYSIDE-315 to be addressed
- Shiboken and Clang
  - familiarizing with clang and its parser's inner workings
  - added experimental qmake project definition for shiboken (makes work in Qt Creator easier)
- Qt QML on Pyside work progressing
- work on COIN did not progress due to conflicting priorities inside COIN development
3 November 2016
- working through the QML stack, ensuring all required APIs are exported ()
- small build system patches
- PYSIDE-79 work progressing
- OpenGL support fixed (PYSIDE-314)
- COIN issues
  - repo interdependencies not working yet; suggestion under discussion and to be implemented
  - most license checks have been fixed
  - Qt 5.6 based Pyside to skip the check
  - after branching for the C++11 work, the dev branch should pass from a license check perspective
  - eventually the entire CI needs to run through (more hidden problems could be lurking)
- Started working on a C++11 compliant parser for shiboken
  - libclang will be used
  - first target is to replace the AST tree implementation
  - requires clang setup in the CI
27 October 2016
- Work on WebKit/WebEngine support
- Further work on COIN support
- Planning meeting
- Workshop for C++11 support in Pyside (PYSIDE-323)
20 October 2016
- Qt Quick support submitted
- QtOpenGL support submitted
- Further work on COIN support, license headers
13 October 2016
- Qt CI update
  - Coin changes have merged but integration is not yet working
  - Qt CI enablers on the Pyside side merged => COIN integration fails with license issues => requires review of the license conditions for all files; some files are not even relicensable => skipping the license check for now, most likely to be done later again, but that requires changes to the license check script
  - more interworking issues between Qt CI and Pyside expected (won't be visible until the license problems are resolved)
  - Open issue: does a change in the Pyside repo trigger a rebuild of everything?
- Qt QML support progressing and first patches merged - serious bugs have been fixed; more complex QML examples are now working
29 September 2016
- Coin integration close but not merged
- Bugfixing, in particular on the shiboken parser side
- QML/Python bindings and tests fixed -> general check that all day-to-day aspects of QML are working
- QML examples porting -> still some failing tests -> check that all QML/Quick classes are exported
22 September 2016
- Refactored Shiboken, understanding build sequences
- blacklist for unit tests defined and tested
- fixing of tests
- QML example fixing continued (QML bindings not working)
- Pyside side for COIN done (pending integration checks)
- COIN integration still WIP due to long test and retest cycles
- pyside and shiboken repos to be relicensed similar to other Qt products => this should address any issues regarding the status of generated code too
16 September 2016
- Pyside side for CI testing ready for testing
  - status of the COIN side to be determined; code exists, need help from the CI team to confirm status
- build system infrastructure improvements in pyside setup
  - Qt logging now working
  - and more
- Update on bug handling
  - PYSIDE-88 continuing
  - PYSIDE-349 (Multimedia ported)
  - PYSIDE-344 fix pending on codereview
- make debug builds of Pyside work (OSX works, Linux has a work-in-progress patch, Windows side awaiting contribution)
- QML support work in progress - examples are slowly ported with the aim to identify bugs
25 August 2016
- Properly implementing the QML experience in Pyside
- Unit test fixing
- automated CI testing for Pyside - working locally, but failures still occurring on various other test machines
4 August 2016
- Additional Pyside examples under review
- Fixed warnings coming from Shiboken
21 July 2016
- main Pyside 2 example port done
- additional examples to fill gaps are being ported (as per prio list)
- CI patches running, but still gaps (should be done by next week)
- about 86% of auto tests working (80+ auto tests still failing)
- Project Test Status
14 July 2016
- Pyside 2 examples ported
- OpenGL & SVG not working
- QtQuick 2 is in strange situation (QtQml depends on QtQuick)
- Python 3 related Unicode handling not working with Qt
- QMessageBox hangs
- no documentation for any example
- lots of warnings when building wrapper
- results:
- COIN setup for pyside
- Qt 5.7 still blocked due to missing C++11 support in shiboken
- food for ideas:
7 July 2016
- automated CI testing
- patches for Pyside and Qt CI side required
- Testing somewhat more complicated due to closed nature of Qt CI
- script to port examples to Qt 5 ->
- all examples to remain BSD licensed
Completely agree (Score:5, Informative)
Re:Completely agree (Score:5, Insightful)
Anyone who has used JQuery will know how their power exceeds the original intention
...anybody who has used jQuery will know how powerful they could have been if only browsers had implemented them completely and consistently.
Meanwhile, anybody who has used CSS will wonder what the hell the original intention was, given the arcane kludges needed to produce popular web-page layout effects easily achieved using evil tables and frames, and the lack of 'constants' to set standard colours and measurements. You know there's something wrong with a standard when Microsoft's broken box-model implementation makes more sense. However, that's not the fault of the selectors.
Its as if the designers* of CSS had never looked at a web site, used a DTP package, used styles in a WP package, let alone played with a Java layout manager to get ideas about what might work and/or be useful.
(* probably unfair - I'm sure it was a mixture of committee syndrome and the notion that you can define a standard without producing a reference implementation rather than individual failings).
Re: (Score:3)
...anybody who has used jQuery will know how powerful they could have been if only browsers had implemented them completely and consistently.
This used to be true a few years ago, but all modern browsers nowadays parse selectors quite similarly. Even IE8 is not so bad (it understands CSS 2.1 selectors like
:first-child and [attribute] etc.).
Of course things keep evolving all the time, so if you want to use cutting edge stuff, you might run into some things. But in general I think especially the selectors are amongst CSS' least problematic areas.
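A quick sketch of the kinds of selectors being discussed here (element and attribute choices are invented for the example; the last rule is CSS3 and needs a newer browser than IE8):

```css
/* CSS 2.1 selectors that even IE8 understands */
ul li:first-child { font-weight: bold; }        /* only the first item of each list */
input[type="checkbox"] { margin-right: 4px; }   /* attribute selector */

/* CSS3 "cutting edge" example: attribute prefix match */
a[href^="https:"] { color: green; }
```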
Re: (Score:2)
Fixed that for you. This has to be the most annoying part of it all. They really should have implemented the opposite where you required a special doctype to put the browser into legacy mode. Would have made everybody start to make their pages standards compliant much sooner.
Re: (Score:3)
On the other hand, this really forced devs to include the proper doctypes in their work, which is a good thing!
Re: (Score:3)
Meanwhile, anybody who has used CSS will wonder what the hell the original intention was
It was to provide an easier alternative to xhtml/xsl. Instead of the total separation between data and formatting that many programmers rooted for, it is a bastard compromise that was reached : HTML would still specify both data and formatting but formatting would "skinable".
Some days I think that we live anyway in a world of compromise and that it is true that HTML/CSS is easier to use in 95% of case, yet other days I wonder if in the end we are not doomed to come back to the original intent, after a lon
Re: (Score:2)
generate the formatted page from an XML content
Some web programming frameworks already work like that.
Re:Completely agree (Score:5, Interesting)
What box model would be best?
Serious question. I'm doing a specialist graphics app at the moment, and I was just considering this the other day. What's the important rect for a box?
Most graphics apps use a rect that is halfway through the border by default, as a result of the concept of "stroking" the rect. CSS is very different, and as you say a bit broken, by default using the outside of the margin for position, and the content rect for size. So there's no concrete rect for layout of a box at all in CSS. And then there's box-sizing, which could allow the concept of using the same rect for positioning and size, but doesn't.
How would a designer prefer to think of the primary metrics of a box, for the sake of alignment, snap to grid, proportional resizing etc?
1) Margin rect
2) Outside border
3) Centre of border
4) Inside border (outside padding)
5) Around the content (inside padding).
Of course, "all of them" and "it depends" are rational answers. But not much use when deciding on default or standard behaviour.
Re: (Score:3)
Don't make it standard behavior.
E.g., instead of letting the designer specify "width", let him specify "content-width", or "outside-border-width", or "margin-width", etc.
And in case of conflicting specs (e.g. two or more conflicting attributes given) produce an error (don't choose a precedence order!).
Re: (Score:2)
What box model would be best?
One that allows me to discover BOTH inside and the outside sizes so I can measure BOTH what will fit in my box, and what my box will fit in!
Thanks jquery!
Re: (Score:2)
As far as I can see that's still not a best box model. It's mostly just different varieties of patching over what's broken. Those are just sizes, they're not rects.
outerWidth(true) does at least match the same rect as CSS left. But is setting/getting a rect by the outside of an invisible margin (that may or may not be collapsed) anyone's ideal metric?
Re: (Score:3)
What box model would be best?
I'd look on it from the perspective of "encapsulation": One person should be able to design what was in the box without knowing how it was going to be placed on the page, a second person should be able to place it on the page and align it with other elements without affecting anything inside.
That would work best if the primary size of the box included the inner margin/padding and border (which the box designer 'needs to know'), but excluded the outer margin (which the 'page designer' needs to match with ot
Re: (Score:2)
You're able to control how the box model is calculated in CSS3 using the box-model CSS property. You could standardize on the MSIE way, if you so choose.
Re: (Score:2)
You mean the box-sizing property. I already mentioned that in the post you're answering.
Re: (Score:2)
I always thought it should be the border, with padding pushing the content in, and margin pushing other content away.
it would make % sizes work more intuitively I think.
Re: (Score:2)
" given the arcane kludges needed to produce popular web-page layout effects easily achieved using evil tables and frames, the lack of 'constants' to set standard colours and measurements."
This, a thousand times this. Honestly, why the hell has CSS not been fixed? They could have easily added what was needed to make things a lot easier, but instead they force everyone to fight with it.
Re: (Score:2)
Re: (Score:2)
Rejoice brother, for the era of display:table-cell has begun (supported in IE8 and up, and IE7 is effectively dead, thank god). Nice clean semantic markup, but now with access to all the juicy table features like vertical & horizontal-block centering, shared column height, and automatic column sizing. (Good article: [digital-web.com])
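A minimal sketch of the display:table-* approach described above (class names invented; see the linked article for the full treatment):

```css
/* Table-style layout without <table> markup; works in IE8 and up.
   Elements carrying these classes behave like a table row and its cells. */
.layout-row  { display: table; width: 100%; table-layout: fixed; }
.layout-cell { display: table-cell; vertical-align: middle; } /* vertical centering, shared height */
```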
Re:Completely agree (Score:5, Insightful)
Things like SASS and LESS point out where the big flaws of CSS are. It's crazy we still don't have variables in 2013 by default, this has been at the top of the requested features list for what, 15 years now?
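As an aside: native CSS custom properties did eventually ship in browsers, years after this comment was written. A minimal sketch, with invented names:

```css
/* Native CSS "variables" (custom properties), standardized well after
   this 2013 discussion. Declared on :root, read back with var(). */
:root {
  --brand-color: #336699;
  --gutter: 12px;
}
a { color: var(--brand-color); }
.sidebar {
  padding: var(--gutter);
  margin-bottom: calc(2 * var(--gutter)); /* calc() can combine them */
}
```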
Re: (Score:3, Interesting)
Re:Completely agree (Score:5, Interesting)
And now your web server has to do PHP processing on every page and every style sheet, so your load goes up. So you implement some caching. Now you have two problems.
Re: (Score:2)
Re: (Score:2)
Yeah, I suppose you could do that. Or you could just use LESS or SASS, which basically do the same thing and give you a whole lot more.
Re: (Score:2)
.. m
Re:Completely agree (Score:4, Funny)
And now your web server has to do PHP processing on every page and every style sheet, so your load goes up. So you implement some caching. Now you have two problems.
3 problems.... php itself being one.
Re: (Score:3)
He's using PHP? I get the pitchforks!
Re:Completely agree (Score:5, Insightful)
Please don't burn me, I don't weigh the same as a duck...
Re: (Score:2)
It's crazy, but SASS and LESS also deserve to stay in their own separate play-place for now. It's important for these to continue to expand and develop, but in both camps there is constant movement and redesign, no clear stability, and no clear path of where certain features will lead. CSS, on the other had, is generally solid and not often updated. It has to remain rather slow and boring in order to maintain support for all the browsers and designers and companies who are relying on said stability. Variabl
Re: (Score:2)
That makes no sense. CSS could easily be fixed by a lot of the things that are already in SASS and LESS. They are not different tools for a different job.
Re: (Score:2)
I swear by LESS, but it isn't a standard and requires too much CPU (whether processed server side or client side), and it doesn't fix a model which makes it very difficult to do very common layouts.
Re: (Score:2)
So use a LESS compiler then: edit LESS-file, compile to CSS, test CSS locally, upload to server. Rinse and repeat.
Re: (Score:2)
That is exactly what I'm doing, using lessphp ().
I'm using it both to generate some jQuery themes (, somewhat old) and dynamically (with caching and "compression") in my own framework.
But it's not as convenient as just using static files would have been and most importantly; it still doesn't fix the box model.
Re: (Score:2)
I don't know if LESS has this, but with SASS and Compass, you can tell Compass to "watch" your SASS folder - when any files change, it automatically rebuilds your CSS. Quite nice for actively working in the SASS without having to go back and recompile every dang time you change something.
On the dev side, all of our SASS is compartmentalized into partials for that particular section "_toolbars.scss, _article.scss, etc" Our local configurations are set to compile the SASS to expanded CSS, complete with annota
Re: (Score:2)
Visual Studio 2012 Update 2 will auto-compile any LESS file as you type in it (I think it does COFFEE as well) into both .css and .min.css versions.
If it's a .NET site, then you can reference the css files in a bundle which will combine multiple .css or .min.css files into a single file, and will auto build a new one whenever one of the files in the bundles changes. Pretty slick.
Re: (Score:2)
Yes, so why can we not simply send the SASS to the browser?
Re: (Score:3, Insightful)
not true, there are clear defects in the tools that almost everybody agrees on (variables), plus the browsers don't support everything even though the specs are years old.
sometimes the tools really are bad....
Re:Completely agree (Score:5, Funny)
Tell that to the guy that has to build a house with a saw that has no teeth.
He will probably beat you to death with that saw. People that have no clue at all as to the problems with the tools or even how to do the task are the first to blame the craftsman.
Re: (Score:2)
Bad craftspeople have a definite tendency to blame their tools
...and bad software designers have an even more definite tendency to blame their users. Usability/clarity and appropriateness for the intended user base (in this case, graphic designers) is part of good tool design.
CSS smacks of being a hammer designed by someone who has never seen a nail.
Re: (Score:2)
The CSS property you're looking for is box-sizing. If you want modern browsers to use IE's box model where the width includes border and padding, use the value 'border-box'.
Yes - but is it supported by IE7...
If you want to yell at someone, yell at those folks still on XP and IE8 (or earlier).
Unfortunately, such folks fall into categories like "clients", "customers" or "target audience" and its not such a good idea to tell them "piss off and come back when you've got a decent web browser".
This does all get better the further IE6/7/8 fade into history - if I were starting a site today I could at least ignore IE6/7 - but I'm still seeing significant hits from IE8.
Then new things come along: I was having trouble with 'background-size' recently (handy if you want
Re: (Score:2)
The CSS property you're looking for is box-sizing. If you want modern browsers to use IE's box model where the width includes border and padding, use the value 'border-box'.
Yes - but is it supported by IE7...
Does IE7 support the IE box model? I'm not sure, I'll have to get back to you on that one.
Unfortunately, such folks fall into categories like "clients", "customers" or "target audience" and its not such a good idea to tell them "piss off and come back when you've got a decent web browser".
That's true, it wouldn't be a good idea to tell them to "piss off". It would be better to just tell them that they are using an unsupported browser that no longer receives testing, and they can either upgrade their browser or pay extra to test on legacy software. That sounds a little better than "piss off".
Re: (Score:2)
Forget about IE7. Apart from some corporate environments with which you don't want to have to deal with anyway, nobody is using that browser anymore. IE8 is the absolute minimum these days. Just another year and non-HTML5 compliant browsers will be a thing of the past. Finally.
Re: (Score:2)
Unfortunately, such folks fall into categories like "clients", "customers" or "target audience" and its not such a good idea to tell them "piss off and come back when you've got a decent web browser".
A few years ago, I'd agree with you. Now? You can let those 3 clients go, you won't miss them that much. If you tell them to go piss off, they'll come back, no one else is going to support them either.
IF your business depends on old IE users for survival, you're fucked anyway, by definition.
Re: (Score:2)
No, not really.
jQuery does it, CSS is just the way jQuery is interfacing with the browser. CSS isn't really doing anything but making the situation more complicated and convoluted actually.
CSS + jQuery, like most things on the web nowadays, was stumbled on, not designed. It was stumbled on because no one bothers to think about what they are putting into HTML and how it affects the future; they only pay attention to what they want in their browser for today. The end result is that most of the system is dec
Practically Worthless. (Score:4, Insightful)
Think about it. It's practically worthless. We might as well be compiling CSS + HTML + JS into an interactive PDF format for all the times we actually reskin entire sites. Even mobile stuff is suspect -- I mean, yeah, I can have 10 different images to serve depending on the size of the display, and I automate that image asset generation... Then what? I make the images be CSS backgrounds? Isn't that defeating the point of separating the style from the content? Go the other way: Actually put the content wholly in the HTML, and only use CSS to style everything. Yeah, great, I can sort of reskin for printers and mobiles, but where's the detection mechanism? It's on the server side... Thus conflating the whole model, view, controller and the presentation, content, style, etc. I mean, JS to manipulate the view -- So, what, a segmented controller? CSS3 Animation instead? Oh, so that's a style thing now. Bah, whatever. A rose by any other name...
The problem is that designers would love to think these problems can be isolated and are separable. The reality is that they are not. Concentrating on making your CSS super flexible with selectors is merely mental masturbation. If it weren't then folks would be making CSS libraries for pulling off common styles and effects. Go to the "poster child" of CSS: CSS Zen Garden, and see for yourself. Tons of #id tags, tons of different designs, no one really taking any two designs and combining them with ease...
The reality of the situation is that the next person who comes along will just scrap the whole thing and re-make the design again anyway (yes, even if that person is you). Might as well be compiling it all down into a low level colored shape display system, that way we can implement CSS and HTML and even new markups atop it, instead of waiting for OVER HALF the age of the web just to move from HTML4.01 to HTML5...
CSS is great, unfortunately designers can't use it (Score:4, Insightful)
CSS is great when used properly (although, somewhat heretically, I would kill for definable constants a la 'color: PRIMARY_WEBSITE_COLOR;' without resorting to dynamically writing the CSS). Don't even try telling them that "redtext" is not a good classname. Heck half of the time it's ".span1"!
They don't even know (even after telling them, half the time) that you can use multiple classes on a single element, let alone combine selectors; everything is a single ID or classname to them. The amount of copy-paste in most web designers' stylesheets is simply offensive, all because their brains don't allow them to modularise their desires into useful reusable CSS classes. Cascade? Inheritance? These are foreign words to the average website designer.
There is no point telling a designer how they should can make their CSS better, they just won't understand. Worse, if the programmer, who does know how to use CSS as it was intended, attempts to fix their stylesheets (or worse, cut up their photoshops into proper HTML and CSS), the original designer just won't understand how to do anything in the stylesheet anymore.
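The "multiple classes on a single element" point above, as a quick sketch (class names invented): an element marked up as class="alert dismissable small" picks up all three rules, and the combined selector fires only when both classes are present.

```css
/* Small, reusable classes that combine on one element instead of
   one ID (or one ".span1") per element. */
.alert       { border: 1px solid #a00; color: #a00; }
.dismissable { padding-right: 2em; }      /* room for a close button */
.small       { font-size: 0.8em; }
.alert.dismissable { background: #fee; }  /* only when both classes are present */
```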
Re: (Score:3)
If CSS did what it said on the tin - separated content from style and layout - then graphic designers wouldn't have to bother their little heads about this sort of thing because they wouldn't need to touch the semantically-marked-up HTML.
Unfortunately, (a) CSS doesn't do what it says on the tin - changing the layout inevitably needs including exactly the right permutation of DIVs in the markup because CSS doesn't have any way of doing what every half-decent DTP package since PageMaker 1.0 can do: defining
CSS should be a programming language (Score:4, Interesting)
Intellectually, I know that if it were more complex, there's no way it would have seen widespread adoption, and that markup is actually still complicated for many people. I can even look back at the early days of the web, when Marc Andreessen butted heads with Tim Berners-Lee about the media tag meant to display images, sounds, video and anything else and said, 'Screw it, you guys take too long to decide anything and it's over complicated, here's an img tag, done.' - and I can see how simple beats theoretically perfect and well designed.
However, we're already at the point of widespread adoption now, and it's a good time to have a new css that actually is a programming language, with flow control, dynamic calculations of element values, and so on. This is what we need to provide real separation between the document and how it looks. Anyone experienced enough to write non-trivial web applications that are meant to be run on a browser, tablets of varying sizes (including accounting for reorientation), and even cell phones knows that it's unrealistic to use a single page - you get sent to the 'mobile' variant of the page or elsewhere.
Css has been around for 16 years and it still lacks the ability to easily declare a completely separate layout based on display height or width, something like "If width is less than _x_, use this css, else this" or "set width equal to - 30". If you want those things now, you have to use javascript, and it's sometimes pretty awkward - like calculating the width of an element filled with content prior to displaying it.
To you folks who cite javascript to fix this, realize that css no longer manages the document display at that point, the javascript does. That means that css is missing something required to manage a display. It can only do some of it's job.
- side thought; I'd be happy if css allowed javascript within the css. Assign values based on closures or predefined functions. Simple fix -
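For what it's worth, the "set width equal to ... - 30" wish is roughly what the later-standardized calc() function covers, without JavaScript; a sketch with arbitrary numbers:

```css
/* calc() does the arithmetic in the stylesheet itself; the 30px
   offset and 10px gutter are arbitrary example values. */
.content { width: calc(100% - 30px); }
.column  { width: calc(100% / 3 - 2 * 10px); } /* three columns with gutters */
```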
Re: (Score:2)
Also, to you folks who are pants-on-the-head retarded, and think that html, css, and js equals an mvc, you are incredibly wrong. Javascript plus css is what defines the view, and that goes for all javascript outside of a few frameworks like Backbone that actually implement a real Controller pattern.
Re:CSS should be a programming language (Score:5, Informative)
Actually, you can do that. I do it all the time when I use responsive web design. Here's some sample CSS code:
@media screen and (min-width: 501px) and (max-width: 750px) {
/* Put styles in here to reformat the page for larger tablets or small desktop resolutions */
}
@media screen and (max-width: 500px) {
/* Put styles in here to reformat the page for mobile devices and small tablets */
}
@media print {
/* Put all of your styles in here to format the page for printing. */
}
There is no JavaScript at work here. If you loaded a page utilizing this code in Chrome or FireFox (or IE10), disabled JavaScript, and resized the browser to make it smaller, you'd see the page slowly transform from a desktop version to a tablet version to a mobile version. (A good example of this is the Boston Globe's website: ). I can set styles for HTML elements and override them if certain conditions are met (max-width is between 2 values, screen resolution is a certain amount, print vs screen, etc). It might not be "if-then" statements, but it has the same effect.
Re: (Score:2)
I don't have mod points right now, so let me just say thanks for pointing that out. The Boston Globe's site is really neat when you re-size.
Re: (Score:2)
I have used media queries, and they are a great addition that gets us another step closer, but they're not the end-all, be-all. Once all the mobile devices can settle down and give us consistent and sane implementations (like not downloading every image, even those outside of the @media block), we'll be even better off, but it's still not a solution to each of my issues. Providing for statement evaluation, flow control, variables, and so on will.
As someone who's actually written desktop apps, every time I
Re: (Score:2)
Css has been around for 16 years and it still lacks the ability to easily declare a completely separate layout based on display height or width, something like "If width is less than _x_, use this css
Erm... heard of media queries? They do exactly this.
@import url(narrow.css) (max-width: 800px);
This loads stuff from narrow.css and applies it if your window is less than 800px wide.
Re: (Score:2)
Actually, I see CSS as it stands now as the sendmail problem. Trying to accomplish in markup what amounts to an if/then statement - much less flow control - is incredibly complex as the language is not well suited for it. At some point, it's easier to write a program to do it. In comparison to what a markup language would need to be to accomplish my daily goals while writing a webapp, a programming language would be more succinct without losing readability, explicit as opposed to derived, and easier to o
Relational? (Score:2)
Why not leverage people's existing SQL knowledge and create a relational-friendly DOM? There will still be tree-oriented nesting, but special functions and views can assist with that.
The problem is (Score:2)
CSS selectors actually work?! (Score:2)
Every few years I crawl out of my sandbox and absorb any useful changes in the browser scene.
The last time I tried CSS selectors every one I wanted to use either didn't work at all or worked great until I tried the same thing in a different browser.
Re:!Like (Score:5, Informative)
Re:!Like (Score:4, Funny)
Bollocks. CSS was designed to separate styling from structure in web pages. It does this admirably, and only needs to be a declarative language to do so.
Bollocks. Every configuration file should be Turing complete. -- The Sendmail Authors.
Re: (Score:2)
if it only included a #define....
Re: (Score:2)
You are confusing structure with layout. Tables are used to define document structure, when the data to display is tabular. To use tables to lay out the document when the contents of the table is not tabular data is plain wrong!
Re: (Score:2)
You're confusing the point.
The point is to get a job done. You're just adding unneeded semantics based on a broken philosophy. CSS is not better in several instances. Table based layouts are the proof of that. Instead of recognizing shortcomings, your ignorance shows through as you just keep parroting the same old tired line
... which just goes to illustrate that you are having a religious debate about your preferred language rather than a discussion of technical merits.
Re: (Score:2)
If you think CSS interferes with structure, you don't understand what structure is. CSS is only about how things look, nothing more. HTML used to do style too in a distant past and there are a few remains of that still in there (some form elements for example have their own style which can't really be changed through CSS). But the other way around? No way.
Re:!Like (Score:5, Insightful)
CSS alongside 2 basic layers, regular code and HTML document itself, only creates additional unnecessary third layer of shit that eventually may introduce problems, as soon as someone starts playing with it
That's like saying MVC is unnecessary, and not just putting all your code in a single class/module/namespace may introduce problems. There are people that say that, but they are novices.
HTML5/CSS/JS is equivalent to MVC. The "VisualBasic" type people would tend towards trying to put everything in their HTML rather than the other way around.
Re: (Score:2)
I'm happy that you know that. It's a shame you don't have the comprehension ability to see that nothing in my post said otherwise.
I think you should look up MVC.
I think you should try teaching your grandmother to suck eggs.
No. He got it right.
Done properly, HTML is semantic. It's data. It's not a neat parallel to a model, but it's in the ballpark.
CSS is the layout, its a view. It takes the model and presents it.
And Guess what Javascript is?!
Now you're getting it!
Re: (Score:2)
While that has nothing to do with the original point the person was trying to make, keep in mind MVC is a very specific pattern, and the fact you have a model, a view and a controller is only a part of it. How you use them is also part of the pattern.
You can have a model, a view, and a "controller" and end up with a MVP, an MVVM, or a variety of other patterns that have these 3 components in one form or another.
Re: (Score:2)
And that really shows that while you understand the concepts of MVC, you don't understand what HTML actually is or how MVC exists in the real world.
HTML is both data and view. CSS is just sorta view. and javascript is just control (usually coupled with a backend component to complete this part of MVC).
The real world and the theoretical/ideal are entirely different.
Re:!Like (Score:5, Informative)
You have no idea what you're talking about. CSS (and HTML for that matter) have *nothing* to do with programming. CSS is merely a way for designers to code a layout, nothing more, nothing less.
I do agree CSS could have been a lot better and there are definitely some errors which need fixing, but the general idea of separating markup and layout is a sound one, and selectors are one of CSS's best features.
No, sorry. (Score:2, Interesting)
Re:No , sorry. (Score:4, Insightful)
You're crazy. CSS and HTML are completely unrelated languages and technologies. Neither is a hack on top of the other. HTML describes the structure of a document, CSS defines how things look. It's that simple. They require a different syntax because they are used for different things. And they're both very successful at what they're trying to do. Sure there are problems, sure there are things wrong with it, but show me something perfect. There are two types of languages you know: ones everybody complains about and ones nobody uses.
I have no idea what you mean by "embedded Javascript", but Javascript is the programming language of the web. Contrary to HTML and CSS, Javascript is a "real" programming language by any definition. Without it web applications would not be possible and the web would merely be a document system. Instead its the world's largest application platform, allowing users on any device to use your applications. If you are a web developer and you think that's not exciting then maybe you should think of switching careers.
Unrelated technologies??? (Score:2)
Wtf are you smoking? HTML used to do presentation. And there's no reason it still can't. Please feel free to give the exact reason for CSS having an entirely different syntax and structure to HTML when XML which can store far more complicated data than CSS manages to have a pretty similar one.
Re: (Score:2)
HTML used to do presentation
Yeah, over a decade ago. How is that even remotely relevant to the modern web?
And there's no reason it still can't.
All the formatting tags are dropped in the latest versions of HTML. You can only use them if you're stuck in the past.
Please feel free to give the exact reason for CSS having an entirely different syntax and structure to HTML when XML which can store far more complicated data than CSS manages to have a pretty similar one.
XML has a lot of cruft. Look at the difference in size between a JSON file and an XML file containing the same data. The same thing would apply to style sheets written in XML.
Also: why do you think it would be beneficial to have more or less the same syntax for two completely different things? What would you gain b
Re: (Score:2)
How so? Care to elaborate or provide an example?
Re: (Score:2)
How so? Care to elaborate or provide an example? [w3schools.com]
Re: (Score:2)
Sure, HTML used to include tags for styling in the past. There was a time before CSS was invented and people wanted to make things look pretty after all. But these sort of tags have been regarded deprecated for at least a decade now.
Re: No , sorry. (Score:2)
Oops. (Score:3, Insightful)
First, you lose credibility for linking w3schools.com. Professional web developers wouldn't be caught dead referencing them. [w3fools.com] Second, you're referencing a tag that's deprecated because of CSS. Professional web developers wouldn't be caught dead using a font tag (or any other stylistic tags for that matter).
Re: (Score:2)
That page has 5 pieces of red text saying that the font tag is not supported in HTML5. It tells you to use CSS instead. It says that it was deprecated in HTML 4.01. The font tag is an example of people realizing their mistakes, not a reason to bash modern HTML. Even W3Schools, with all of its problems and outdated tutorials, makes sure people know that.
Re: No , sorry. (Score:2)
Re: (Score:2, Funny)
This post is awesome! I'm saving it for later, thanks!
Re: (Score:2)
The purpose of HTML is to organize data for display - and possible return via a submitted form. The purpose of CSS is to control the presentation of that data. HTML was invented first, and originally had to shoulder some of the responsibilities of CSS, but CSS is now the preferred presentation control medium, not least of which is that it makes it possible to "skin" HTML to adapt it to multiple display devices and/or view preferences.
JavaScript exists to allow dynamic manipulation of client-side data, displ
Re: (Score:2)
CSS isn't a nasty hack, it's a necessity. Take a look at CSSZenGarden.com. Every time you switch to a different theme, the HTML remains the same. All that changes is the CSS file (and the images that it references). To do that with plain HTML and no CSS, you would need tags or attributes to represent each display style. This would mean changing the look of a page would require completely recoding it instead of simply updating the stylesheet. If you wanted to, for example, make all links red instead of
Re: (Score:2)
"This would mean changing the look of a page would require completely recoding it instead of simply updating the stylesheet."
And it would be impossible to update the syntax of HTML to allow one piece of HTML to reference another piece in a separate file why exactly?
Re: (Score:2)
I don't think so. The fact that it's code doesn't make it programming.
Re: (Score:2)
I guess you're right. I was thinking of things like Turing-completeness, but I agree that is not actually necessary to make something a programming language. A player piano isn't a computer, but making a piano roll still is a form of programming.
Re: (Score:2)
The fact that it is interpreted by the browser and applied to the web page layout does.
No, it doesn't. CSS is a description language, not a programming language. It just maps elements to the layout of said elements. Calling CSS a programming language would be akin to claiming, the mapping of numbers to colors in a paint-by-numbers would "program" the picture.
Re: (Score:2, Informative)
If that's what you think, then you don't understand why CSS and HTML are separate languages. The implementation is by no means perfect, but its a very good example of Separation of Concerns (separating content from presentation, and in javascript's case, both of those from application logic), something that all too many 'programmers' don't seem to have any idea about. Then again if you're one of those programmers who see SOLID principles as over-engineering then I can understand why you might think that its
Re: (Score:2)
CSS is a DSL for styles primarily. It could be used for other stuff as well. There are libraries available in Java. However, I do not know of any C++ library. But google returns just a lot on the topic for "CSS c++ library"
Re: (Score:3)
Yes. Qt will let you style the GUI [qt-project.org] with what is effectively CSS [nokia.com].
Re: (Score:2)
Yes, I am using it for an internal XML dialect that need styling and for which HTML is not appropriate. We use the Apache Batik internal CSS processor (because we are using Batik for their SVG support, no need for a duplicate CSS processor independent of Batik)
Re: (Score:2)
Why has functional - which is only mildly declarative anyway but we'll let that slide - taken over the world like its proponents constantly tell us it will?
Don't you mean "why *hasn't* functional programming taken over the world"?
Anyway, it has. It's called Javascript and it's *huge*.
Re: (Score:2)
It has the "function" keyword duh!
Re: (Score:2)
I know you're mocking, but in Javascript a function is a first class citizen. You can pass functions as parameters, return functions, keep them in a variable, create them at run-time. The fact Javascript has a curly braces syntax doesn't mean it can't be a functional programming language.
Re: (Score:2)
The fact Javascript has a curly braces syntax doesn't mean it can't be a functional programming language.
Actually the thing that makes a functional language a functional language has nothing to do with curly braces. The defining principle of functional languages is that you have to have special hacks built into the language that technically violate its functionality in order to make it do anything -- eh -- functional -- due to the fact that side effects are the only useful way to get anything done.
Re:Is this a joke? (Score:4, Informative)
Javascript is much more a functional programming language than a procedural one. It's by no means as pure as Haskell, but this also allows it to be useful.
I suggest you read up on some of the articles by Douglas Crockford, who does an awesome job of explaining the true nature of Javascript to the world. This is a good starting point [crockford.com].
If you don't believe Javascript is indeed a functional programming language, here is a Google Talk by the same Douglas Crockford explaining how to do monads in Javascript: [youtube.com]
Re: (Score:2)
If javascript is a functional language (because you can do monads?), then I guess C is also a functional language.
I think the definition of functional programming languages is more about what you can't do than what you can do - since they're all turing complete...
Hell, javascript functions don't even return a value by default. To me that screams *not a functional language*.
Re: (Score:3)
It sounds like you're confused. And I can't blame you, because there are *a lot* of different languages and syntaxes involved in creating a web application these days and it can be challenging to grasp it all. It helps to separate things in your mind. Even though the syntax of e.g. a JSON file may look a bit like a CSS file, they are completely unrelated. Make sure you know what you're working on (structure, style, client side logic, server side logic) and only concern yourself with the things that are rela
Re: (Score:2)
No, a JSON file looks nothing like a CSS file, it looks a lot like a chunk of javascript though....
Re: (Score:2)
You can't deny there's a very similar syntax going on (blocks of key:value pairs encapsulated by curly braces). I believe Crockford himself even said he came up with JSON after staring at a CSS file and realizing it looks a lot like a Javascript object. | https://developers.slashdot.org/story/13/05/02/0331202/css-selectors-as-superpowers?sdsrc=nextbtmprev | CC-MAIN-2016-36 | refinedweb | 6,643 | 70.94 |
The Q3Canvas class provides a 2D area that can contain Q3CanvasItem objects. More...
#include <Q3Canvas>
Inherits QObject.
One or more Q3CanvasView widgets may be associated with a canvas to provide multiple views of the same canvas.
The canvas is optimized for large numbers of items, particularly where only a small percentage of the items change at any one time. If the entire display changes very frequently, you should consider using your own custom widget. You can also create your own canvas item types by subclassing; Q3CanvasPolygonalItem is the most common base class used for this purpose.
Items appear on the canvas after their show() function has been called (or setVisible(true)), and after update() has been called. The canvas only shows items that are visible, and then only if update() is called. (By default the canvas is white and so are canvas items, so if nothing appears try changing colors.)
If you created the canvas without passing a width and height to the constructor you must also call resize().
Although a canvas may appear to be similar to a widget with child widgets, there are several notable differences:
A canvas consists of a background, a number of canvas items organized by x, y and z coordinates, and a foreground. A canvas item's z coordinate can be treated as a layer number -- canvas items with a higher z coordinate appear in front of canvas items with a lower z coordinate.
The background is white by default, but can be set to a different color using setBackgroundColor(), or to a repeated pixmap using setBackgroundPixmap() or to a mosaic of smaller pixmaps using setTiles(). Individual tiles can be set with setTile(). There are corresponding get functions, e.g. backgroundColor() and backgroundPixmap().
Items on a canvas are instances of Q3CanvasItem (or one of its subclasses). Each canvas item has a position on the canvas (x, y coordinates) and a height (z coordinate), all of which are held as floating-point numbers. Moving canvas items also have x and y velocities. It's possible for a canvas item to be outside the canvas (for example Q3CanvasItem::x() is greater than width()). When a canvas item is off the canvas, onCanvas() returns false and the canvas disregards the item. (Canvas items off the canvas do not slow down any of the common operations on the canvas.)
Canvas items can be moved with Q3CanvasItem::move(); collisions between items can be detected with the Q3CanvasItem::collisions() functions.
The changed parts of the canvas are redrawn (if they are visible in a canvas view) whenever update() is called. You can either call update() manually after having changed the contents of the canvas, or force periodic updates using setUpdatePeriod(). If you have moving objects on the canvas, you must call advance() every time the objects should move one step further. Periodic calls to advance() can be forced using setAdvancePeriod(). The advance() function will call Q3CanvasItem::advance() on every item that is animated and trigger an update of the affected areas afterwards. (A canvas item that is `animated' is simply a canvas item that is in motion.)
Q3Canvas organizes its canvas items into chunks; these are areas on the canvas that are used to speed up most operations. Many operations start by eliminating most chunks (i.e. those which haven't changed) and then process only the canvas items that are in the few interesting (i.e. changed) chunks. A valid chunk, validChunk(), is one which is on the canvas.
The chunk size is a key factor to Q3Canvas's speed: if there are too many chunks, the speed benefit of grouping canvas items into chunks is reduced. If the chunks are too large, it takes too long to process each one. The Q3Canvas constructor tries to pick a suitable size, but you can call retune() to change it at any time. The chunkSize() function returns the current chunk size. The canvas items always make sure they're in the right chunks; all you need to make sure of is that the canvas uses the right chunk size. A good rule of thumb is that the size should be a bit smaller than the average canvas item size. If you have moving objects, the chunk size should be a bit smaller than the average size of the moving items.
The foreground is normally nothing, but if you reimplement drawForeground(), you can draw things in front of all the canvas items.
Areas can be set as changed with setChanged() and set unchanged with setUnchanged(). The entire canvas can be set as changed with setAllChanged(). A list of all the items on the canvas is returned by allItems().
An area can be copied (painted) to a QPainter with drawArea().
If the canvas is resized it emits the resized() signal.
The examples/canvas application and the 2D graphics page of the examples/demo application demonstrate many of Q3Canvas's facilities.
See also Q3CanvasView, Q3CanvasItem, QtCanvas, and Porting to Graphics View.
The Q3Canvas is initially sized to show exactly the given number of tiles horizontally and vertically. If it is resized to be larger, the entire matrix of tiles will be repeated as often as necessary to cover the area. If it is smaller, tiles to the right and bottom will not be visible.
See also setTiles().

The advance takes place in two phases. In phase 0, the Q3CanvasItem::advance() function of each Q3CanvasItem::animated() canvas item is called with parameter 0. Then all these canvas items are called again, with parameter 1. In phase 0, the canvas items should not change position, merely examine other items on the canvas for which special processing is required, such as collisions between items. In phase 1, all canvas items should change positions, ignoring any other items on the canvas. This two-phase approach allows for considerations of "fairness", although no Q3CanvasItem moves before any other.
Returns a list of items which collide with the rectangle r. The list is ordered by z coordinates, from highest z coordinate (front-most item) to lowest z coordinate (rear-most item).
This is an overloaded member function, provided for convenience.
Returns a list of canvas items which intersect with the chunks listed in chunklist, excluding item. If exact is true, only those which actually collide with item are returned; otherwise canvas items are included just for being in the chunks.
This is a utility function mainly used to implement the simpler Q3CanvasItem::collisions() function.
The canvas is divided into chunks which are rectangular areas chunksze wide by chunksze high. Use a chunk size which is about the average size of the canvas items. If you choose a chunk size which is too small it will increase the amount of calculation required when drawing since each change will affect many chunks. If you choose a chunk size which is too large the amount of drawing required will increase because for each change, a lot of drawing will be required since there will be many (unchanged) canvas items which are in the same chunk as the changed canvas items.
Internally, a canvas uses a low-resolution "chunk matrix" to keep track of all the items in the canvas. A 64x64 chunk matrix is the default for a 1024x1024 pixel canvas, where each chunk collects canvas items in a 16x16 pixel square. This default is also affected by setTiles(). You can tune this default using this function. For example if you have a very large canvas and want to trade off speed for memory then you might set the chunk size to 32 or 64.
The mxclusters argument is the number of rectangular groups of chunks that will be separately drawn. If the canvas has a large number of small, dispersed items, this should be about that number. Our testing suggests that a large number of clusters is almost always best.
The images are taken from the pixmap set by setTiles() and are arranged left to right, (and in the case of pixmaps that have multiple rows of tiles, top to bottom), with tile 0 in the top-left corner, tile 1 next to the right, and so on, e.g.
See also tile() and setTiles().
Sets the Q3Canvas to be composed of h tiles horizontally and v tiles vertically. Each tile will be an image tilewidth by tileheight pixels.
If the canvas is larger than the matrix of tiles, the entire matrix is repeated as necessary to cover the whole canvas. If it is smaller, tiles to the right and bottom are not visible.
The width and height of p must be a multiple of tilewidth and tileheight. If they are not the function will do nothing.
If you want to unset any tiling set, then just pass in a null pixmap and 0 for h, v, tilewidth, and tileheight.
If ms is less than 0 automatic updating will be stopped.
Returns the size of the canvas, in pixels.
Returns the tile at position (x, y). Initially, all tiles are 0.
The parameters must be within range, i.e. 0 < x < tilesHorizontally() and 0 < y < tilesVertically().
See also setTile().
Returns the height of each tile.
Returns the width of each tile.
Returns the number of tiles horizontally.
Returns the number of tiles vertically.
Repaints changed areas in all views of the canvas.
See also advance().
Returns true if the chunk position (x, y) is on the canvas; otherwise returns false.
See also onCanvas().
This is an overloaded member function, provided for convenience.
Returns true if the chunk position p is on the canvas; otherwise returns false.
See also onCanvas().
Returns the width of the canvas, in pixels. | https://doc.qt.io/archives/4.3/q3canvas.html | CC-MAIN-2021-39 | refinedweb | 1,548 | 64.61 |
Introduction
In this article I am going to make an XML Web service that deletes data from a table in the application. For the basics of XML Web Services, read my last article. Click Here
First go through SQL Server and make a table.
The following snapshot shows the process.
Database- akshay Table Name- student
Creating XML
Web Service in .Net
Here is sample code which defines a WebMethod "GetDelete" which returns an integer value.
Service.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Services;
using System.Data;
using System.Data.SqlClient;

public class Service : System.Web.Services.WebService
{
    public Service()
    {
    }

    SqlConnection con;
    SqlCommand cmd;

    [WebMethod]
    public int GetDelete(int sn)
    {
        con = new SqlConnection(@"Data Source=.;Initial Catalog=akshay;Persist Security Info=True;User ID=sa;Password=wintellect");
        // Note: concatenating values into SQL is generally unsafe;
        // kept as in the article, with sn being an int limiting the risk here.
        cmd = new SqlCommand("delete from student where sn=" + sn + " ", con);
        con.Open();
        int roweffected = cmd.ExecuteNonQuery();
        con.Close();
        return roweffected;
    }
}
Step 3 : Build
the Web Service and Run the Web Service for testing by pressing F5 function key.
Copy the url of this web service for further
use.
Step 4 : Click on GetDelete Button
to test the web service.
Enter the value of Sn to test the web
service.
By pressing the "Invoke" button a XML file is
generated.
The '1' in the response indicates that our data has been deleted from the specified database table (here "student").

For the client, create a new website; I have chosen the name "MyTest", and added a TextBox, one Button and a Label.
Rename the Button as 'Delete'.
Step 10 : Go to the Default.cs page and on the button click event use the following code:

protected void Button1_Click(object sender, EventArgs e)
{
    int sn = Convert.ToInt32(TextBox1.Text);
    localhost.Service myservice = new localhost.Service();
    int temp = myservice.GetDelete(sn);
    if (temp == 1)
    {
        Label1.Text = "Record is Deleted";
    }
    else
    {
        Label1.Text = "Record is not Deleted Please Try Again";
    }
}
Step 11 : Press the F5 function key to run the website; you will see:

Enter a value in the TextBox.
Press the Delete Button.
The record is deleted; you can check it from your database.
©2014
C# Corner. All contents are copyright of their authors. | http://www.c-sharpcorner.com/UploadFile/1d42da/a-xml-web-service-for-deletion-of-data-from-application-tabl/ | CC-MAIN-2014-52 | refinedweb | 336 | 62.44 |
class Mammal {
    String name = "furry ";
    String makeNoise() { return "generic noise"; }
}
class Zebra extends Mammal {
    String name = "stripes ";
    String makeNoise() { return "bray"; }
}
public class ZooKeeper {
    public static void main(String[] args) { new ZooKeeper().go(); }
    void go() {
        Mammal m = new Zebra();
        System.out.println(m.name + m.makeNoise());
    }
}
What is the result?
A. furry bray
B. stripes bray
C. furry generic noise
D. stripes generic noise
E. Compilation fails
F. An exception is thrown at runtime....
Actually I know the answer, I just don't know why... please tell me the reason.
Mohamed Sanaulla | My Blog | Author of Java 9 Cookbook
Mohamed Sanaulla wrote:So what is the answer? Some hints- Think about Overriding, Runtime polymorphism for methods and compile time binding for the variables. Runtime would see the instance type and compile time would see the reference type.
the correct answer is A: furry bray
But I think that in the "void go()" method we have declared a reference of type Mammal, so "furry" will be printed when we access "m.name". But then how is "bray" printed? Please explain, I am confused.
Hennry Smith wrote:
the correct answer is A: furry bray
but i think in "void go()" method we have declared reference type of mammal class so furry wil b printed when we call "m.name" but how bray is printed....please explain.....i am confused....
So which variable is accessed depends on the type of the reference, which here is Mammal. But for methods: if the method is overridden in a subclass, then the method invoked depends on the type of the instance, which in this example is Zebra. There have been a lot of discussions on this topic in the forum. You can search for them.
Mohamed Sanaulla | My Blog | Author of Java 9 Cookbook
I don't see how that reply contributes to the discussion.
@Hennry Smith
Please QuoteYourSources and please UseCodeTags when posting code. Don't use the quotes for that. Those will not highlight the code while code tags will.
"Any fool can write code that a computer can understand. Good programmers write code that humans can understand." --- Martin Fowler
Please correct my English.
| https://coderanch.com/t/526031/java/output-program | CC-MAIN-2018-43 | refinedweb | 369 | 79.26 |
wow, this still exists
@I Al Istannen Thanks! Now it works. Have a nice day :)
@I Al Istannen I tried with else before and it wasn't working. Well, I will try again and reply with the results.
The code runs but it swaps the color in the same tick so the visual effect is not even seen.
I actually have this executing every 20 ticks:
package tk.cubo;
import org.bukkit.Server;
import org.bukkit.plugin.java.JavaPlugin;...
@Zombie_Striker I can't do that as there is no way to check the stuff that I want to do because it is a part of Bukkit without an API. I already...
@I Al Istannen Using plugins. I did it with a command block but the idea of a plugin is replacing that command block.
I don't know how to explain this logic, but I want to set something to true every 1 second, then set it to false when another second has passed.
A good...
Life.
@I Al Istannen Thanks! :) I thought that setting that to final could cause problems!
@I Al Istannen Hello. Is there an alternative as I need to access non-final variables?
@EventHandler
public void...
Hello. I am making a class plugin and I need to force new players to keep an inventory open when joining the first time to select a class. I am...
dsdsds
Having just done a major refactor of the HAppS HTTP API to make it
much much easier to use, I am now thinking about simplifying the
current boilerplate associated with XML serialization and state
deserialization.
In the case of XML, currently the developer must manually write
ToElement for every type he/she wants to output. HAppS provides
functions that make doing this a lot easier if the types are Haskell
records with field labels of the form:
data Animal = Animal {key::Int, name:: String, breed::Breed, price::Float}
Looking at the XML example from the SYB site, a shift to SYB means
shifting from Record types with field labels to identifying fields by
their types so e.g.
data Animal = Animal Key Name Breed Price
type Key = Int
type Name = String
data Breed = Cow | Sheep
type Price = Float
This model seems ok, but adds some verbosity. It also entails a
learning curve because either you also have field labels or you end up
with non-standard extra code to access fields. In this context, HList
provides good record functionality and a little template haskell makes
it easy to code so we end up with e.g.
$(hList_labels "Animal" "key name breed price")
data Breed = Cow | Sheep
This seems superior from a programming perspective because we have
good record functionality (and implicitly support for XML
namespaces!), but we lose something on the XML side. In particular,
we end up with two new problems:
1. We no longer have type names, so we can't automatically generate an
XML element name from an default HList record. The solution may be to
define something like this:
typeName name item = typeLabel .=. name .*. item
animal = typeName "animal"
myAnimal = animal .*. breed .=. Cow
And since update and extension are different functions, if we hide
"typeLabel" from the apps then this gets very safe (or we expose it
and get a form of typesafe coerce).
2. We don't know how to deal with non-atomic field values e.g.
<animal><breed>Cow></breed></animal> makes sense, but we probably
don't want read/show for product types. Perhaps it is possible to use
overlapping instances to do something like:
instance ToElement (HCons a b) where ...
instance ToElement a where toElement = show
But perhaps we can use SYB to allow us to distinguish between atomic
and non-atomic types and handle appropriately. I have to say I don't
think I understand the SYB papers well enough to give an example.
Note: I can't find an actual example of generating XML from HLists in
any of the HList docs so it may be that it is not actually as easy as
it looks. All of this may be an open issue in theory as well as
practice.
== Deserialization ==
HAppS periodically checkpoints application state to disk. Developers
may want to add or remove fields from their state types for from data
types used by their state types. The current solution is to have the
developer assign a version number to state. If state changes then the
developer provides dispatch to a deserialization function based on
that version number.
It is not at all clear from either the SYB or the HList papers how to
deserialize generally. That being said, the HList library provides a
way to call functions with optional keyword arguments that looks like
it would also generalize to schema transitions.
Anyone who has some experience with this issue in the context of
HList or SYB?
== Haskell Collections ==
Currently HAppS developers use e.g. Data.Set or Data.Map as collection
types for their application. If we push developers to use HList for
everything then they are going to need a way of handling collections of
HList items. My sense is that HList-style data structures can't be
stored by Data.Map or Data.Set because they are not heterogeneous
collection types (and will break simply from varying the order of
field names). If we use HList as the assumed record type, then I
think we need to recode Data.Map and Data.Set for them. Has anyone
implemented more interesting collection types for HList objects?
-Alex-
PS Obviously people can continue to use regular haskell types and
implement stuff manually. My goal here is to support people who want
more automation and are willing to learn HList or SYB in order to get
it. | http://article.gmane.org/gmane.comp.lang.haskell.cafe/17922 | crawl-002 | refinedweb | 729 | 62.17 |
Bugs item #868103, was opened at 2003-12-30 21:09
Message generated for change (Comment added) made by maartenbrock
You can respond by visiting:
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Ico Doornekamp (zevv)
Assigned to: Nobody/Anonymous (nobody)
Summary: gen.c line 7593 : need pointerCode
Initial Comment:
Compiling the following source generates the error message :
$ sdcc main.c
main.c:14: error: FATAL Compiler Internal Error in file 'gen.c' line
number '7593' : need pointerCode
Contact Author with source code
---------------------------
#include <stdlib.h>
struct foo {
char *bar[2];
};
int main(void)
{
struct foo f;
f.bar[1] = NULL;
return 0;
}
---------------------------
$ sdcc -v
SDCC :
mcs51/gbz80/z80/avr/ds390/pic14/pic16/TININative/xa51/ds400/h
c08 2.3.5 (Nov 19 2003) (UNIX)
OS: Linux / debian
Mail: sdcc@...
----------------------------------------------------------------------
>Comment By: Maarten Brock (maartenbrock)
Date: 2004-08-26 22:17
Logged In: YES
user_id=888171
Update:
I've been tracking this one down and it boils down to some
strange architecture in SDCC. It starts calculating the
address of the struct member without actually
inserting "address of" or dereference iCodes. It only sets the
isaddr bit. Later on when it's too late it looks at this bit and if
it's no pointer yet, it gets turned into a pointer. Next time
around it will be a pointer and it won't be turned into pointer
again. But if the struct member was a pointer to begin with
there's no way to fix things. This is called a "small kludge" (?)
in the source code (SDCCcse.c line 1897).
----------------------------------------------------------------------
Comment By: Maarten Brock (maartenbrock)
Date: 2004-04-20 21:02
Logged In: YES
user_id=888171
This bug is easily reproduced using SDCC 2.4.0 or 2.4.1. It
looks like the address of the combination of a pointer array in
a struct gets the storage class of the pointer. In this case
the compiler tries to store the generic NULL-pointer at a
generic-pointed-to address instead of an address in data-
memory.
If you add a storage class to the bar pointer you get these
results:
char data *bar[2]; // works
char idata *bar[2]; // works
char pdata *bar[2]; //NULL written in wrong memory
char xdata *bar[2]; //NULL written in wrong memory
char code *bar[2]; // FATAL...
char *bar[2]; // FATAL...
I just can't seem to pinpoint where this pointer gets set.
Greets,
Maarten
----------------------------------------------------------------------
You can respond by visiting:
Hi all,
I've been working on the printf functions to make them generally
usable, no more inline assembly. Then I tried to test it on the hc08 port
and I found out that in the regression tests spec.mk
REENTRANT=<empty> instead of REENTRANT=reentrant. But I did
get an error telling me I need reentrant. So now my question is:
Does the HC08 port need "reentrant" where appropriate or does it
always generate reentrant code?
Greets,
Maarten | http://sourceforge.net/p/sdcc/mailman/sdcc-devel/?viewmonth=200408&viewday=26 | CC-MAIN-2014-10 | refinedweb | 491 | 72.66 |
Feature request to enable systemtap on ubuntu 14.04.04.
Bug Description
systemtap package was absent from the ubuntu 14.04.03 repos. This is a feature request to include systemtap in ubuntu 14.04.04.
The attached patch is sufficient to let the systemtap package build on ppc64el in Ubuntu 14.04. Can anyone confirm if systemtap 2.3 upstream includes support for ppc64el?
------- Comment From <email address hidden> 2015-11-23 08:51 EDT-------
(In reply to comment #5)
> The attached patch is sufficient to let the systemtap package build on
> ppc64el in Ubuntu 14.04. Can anyone confirm if systemtap 2.3 upstream
> includes support for ppc64el?
Upstream systemtap 2.3 does not include support for ppc64le. Hemant is putting together a patch that enables ppc64le support in systemtap 2.3
------- Comment (attachment only) From <email address hidden> 2016-01-06 06:15 EDT-------
------- Comment (attachment only) From <email address hidden> 2016-01-06 06:16 EDT-------
------- Comment (attachment only) From <email address hidden> 2016-01-06 06:17 EDT-------
------- Comment From <email address hidden> 2016-01-06 06:24 EDT-------
Uploaded the above attachments after backporting commits :
https:/
https:/
https:/
Backport was done to debian/2.3-2.3 and tested on ubuntu 14.04.03.
Thanks for these patches. Can you please also describe a simple test case we can run to confirm that the new package is working as intended?
Hello bugproxy, or anyone else affected,
Accepted systemtap into trusty-proposed. The package will build now and be available in the -proposed repository in a few hours.

------- Comment From <email address hidden> 2016-01-08 09:16 EDT-------
System tap version :
# stap --version
Systemtap translator/driver (version 2.3/0.158, Debian version 2.3-1ubuntu1.3 (trusty))
This is free software; see the source for copying conditions.
enabled features: AVAHI LIBSQLITE3 NSS TR1_UNORDERED_MAP NLS
Here is how I tested :
For uprobes :
# cat test_prog.c
#include <stdio.h>
void foo(void)
{
printf("Inside foo\n");
}
int main(void)
{
return foo();
}
# gcc test_prog.c -g -o test_prog
# stap -ve 'probe process(
Pass 1: parsed user script and 95 library script(s) using 42112virt/
Pass 2: analyzed script: 2 probe(s), 0 function(s), 0 embed(s), 0 global(s) using 42880virt/
Pass 3: translated to C into "/tmp/stapT5MpE
Pass 4: compiled C into "stap_712b2bd0a
Pass 5: starting run.
Inside foo
hit
This shows that it hits the probe point "foo". I also verified the address and found that it places the probe at intended location which is at the Local Entry Point of "foo".
Similarly, to test kprobes :
# stap -ve 'probe kernel.
And it showed "hit" upon executing a program from shell. Please note that kernel debuginfo (debugsym) package is needed to test kprobes.
So, from the above tests it looks like systemtap is able to put the probes at the right places.
Bug #1537125 has been filed separately showing that systemtap does not work with the 14.04.4 kernel (Linux 4.2). Unfortunately, the tests shown there weren't included as part of the test case for this SRU, but should have been.
The systemtap test suite can't be run on a buildd because of its intrusiveness. It should be included as an autopkgtest for this package.
Per discussion on bug #1537125, it is still useful to have systemtap on ppc64el for trusty compatible with older kernels.
This bug was fixed in the package systemtap - 2.3-1ubuntu1.3
---------------
systemtap (2.3-1ubuntu1.3) trusty; urgency=medium
* debian/
debian/
debian/
backport of architecture support for ppc64el. LP: #1511347.
systemtap (2.3-1ubuntu1.2) trusty; urgency=medium
[ Chris J Arges ]
* Build ppc64el architecture (LP: #1513227).
[ Dimitri John Ledkov ]
* Now that systemtap-sdt-dev is arch:all, it also needs to be
Multi-
A:all dep, tries to pull A:any).
-- Steve Langasek <email address hidden> Wed, 06 Jan 2016 16:35:38 -0800
The verification of the Stable Release Update for systemtap/1511347/ +editstatus and add the package name in the text box next to the word Package.
[This is an automated message. I apologize if it reached you inappropriately; please just reply to this message indicating so.] | https://bugs.launchpad.net/ubuntu/+source/systemtap/+bug/1511347 | CC-MAIN-2019-22 | refinedweb | 682 | 66.03 |
Sorting Algorithms in Ruby
By Dhaivat Pandya
Sometimes, I think Ruby makes it too easy for us. Just doing
[1,18,4,72].sort gives us a nice sorted list. But what does Ruby actually do under the hood?
The point of understanding sorting algorithms has very little to do with the actual act of sorting. Rather, the different algorithms are great examples of various techniques that can be applied to large set of problems. In this article, we’ll take a look at some of these algorithms and the underlying ideas, as well as implementations in Ruby.
The Problem
Before we start out, let’s get on the same page about the problem we’re trying to solve. The input to our algorithm will be an array of arbitrary length consisting of integers (not necessarily positive). Our algorithm should return a version of this array sorted in ascending order. If we want descending order, we can either reverse the resulting array or change the algorithms presented slightly (e.g. the comparison operator used).
Bubble Sort
Let’s first look at a really simple (and also pretty slow) sorting algorithm known as “Bubble sort”. The idea is pretty simple: walk through the list and swap any two adjacent elements that are out of order. But, here’s the kicker: we have to repeatedly walk through the list until there are no longer any swaps to make, meaning the list is sorted.
The Ruby implementation can be written easily from the hand-wavy description:
def bubble_sort(array)
  n = array.length

  loop do
    swapped = false

    (n-1).times do |i|
      if array[i] > array[i+1]
        array[i], array[i+1] = array[i+1], array[i]
        swapped = true
      end
    end

    break if not swapped
  end

  array
end
We maintain a variable called
swapped, which is true if any swaps were made in the pass through the array. If, at the end of a walk through the array,
swapped is
false (i.e. no swaps have been performed), we are done and return a sorted list. Essentially, swapped == false is the termination condition for the forever loop.
I mentioned at the outset that BubbleSort is a pretty slow algorithm, but what exactly do I mean by that? BubbleSort is bad at scaling, i.e. making our input size larger by a little bit results in a pretty large increase in the running time of the algorithm. What exactly is the relationship between input size and running time? Well, let’s think about it.
The worst possible situation for BubbleSort occurs when the input array is in descending order, because this means we have to perform the maximum number of swaps. In fact, in the worst case, if we have
n elements in the input array, we have to perform \[ c * n^2 \] swaps, for some positive real constant
c (i.e.
c doesn’t change when
n does). Assuming that each swap takes a constant amount of time (i.e. the time taken to perform a swap is unaffected by the input size), the running time of BubbleSort increases quadratically with respect to the input size. So, we can say BubbleSort is \[O(n^2)\] which is equivalent to saying the running time scales quadratically.
We can also think about the amount of memory in order to perform the sort. Since the sort is “in place”, we don’t have to allocate any additional memory. Thus, we say that BubbleSort’s space complexity is \[O(1)\] i.e. constant at most.
Merge Sort
It turns out we can do quite a bit better when it comes to sorting. A technique called “divide and conquer” takes the original problem and produces solutions for smaller pieces of it, which are then combined to create the final solution. In our case, this is the problem statement:
Sort the array A of length n
What if we did this instead:
Sort the left half of A and the right half of A, then combine them.
We can actually split each of the halves into further halves (quarters of the original part) and continue recursively. Turns out, we’ll eventually reach an array of size 1, which is automatically sorted. Then, take all of the pieces and finally combine them to give the equivalent of
A.sort. However, the combining process is a bit involved.
Say we’re given two sorted arrays, like [1, 3, 5] and [2, 4, 6, 7], and want to combine them into one sorted array. Here’s how to do that, in this case. Take a look at the first elements of both arrays and put the smaller one into the final result. After one run of this, we have this in the final result:
[1]
Repeat the process except move along the array, making the next element the “first element”. In this case, make 3 the “first element” of [1, 3, 5] and compare it to 2 from [2, 4, 6, 7]. We keep doing this until we’ve exhausted one of the lists. At this point, the final result looks like this:
[1, 2, 3, 4, 5]
Then, just push on the rest of the second list to get a combined, nicely sorted list:
[1,2,3,4,5,6,7]
We can use the procedure described with the example as our “merge” or “combine” step. So, here’s an outline of mergesort for an array
A with length
n:
Mergesort(A):
  1. return A if n == 1
  2. left = left half of A
  3. right = right half of A
  4. sorted_left = Mergesort(left)
  5. sorted_right = Mergesort(right)
  6. return merge(sorted_left, sorted_right)
Let’s take a look at the Ruby rendition of this:
def mergesort(array)
  def merge(left_sorted, right_sorted)
    res = []
    l = 0
    r = 0

    loop do
      break if r >= right_sorted.length and l >= left_sorted.length

      if r >= right_sorted.length or (l < left_sorted.length and left_sorted[l] < right_sorted[r])
        res << left_sorted[l]
        l += 1
      else
        res << right_sorted[r]
        r += 1
      end
    end

    return res
  end

  def mergesort_iter(array_sliced)
    return array_sliced if array_sliced.length <= 1

    mid = array_sliced.length/2 - 1
    left_sorted = mergesort_iter(array_sliced[0..mid])
    right_sorted = mergesort_iter(array_sliced[mid+1..-1])
    return merge(left_sorted, right_sorted)
  end

  mergesort_iter(array)
end
The main procedure of interest here is
merge. Essentially, move along the two halves, while adding on the smaller value to the end of
res until we have exhausted one of the halves. At this point, simply pile on what remains of the other list. And, that’s MergeSort for you!
What’s the point?
So, what is the benefit of implementing a significantly more complicated algorithm than good old BubbleSort? It is all in the numbers.
Without getting into the details too much (they involve solving a recurrence), we’ll say that mergesort is \[ O(n \log_2{n}) \] It turns out that this is *a lot* better than \[O(n^2)\] Let’s make another simplification. Say MergeSort uses \[n \log_2(n)\] operations and BubbleSort uses \[n^2\] operations to sort an array of length n. If we set n = 1,000,000 then MergeSort requires roughly \[2 \cdot 10^7\] operations whereas BubbleSort weighs in roughly 50,000 times worse at \[1 \cdot 10 ^ {12}\] operations.
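As a quick arithmetic check of these growth rates, the following Ruby snippet evaluates the two simplified cost models, n·log2(n) and n², at n = 1,000,000 (this is just model arithmetic, not a real benchmark of the implementations above):

```ruby
# Evaluate the simplified cost models at n = 1,000,000.
n = 1_000_000

merge_ops  = (n * Math.log2(n)).round  # n * log2(n), roughly 2.0e7
bubble_ops = n * n                     # n**2, exactly 1.0e12

puts "MergeSort model:  ~#{merge_ops} operations"
puts "BubbleSort model: #{bubble_ops} operations"
puts "Ratio: ~#{(bubble_ops.to_f / merge_ops).round}x"
```

On this model, the quadratic algorithm needs roughly fifty thousand times more work for a million elements.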
Divide and Conquer
MergeSort shows us a nice strategy: divide and conquer. It turns out there are a lot of problems which can be solved by breaking them into subproblems and then combining the resulting “subsolutions”. It takes skill and practice to identify cases where you can use divide and conquer, but MergeSort is a great example to use as a springboard.
The take home points are: If you have some sort of trivial base case (in MergeSort, a list of length 1) or are somehow able to break your problem down to this base case and have some kind of idea to combine solutions, divide and conquer might be an avenue to explore.
Wasting Memory Like No Tomorrow
But wait, we can do even better than MergeSort! But, this time we’ll have to spend, um, a bit more memory. We’ll get straight to the Ruby:
def mark_sort(array)
  array_max = array.max
  array_min = array.min

  markings = [0] * (array_max - array_min + 1)
  array.each do |a|
    markings[a - array_min] += 1
  end

  res = []
  markings.length.times do |i|
    markings[i].times do
      res << i + array_min
    end
  end

  res
end

p mark_sort([3,2,19,18,17])
We have an array called
markings representing the number of times a certain number occurs in
array. Each index of
markings corresponds to a number in the range
array.max and
array.min, inclusive. We initialize
markings with 0s and walk through the array, incrementing a number’s marking when we see it occur in the array. Finally, just print out the numbers seen the right number of times. We’re making a few passes of the array, but we never make a “pass inside a pass”, i.e. a nested loop. So, our algorithm performs \[ c * n \] operations for some constant
c for an array of length
n
We can say that it is \[O(n)\]; in other words, blazing fast. But, it isn’t always useful, since we have to use a potentially giant piece of memory. For example, if we were given
[1,1000000] as an input, we’d have to allocate a million element array! But, if we know our ranges will be small or have lots of memory to waste, this algorithm can come in handy.
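One common way around that memory blow-up (a variation of mine, not from the article) is to count occurrences in a Hash instead of a pre-allocated Array, so memory scales with the number of distinct values rather than with the range. The trade-off is that the distinct keys must then be sorted with a comparison sort:

```ruby
def hash_mark_sort(array)
  counts = Hash.new(0)
  array.each { |a| counts[a] += 1 }

  res = []
  counts.keys.sort.each do |value|   # comparison sort over distinct values only
    counts[value].times { res << value }
  end
  res
end

p hash_mark_sort([1, 1_000_000, 3, 3])  # => [1, 3, 3, 1000000], no giant array needed
```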
Wrapping It Up
I hope you’ve enjoyed this tour of sorting algorithms in Ruby. Drop questions in the comments below! | https://www.sitepoint.com/sorting-algorithms-ruby/ | CC-MAIN-2017-26 | refinedweb | 1,602 | 64.41 |
Type: Posts; User: jerome12345
Thanks a lot,
Please find attached a file named calculator.zip.
It is actually generally through the wizard which of an external software which has all the class definitions.
As you see it...
Does anything need to be set in configuration propeorties?
Yes- I did
Linker /general /Addnl dependencies has the path as below
D:\jerome_ABCD\proj\fortran_to_cpp\flib\flib\Debug
and Linker/input/addnl dependencies had the library name flib.lib
...
Yes- I was able to get good information.
I tried a small program which worked well. It was a simple one like this:
C++ code
#include <iostream>
extern "C"
{
I really need help on this.
Does anyone have experience in calling FORTRAN from C++ (using extern "C")?
JErome | http://forums.codeguru.com/search.php?s=41574fc852683c047e71654c2aaff3e0&searchid=6780549 | CC-MAIN-2015-18 | refinedweb | 123 | 58.69 |
On Feb 20, 2006, at 12:19, Chris Bowditch wrote:
Jeremias Maerki wrote: FOP should also stop to complain about certain foreign elements. The complaints are fine as a debugging aid, for example if someone makes a mistake with the namespace URI or the name of an element in a particular namespace. Here's where I got the idea that we could provide for a list of namespace URIs which are simply silently ignored (instead of having to write a FOP extension for each namespace). If someone has a reference to a namespace URI in his documents that FOP doesn't know about he could add that namespace URI to that list and FOP will fall silent over it. The same list could be used for handling foreign attributes. This way you still get important feedback if you've done anything wrong, but can tell FOP to shut up where necessary. WDYT? Good idea, I like the idea of the ignore-namespace list.
I second that. Nils' initial patch still seems valid/worthwhile though (even if it looks more like a quick fix). The whole property subsystem should in essence only be concerned about attributes in the FO namespace. The proposed location seems like the ideal spot to filter these out and, for the future, add any non-FO attributes to an 'extension property list' that could be passed entirely to somewhere else...
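As a sketch of the idea (illustrative only; this is not FOP's actual API or class names): keep a configurable set of ignorable namespace URIs, pass FO-namespace attributes through, silently drop attributes in ignored namespaces, and warn about everything else:

```java
import java.util.*;

// Illustrative sketch only. Attribute keys use the form
// "namespaceURI|localName" to keep the example self-contained.
public class NamespaceFilter {
    private static final String FO_NS = "http://www.w3.org/1999/XSL/Format";
    private final Set<String> ignoredNamespaces = new HashSet<>();

    public void addIgnoredNamespace(String uri) {
        ignoredNamespaces.add(uri);
    }

    public Map<String, String> filter(Map<String, String> attrs, List<String> warnings) {
        Map<String, String> kept = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : attrs.entrySet()) {
            String ns = e.getKey().split("\\|", 2)[0];
            if (ns.isEmpty() || ns.equals(FO_NS)) {
                kept.put(e.getKey(), e.getValue());   // a real FO property
            } else if (!ignoredNamespaces.contains(ns)) {
                warnings.add("Unknown attribute namespace: " + ns);
            }                                         // else: silently ignored
        }
        return kept;
    }

    public static void main(String[] args) {
        NamespaceFilter f = new NamespaceFilter();
        f.addIgnoredNamespace("http://example.com/ext");

        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("http://www.w3.org/1999/XSL/Format|font-size", "10pt");
        attrs.put("http://example.com/ext|foo", "x");   // silently dropped
        attrs.put("http://other.example/ns|bar", "y");  // triggers a warning

        List<String> warnings = new ArrayList<>();
        System.out.println(f.filter(attrs, warnings));  // keeps only font-size
        System.out.println(warnings);
    }
}
```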
Cheers, Andreas | https://www.mail-archive.com/fop-dev@xmlgraphics.apache.org/msg03735.html | CC-MAIN-2018-43 | refinedweb | 231 | 65.96 |
Configuration is handled by the
google.ads.google_ads.config module, though
there is never a reason to access this module directly as it is called by the
main
GoogleAdsClient class on initialization. Environment variables can be set via the command line. Note that these instructions assume you're using bash; if you're using a different shell you may need to consult documentation on how to set environment variables in the shell you're using.
Here are some basic steps to define an environment variable via the command line. In Python, the value can then be read back through os.environ, i.e.:
os.environ['GOOGLE_ADS_CLIENT_ID']
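For example, in bash each variable can be exported before running your application; the names follow the pattern of a GOOGLE_ADS_ prefix plus the upper-cased configuration key (the values below are placeholders):

```shell
export GOOGLE_ADS_CLIENT_ID="INSERT_OAUTH2_CLIENT_ID"
export GOOGLE_ADS_CLIENT_SECRET="INSERT_OAUTH2_CLIENT_SECRET"
export GOOGLE_ADS_REFRESH_TOKEN="INSERT_REFRESH_TOKEN"
export GOOGLE_ADS_DEVELOPER_TOKEN="INSERT_DEVELOPER_TOKEN"
export GOOGLE_ADS_LOGIN_CUSTOMER_ID="INSERT_LOGIN_CUSTOMER_ID"
```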
Here's an example of how to initialize a client instance with configuration from environment variables:
from google.ads.google_ads import GoogleAdsClient

client = GoogleAdsClient.load_from_env()
If you have read a YAML file into memory you can provide it directly to the
client on initialization. To do this just use the
load_from_string method.
from google.ads.google_ads import GoogleAdsClient

with open('/path/to/yaml', 'rb') as handle:
    yaml = handle.read()

client = GoogleAdsClient.load_from_string(yaml)
Configuration Fields
The client library configuration supports the following fields.
General fields:
refresh_token: Your OAuth refresh token.
client_id: Your OAuth client ID.
client_secret: Your OAuth client secret.
developer_token: Your developer token for accessing the API.
login_customer_id: See the login-customer-id documentation.
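Putting those fields together, a minimal YAML configuration file might look like this (all values are placeholders):

```yaml
developer_token: INSERT_DEVELOPER_TOKEN
client_id: INSERT_OAUTH2_CLIENT_ID
client_secret: INSERT_OAUTH2_CLIENT_SECRET
refresh_token: INSERT_REFRESH_TOKEN
login_customer_id: INSERT_LOGIN_CUSTOMER_ID  # optional
```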
so I’ve recently set up my raspberry pi. Am following a tutorial for the MPU9250, from.
I have did a install of the micropython-mpu9250, and tried the sample code in the readme to get my readings. However, i have encountered that error and cannot run my program.
I have made sure that i2c exists and is enabled using i2cdetect -y 1
Any solutions?
- Welcome @GGenesis, Nice to meet you. I have drafted an answer for you. Please feel free to ask any newbie questions. Happy microPython programming and cheers! 🙂 – tlfong01 41 mins ago
Question
Setup
(1) I have installed micropython-mpu9250.
(2) I have detected the i2c bus using “i2cdetect -y 1”.
Problem
How come my program don’t run?
Answer
(1) What does the command “i2cdetect …” do?
The command "i2cdetect -y 1" is used to detect whether there are any working I2C devices connected to the bus, and to display each device's address as two hexadecimal digits.
For example, if you have a device "ABC" with address "0x66", then i2cdetect -y 1 should display the two digits "66" in the "60" row (the seventh row).
If you have installed a driver for the device ABC, for example, by using the text editor “nano” editing the file /boot/config.txt by adding a line like this:
dtoverlay=i2c ...
then i2cdetect might display “UU” instead of “66”.
And even if i2cdetect detects the device, it does not mean the device works correctly. It only means i2cdetect "pings" (says hello to) the device, and the device replies "I am here". In other words, ABC only says that it is present; it does not guarantee that it is OK to work.
(2) Importing micropython python modules
First you need to import the correct python modules. Below are the example statements from the MicroPython MPU9250 I2C driver GitHub:
import micropython import utime from machine import I2C, Pin, Timer from mpu9250 import MPU9250
Note that the example is not using the Rpi default I2C pins GPIO 2, 3 (40-pin header physical pin numbers 3, 5). This implies the usual import command "import smbus" won't work. You must first import "micropython", then use "from machine import I2C, Pin …".
(3) Setting up an I2C bus and give it a name, eg “i2c123”
Then you need to set up the i2c bus, like the following example on GitHub:
i2c123 = I2C(scl=Pin(22), sda=Pin(21))
(4) Creating an object (instance) of the micropython class for device MPU9250
Now the time has come to "give birth" to a new mpu9250 "baby", and give him a name, eg "mpu9250BabyJonathan":
mpu9250BabyJonathan = MPU9250(i2c123)
Now you can “teach” Jonathan to do something!
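Under the hood, drivers like this combine pairs of raw register bytes into signed 16-bit readings. Here is a pure-Python sketch of that conversion, which runs without any hardware (the function name is made up for illustration):

```python
def raw_to_signed16(high_byte, low_byte):
    """Combine two raw big-endian register bytes into a signed
    16-bit integer (two's complement), as an IMU driver does."""
    value = (high_byte << 8) | low_byte
    if value & 0x8000:       # sign bit set -> negative value
        value -= 0x10000
    return value

# 16384 raw counts correspond to +1 g at the MPU9250's default
# +/-2g accelerometer full-scale range.
print(raw_to_signed16(0x40, 0x00))   # 16384
print(raw_to_signed16(0xC0, 0x00))   # -16384
```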
(5) Other possible I2C programming problems
I have been using Rpi3B+ stretch playing with I2C 3/6/9 DOF sensors and found a big problem: for the Rpi3B+, the I2C speed is a flat rate of 100kHz and cannot be changed (though "official" instructions say otherwise!). Furthermore, the python I2C (smbus) module does not entertain "clock stretching", which is required in some cases. A workaround is to lower the I2C speed, but that is not possible on the Rpi3B+.
To solve the problem, you need to use the Rpi4B with buster, which allows lowering the I2C speed to as low as 10kHz. Problem solved. See Reference 4 below for more details.
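For reference, on Raspberry Pi OS the usual way to lower the ARM I2C bus speed is a baudrate parameter in /boot/config.txt, followed by a reboot; for example, to drop to 10kHz:

```
dtparam=i2c_arm=on
dtparam=i2c_arm_baudrate=10000
```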
(6) MPU9250 made easy for newbies
There are not that many tutorials for the 9-DOF MPU9250. For newbies I would recommend trying the 6-DOF MPU6500, or even 3-DOF sensors. The MPU9250 actually has an MPU6500 and another 3-DOF magneto sensor glued together inside (Ref 3). So the knowledge and skills acquired on the MPU6500 can transfer directly to the MPU9250.
In other words, eat the big 9-DOF elephant bite by bite, in three 3-DOF bites. This way, the very steep MPU9250 learning curve becomes three shallow ones.
References
(1) MicroPython MPU-9250 (MPU-6500 + AK8963) I2C driver GitHub
(2) MPU9250 Datasheet – Invensense
(3) MPU9250/MPU6250 Newbie Advice – tlfong01 2018nov
(4) Rpi4B buster BNO055 9-DOF Sensor and 16 Channel PWM Driver CircuitPython Programming Problem – tlfong01 2019jun
End of answer
It’s preferable that the code listing is in your question and that the error message is cut&pasted into your question.
At a guess you should be importing I2C not i2c.
Here is my code.
// Rids a .cpp file's white-space (except spaces) and comments (reduces memory)
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

int main()
{
    cout << "This program rids a C, C Header, C++, or C++ Header file's white-space." << endl;
    cout << "The file should be located in the same directory (folder) as this program." << endl;
    cout << "Enter the filename: ";
    string filename;
    getline(cin, filename);

    ifstream infile;
    infile.open(filename.c_str());
    if(infile.fail() == true)
    {
        cout << "Error in opening the file \"" << filename << "\"." << endl;
        return 0;
    }

    string currentLine;
    int line = 1;
    for(; infile.eof() == false; line++)
    {
        cout << filename << " | Getting line " << line << endl;
        getline(infile, currentLine);

        cout << filename << " | Parsing line " << line << endl;
        for(int charNumber = 0; charNumber != currentLine.length(); charNumber++)
        {
            // If a carriage-return is encountered, and it is not a preprocessor-directive* line, delete it
            if(currentLine[charNumber] == 13 && currentLine[0] != '#')
            {
                // If the char before the newline is an escape-sequence
                if(charNumber > 0 && currentLine[charNumber-1] == '\\')
                    currentLine[charNumber-1] = ' ';
                currentLine[charNumber] = ' '; // 'X' to see if the file was visibly changed.
            }
        }
    }
}
// *Assumes that there is no white-space before the hash-sign (#) for the preprocessor-directive line
Note: It does not remove comments yet either.
This post has been edited by hulla: 03 October 2011 - 09:39 PM | http://www.dreamincode.net/forums/topic/249747-why-does-this-program-not-work-properly/page__p__1451430 | CC-MAIN-2016-07 | refinedweb | 207 | 55.03 |
Open Watcom 1.0 Released
JoshRendlesham writes "The Open Watcom C/C++ and FORTRAN 1.0 compilers have been officially released. The source, and binaries for Win32 and OS/2 systems, are available. This release also means that outside developers can join and contribute to the project." Or if you prefer, gcc is up to 3.2.2.
Rise of the Triads (Score:3, Interesting)
Open C-64 0.9 is now available. (Score:1)
DOS days (Score:4, Interesting)
graspee
Re:DOS days (Score:5, Informative)
Anyway, I'm excited by this because, well, competition is almost always a good thing. Hopefully gcc and Watcom can feed off each other and both products will improve. And perhaps more importantly for the build-everything users, another open source compiler might start moving people (like the developers of autoconf) to better support non-gcc compilers. This way, users who prefer Watcom's (or Intel's, or...) compiler can use it without as much tweaking.
Re:DOS days (Score:5, Informative)
graspee
Re:DOS days (Score:5, Informative)
Good old days (Score:1, Interesting)
What happened to Watcom (Score:5, Informative)
IIRC: Watcom was purchased by Powersoft. Powersoft's main product was a front-end database tool called PowerBuilder. One of Watcom's products was a small database called Watcom SQL. Powersoft bought Watcom so that they could ship Watcom SQL along with Powerbuilder, so that Powerbuilder could run OOTB.
Oddly enough, Sybase bought Powersoft a few years later so that they could use Powerbuilder to compete against Oracle's front-end tools. This meant Sybase ended up with Watcom's assets, even though they were not particularly interested in them.
cool ! that's great news (Score:3, Interesting)
Hopefully this sets a trend.
Re:cool ! that's great news (Score:5, Interesting)
Incidentally, if someone can tell me how to prevent loader crashes in "ld" under QNX when there's an undefined symbol in a trivial program that includes "", I'd appreciate it. Nobody in the QNX newsgroups seems to know.
Just don't... (Score:2, Informative)
Watcom was great. How about today? (Score:5, Interesting)
I received the email yesterday about Watcom's "release" to open source. In that email it says that Sybase felt there was no commercial value in the product anymore so they released it. My question is "Has Sybase been keeping this thing up? Is it useful today?" Or is this a scam to try to give life to a dying patient? I mean perhaps people working on this might be better off working on gcc or something.
Thanks!
Re:Watcom was great. How about today? (Score:5, Interesting)
One thing I know is that their optimization routine rocks.
Well, optimization routines can be divided into two parts: one is architecture-independent (which involves simplification of the AST and such) and the other is architecture-dependent. IIRC, their architecture-independent optimization was really great. It could correctly detect redundant code and simplify it.
I used to be an ASM programmer as I was a performance freak. When I compile my C/C++ program using Watcom, it almost always produced near optimized (i.e. the "gold-standard") asm code. I knew this when I dumped out the assembler code.
I knew that their arch-independent optimization is really good because when you add things such as calculation of busy expression (i.e. expression that you used over and over) and stuff, it correctly cache the calculation before hand. So, you will save a tremendous time, especially if you do it in a loop. The problem was (again, IIRC) that was not perfect and some of the expressions are left undetected. But, that's probably a bug.
IMHO, arch-independent optimization plays a lot greater role than the arch-dependent one (ok, some of you may not agree with me). Things like peephole optimization are great, but of limited usefulness once you apply the correct transformations of the AST and other internal structures.
This is also partly why the Intel optimizing compiler is also great. I heard that some of the folks are doing partial evaluation on the code -- which can greatly help speeding up the result. The idea was: If you use a particular routine (like a function) only with a handful of value ranges, it will automatically create a specialized and optimized function for you exploiting the nature of the input values. For example: You probably have seen the routine that calculates (-1)^n used in a routine that calculates x^y. The optimizing compiler thus should be able to generate: return (n & 1) == 0 ? 1 : -1; instead of the looping. This only involves some (expensive) static analysis computations. I have yet to see this in other compilers.
Therefore, this release is really really good thing. I hope that GNU compiler teams would pickup some of their good stuff.
Superb! (Score:2, Funny)
Re:Superb! (Score:5, Informative)
It's no coincidence that SGI and Cray have excellent Fortran compilers, their customers demand it.
(sorry I spent all of last Wednesday in 2 seminars with a fellow from SGI's Canadian HPC group, I'm still buzzing.
GCC performance and another thing... (Score:3, Interesting)
2. Does the Watcom WIN32 binary run under WINE?
Re:GCC performance and another thing... (Score:5, Informative)
Which to use; WatCom or GNU (Score:2)
Performance comparisons (Score:5, Interesting)
Win32 compilers (not including Watcom - and with good reason, it's a bitch to set up on Win32) [willus.com]
as linked from the djgpp FAQ, some info on DOS compilers [geocities.com].
So, hooray! A lesson in using Google before Slashdot mixed with some blatant karma-whoring.
PS. this [bagley.org] is good too.
No Time (Score:4, Funny)
Who is using Watcom in production? (Score:3, Interesting)
This would also be excellent information for Watcom to put on their site. It would give them much more legitimacy.
i have been waiting for this news (Score:1, Informative)
Mainframe compilers (Score:2, Interesting)
WX-REXX (Score:1)
Warp speed - I never really got there, but I sur tried!
Free software not a dumping ground! (Score:1, Insightful)
Yet another company trying to use free software as a dumping ground for useless software. What does Watcom have to offer today? Which vision of the future they have that could offer something that gcc or something the like cannot?
I do not see anything they can offer. Even if they had, would it not be better to just release the source code under the GNU GPL and integrate any valuable part into gcc? Thus they could create a new Cygnus based on their gained gcc expertise. But we do not need yet another also-ran, GPL-incompatible, redundant confused-ideas licensed open-source piece software.
Perhaps some years ago this would have been great. Now it is too little, too late.
Re:Free software not a dumping ground! (Score:5, Informative)
Sure it is (Score:5, Insightful)
Maybe you're not up to snuff on the philosophy of code-reuse and what Free Software means.
If software and code is a commodity, and the value then becomes its configuration/customization, then every little bit of trash that can be opened is a Very Good Thing. If the company was proprietary their entire corporate life, but releases the source as GPL (or BSD) when they fold, this is a Good Act and should be Lauded and Welcomed and Thanked.
The darn site's
GCC (Score:5, Informative)
Gcc is good, open, and could use some work, so please think about helping out. My favorite is MinGW [mingw.org] which is a really nice and decently maintained Win32 version of gcc and binutils. MinGW also distributes MSYS [mingw.org] which is a bash shell and other gnu utilities that make a windows box capable of running a Linux configure script. This allows much easier porting of GNU applications to windows and vice versa. There are several GUI compilers based on MinGW too, see the web page FAQ. A nice GUI GCC based compiler for Win32 is Bloodshed Dev-C++ [bloodshed.net], which I've used.
Cygwin [cygwin.com] is good too but I prefer MinGW (obviously).
So think about helping out, our tools will only get better if folks work on them.
Watcom Memories (Score:3, Insightful)
What killed them? Did they pull all their brains off C++ to work on PB? Was competition from MS too tough? Was their GUI builder (licensed from some 3rd party) too lame? Was the cost of implementing the C++ standard too high? (Watcom was late to offer STL -- they included their own (way different) libs instead.)
We were a couple of generations back on chips when Watcom pretty much stopped pushing their compiler technologies. I wonder how much they lose by not having optimizations targetting new hardware features.
Re:Watcom Memories (Score:5, Informative)
Watcom would have to eliminate all the support for the other platforms to license MFC and ship it with their compilers. And Microsoft was all but giving Visual C-- away at the time also.
The Watcom compiler was one of the fastest on the market from what I remember. I had heard that IBM used it for the WinOS/2 subsystem on OS/2 to make it a faster Windows than Dos/Windows.
Think about it, Microsoft HATES anything that abstracts the Win32 API, and cross-platform frameworks and cross-platform compilers were among the early targets of the beast in Redmond.
LoB
Now all we need.. (Score:2)
It's been a long time since I've used the Watcom compiler, but it used to be the bomb. I use gcc exclusively now, and sometimes pine for the day when a build was done in seconds instead of minutes. I'm betting it will be a difficult undertaking to incorporate the Watcom code, though.
No, actually (Score:5, Interesting)
Incidentally, vectorization in Intel C/C++ is a joke. I put so many hints into my code (aligned variables, processed stuff in suitable sized chunks etc.) and still couldn't trigger the compiler to vectorize. It's much easier to insert SSE instructions yourself.
The Intel compiler has better error reporting than MSVC++. I use it when I don't understand why MSVC++ is barfing on my template code. This is more useful than it sounds!
Cross compiler for PIC microcontrollers? (Score:3, Interesting)
The more compilers the better. (Score:3, Interesting)
VC++ is okay, beware that the cheap/free edition leaves out the optimizations. The standard library is much improved in the 7.0 release, but MS still like to disable some default warnings to paper over their own historical sins to keep things like MFC happy. The IDE is pretty nice and the documentation for the standard library is usually damn good, but I will never forgive Visual Studio's authors for the way they chose to dedent the case clauses in switch blocks.
g++ is finally a nice compiler in its 3.x incarnation. In the 2.9x days it was utter trash. The generated code is good and usually quite fast, but a bit on the bloated side. It is a little more permissive than I'd like even with -Wall -pedantic, but that's okay since it's not the only compiler out there. This is a good choice for producing final executables.
The verdict is still out on Watcom. Bundling STLport already puts it a step ahead of most, that thing can be a bitch and a half to get working with some of the commercial compilers.
Long File Names Support (Score:5, Interesting)
The long file name support is broken everywhere in this new release of Watcom C/C++/Fortran77. Even the included IDE doesn't do long file names. So you can imagine my disappointment when I opened C:\Program Files\watcom\\hello.c and hello.cpp in the IDE, only to get a blank file named "C:\program".
This is 2003 and Windows 95 didn't just come out last month. I mean, Sybase told us on June 30, 1999 that v11.0 would be the last major release of Watcom C, and long file names worked just fine there. Reincarnated as open source now without LFN support, does this mean that this feature got left behind in the afterlife?
It would be nice if... (Score:1)
Wonderful, but... (Score:1)
why Microsoft doesn't one-up Sun and release large
portions of its J++ product that it was supposedly planning to discontinue anyway.
will all open source compilers grow together? (Score:2)
So if the same open source developers work on both Watcom compilers and GNU compilers, does this mean that the best features of both will be carried back and forth (kind of unknowingly, but more out of convenience) until they start looking alike? I would assume that in the future these products may grow together, and the same destiny may apply to other open source efforts that have commonalities.
Watcom is ok. (Score:2)
(Last Journal: Sunday November 04, @03:38AM)
Re:Stop duplication of effort (Score:5, Interesting)
()
I'm looking forward to someone benchmarking gcc vs watcom to see how they do.
Re:Stop duplication of effort (Score:5, Insightful)
Re:Stop duplication of effort (Score:4, Funny)
()
Yeah. That whole "competition" thing is totally overrated.
Re:Hey, let's include a snide comment! (Score:1)
Not really because no one really cares about anything micheal says anyway.
Sometimes I wonder if he actually enjoys looking like an idiot.... hmmm....
Re:I liked the old commercial versions (Score:1)
Re:Hey, let's include a snide comment! (Score:1)
( | Last Journal: Thursday April 29 2004, @08:58PM) | http://developers.slashdot.org/developers/03/02/08/196239.shtml | crawl-002 | refinedweb | 2,458 | 65.93 |
Preview - Limit egress traffic for cluster nodes and control access to required ports and services in Azure Kubernetes Service (AKS)
By default, AKS clusters have unrestricted outbound (egress) internet access. This level of network access allows nodes and services you run to access external resources as needed. If you wish to restrict egress traffic, a limited number of ports and addresses must be accessible to maintain healthy cluster maintenance tasks. Your cluster is then configured to only use base system container images from Microsoft Container Registry (MCR) or Azure Container Registry (ACR), not external public repositories. You must configure your preferred firewall and security rules to allow these required ports and addresses.
This article details what network ports and fully qualified domain names (FQDNs) are required and optional if you restrict egress traffic in an AKS cluster. This feature is currently in preview.
Important
AKS preview features are self-service opt-in. Previews are provided "as-is" and "as available" and are excluded from the service level agreements and limited warranty. AKS Previews are partially covered by customer support on best effort basis. As such, these features are not meant for production use. For additional infromation, please see the following support articles:
Before you begin
You need the Azure CLI version 2.0.66 or later installed and configured. Run
az --version to find the version. If you need to install or upgrade, see Install Azure CLI.
To create an AKS cluster that can limit egress traffic, first enable a feature flag on your subscription. This feature registration configures any AKS clusters you create to use base system container images from MCR or ACR. To register the AKSLockingDownEgressPreview feature flag, use the az feature register command as shown in the following example:
Caution
When you register a feature on a subscription, you can't currently un-register that feature. After you enable some preview features, defaults may be used for all AKS clusters then created in the subscription. Don't enable preview features on production subscriptions. Use a separate subscription to test preview features and gather feedback.
az feature register --name AKSLockingDownEgressPreview --namespace Microsoft.ContainerService
It takes a few minutes for the status to show Registered. You can check on the registration status by using the az feature list command:
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AKSLockingDownEgressPreview')].{Name:name,State:properties.state}"
When ready, refresh the registration of the Microsoft.ContainerService resource provider by using the az provider register command:
az provider register --namespace Microsoft.ContainerService
Egress traffic overview
For management and operational purposes, nodes in an AKS cluster need to access certain ports and fully qualified domain names (FQDNs). These actions could be to communicate with the API server, or to download and then install core Kubernetes cluster components and node security updates. By default, egress (outbound) internet traffic is not restricted for nodes in an AKS cluster. The cluster may pull base system container images from external repositories.
To increase the security of your AKS cluster, you may wish to restrict egress traffic. The cluster is configured to pull base system container images from MCR or ACR. If you lock down the egress traffic in this manner, you must define specific ports and FQDNs to allow the AKS nodes to correctly communicate with required external services. Without these authorized ports and FQDNs, your AKS nodes can't communicate with the API server or install core components.
You can use Azure Firewall or a 3rd-party firewall appliance to secure your egress traffic and define these required ports and addresses. AKS does not automatically create these rules for you. The following ports and addresses are for reference as you create the appropriate rules in your network firewall.
Important
When you use Azure Firewall to restrict egress traffic and create a user-defined route (UDR) to force all egress traffic, make sure you create an appropriate DNAT rule in Firewall to correctly allow ingress traffic. Using Azure Firewall with a UDR breaks the ingress setup due to asymmetric routing. (The issue occurs because the AKS subnet has a default route that goes to the firewall's private IP address, but you're using a public load balancer - ingress or Kubernetes service of type: LoadBalancer). In this case, the incoming load balancer traffic is received via its public IP address, but the return path goes through the firewall's private IP address. Because the firewall is stateful, it drops the returning packet because the firewall isn't aware of an established session. To learn how to integrate Azure Firewall with your ingress or service load balancer, see Integrate Azure Firewall with Azure Standard Load Balancer.
In AKS, there are two sets of ports and addresses:
- The required ports and address for AKS clusters details the minimum requirements for authorized egress traffic.
- The optional recommended addresses and ports for AKS clusters aren't required for all scenarios, but integration with other services such as Azure Monitor won't work correctly. Review this list of optional ports and FQDNs, and authorize any of the services and components used in your AKS cluster.
Note
Limiting egress traffic only works on new AKS clusters created after you enable the feature flag registration. For existing clusters, perform a cluster upgrade operation using the
az aks upgrade command before you limit the egress traffic.
Required ports and addresses for AKS clusters
The following outbound ports / network rules are required for an AKS cluster:
- TCP port 443
- TCP port 9000 and TCP port 22 for the tunnel front pod to communicate with the tunnel end on the API server.
- To get more specific, see the *.hcp.<location>.azmk8s.io and *.tun.<location>.azmk8s.io addresses in the following table.
The following FQDN / application rules are required:
Optional recommended addresses and ports for AKS clusters
- UDP port 53 for DNS
The following FQDN / application rules are recommended for AKS clusters to function correctly:
Next steps
In this article, you learned what ports and addresses to allow if you restrict egress traffic for the cluster. You can also define how the pods themselves can communicate and what restrictions they have within the cluster. For more information, see Secure traffic between pods using network policies in AKS.
Feedback | https://docs.microsoft.com/en-us/azure/aks/limit-egress-traffic | CC-MAIN-2019-39 | refinedweb | 1,041 | 52.6 |
SourceMod 1.4.0 Release Notes
SourceMod 1.4. is a major update with many new features and bug fixes.
Contents
Overview for Admins
- New Game Support - SourceMod now runs on Bloody Good Time and E.Y.E. Divine Cybermancy and and added support for Nuclear Dawn.
- Added support for the third-party mods International Online Soccer: Source and Half-Life 2 Capture the Flag.
- Support for Mac OS X. (both listen servers and srcds_osx).
- Updated gamedata for many games and mods.
- Reserved slot hiding recently broken from updates in Source 2009 games with SourceTV and/or replay active has been fixed.
- Lots of stability fixes.
Overview for Developers
A full list of API additions and changes is available.
- Many new functions added
- Some existing functions made more useful
- Plugin compiling should be noticeably faster now with larger plugins
- Too much to name here. For overview of all sp API changes and additions, please see SourceMod_1.4.0_API_Changes
Compatibility Issues
In almost all cases, SourceMod 1.4.0 is fully backward compatible with the 1.3.x releases. The following are the few places where this is not the case.
ExplodeString behavior has changed
Return value has been fixed and handling has changed when there are delimiters at the end of the source string.
For more information, see SourceMod_1.4.0_API_Changes#String
Known plugins affected:
- (none known)
forceEdict parameter in CreateEntityByName is now ignored on ep2 and later:
For more information, see SourceMod_1.4.0_API_Changes#Functions
Known plugins affected:
- (none known)
TF2_OnGetHoliday deprecated, TF2_OnIsHolidayActive added
TF2_OnGetHoliday will no longer fire due to changes in TF2 and has been deprecated. TF2_OnIsHolidayActive can now be used instead to provide similar functionality.
For more information, see SourceMod_1.4.0_API_Changes#TF2_2
Known plugins affected:
Translations
SourceMod 1.4 comes with the following languages translated, thanks to community translators:
- Brazilian Portuguese
- Chinese Simplified
- Chinese Traditional
- Czech
- Danish
- Dutch
- Finnish
- German
- Hungarian
- Italian
- Japanese
- Korean
- Latvian
- Lithuanian
- Norwegian
- Polish
- Russian
- Slovak
- Spanish
- Swedish
- Turkish
Changelog
User Changes
- Added support for Max OS X (bug 4392).
- Added support for Bloody Good Time (bug 4780).
- Added support for E.Y.E Divine Cybermancy (bug 5035).
- Added gamedata for Nuclear Dawn.
- Added gamedata for International Online Soccer: Source (bug 5019).
- Added gamedata for Half-Life 2 Capture the Flag (bug 5114).
- Updated mapchooser and other base plugins with Nuclear Dawn specific fixes (bug 5117).
- Fixed ServerLang value not being read properly on startup (bug 4675).
- Added support for aliases in languages.cfg (bug 4858).
- Added output display to sm_rcon command (bug 5018).
- Flood protection bypass access can now be overridden with command name sm_flood_access (bug 4584).
- Added a reset argument to sm cvars command to revset cvar values to default (bug 5043).
- Fixed incorrect language identifiers for Chinese (both Trad. and Simplified) and Brazilian Portuguese not matching cl_language values (bug 5067).
- Added translation support for Bulgarian (bg).
- Fixed incorrect number of slots being hidden for reserve with sm_hideslots on Source 2009 with SourceTV or replay (bug 5094).
- sm_hideslots changes now take effect immediately instead of waiting until a client joins or leaves (bug 5094).
- Fixed sv_visiblemaxplayers getting stuck at previous max clients in some cases with reserves and SourceTV or replay (bug 5094).
- Removed error logging if an optional extension is not found (bug 5112).
- Fixed bots with semicolon in name being unkickable (bug 5120).
- Changed strings in ice-related funcommands to be translatable (bug 4540).
- Changed Bintools extension to use a single build for every engine (bug 4548).
Developer Changes
- Provided native interface for basecomm (bug 2594).
- Client language detection is too late. (bug 3714) (Tony A. "GoD-Tony").
- Added ServerCommandEx native to execute server command and retrieve output (bug 3873).
- Added ability to update clientprefs cookies values on clients not currently connected (bug 3882) (databomb).
- Added library "matchmaking_ds" support to gamedata lookups (bug 4158).
- Rooted menu handles to callbacks (bug 4353).
- Fixed corner cases with ExplodeString (bug 4629). (Michael "LumiStance").
- Fixed return omission with else-after-return (bug 4852).
- Added OnConditionAdded and OnConditionRemoved forwards to TF2 extension (bug 4851).
- Added new natives and forward to the cstrike extension (bug 4732, bug 4985) (Dr!fter).
- Added WaitingForPlayers forwards to the TF2 extension (bug 4704) (CrimsonGT).
- Updated and added more TF2 condition, weapon, and damagecustom defines (multiple bug#s).
- Fixed TF2_RemoveCondition not always removing conditions (bug 4981).
- Fixed MaxClients not being updated correctly in some places with SourceTV or replay active (bug 4986).
- Fixed some vars not being marked for init on first compile pass (bug 4643).
- Increased symbol name limit to 63 characters (bug 4564) (javalia).
- Fixed crash when dynamic arrays run out of memory (bug 4632).
- Fixed a crash that could happen from looking up out-of-bounds edict or entity indexes (bug 5080).
- Fixed client serials not getting cleared on disconnect (bug 5121).
- Added error on declaring arrays that the compiler is too buggy to handle (bug 4977).
- Removed reliance on gamedata for multiple SDKTools functions in ep2 and later (bug 4899).
- Added InvalidateClient and ReconnectClient natives to SDKTools (bug 4931) (Brian "Afronanny" Simon).
- Added ability to lookup and set values on the gamerules class (bug 4983.
- BaseComm now uses AddCommandListener for chat hooks (bug 4991).
- Fixed shutdown bug in SDKTools (bug 5063).
- Fixed MM-enabled extensions continuing to load after failing MM attach (bug 5042).
- Added GetDistGainFromSoundLevel native to SDKTools (bug 5066) (javalia).
- Added CheckAccess native to check an AdminId's command access (bug 5083).
- Fixed GetEntProp not sign-extending unsigned values less than 32 bits (bug 5105).
- Fixed crashing when calling CreateEntityByName or CreateFakeClient when no map is running (now errors) (bug 5119).
- Fixed erring in kick function (e. bad translation) causing client to become unkickable until disconnect (bug 5120).
- Fixed KickClientEx not immediately kicking client if client was in kick queue (bug 5120).
- Added IsClientSourceTV and IsClientReplay natives (bug 5124).
- Added support for getting and setting individual array elements with Get/Set EntProp functions (bug 4160).
- Added support for threaded query handles to SQL_GetInsertId and SQL_GetAffectedRows (bug 4699) (Nephyrin).
- Added a GetGameRules function to ISDKTools for extensions to easily get the GameRules class pointer (bug 4707).
- Added GetMessageName to IUserMessages (bug 4573) (Zach "theY4Kman" Kanzler)
- Added HintTextMsg to IGameHelpers (bug 4950).
- Added ProcessTargetString simple filter API (bug 4404).
- Moved much functionality from core bins to logic bin (bug 4406, bug 4402).
- Fixed bogus asserts in sp compiler (bug 4486, bug 4487).
- Greatly improved sp compiler performance (~5x overall speedup) (bug 3820, bug 4493, bug 4495).
- Changed entity output detours to use CDetour (bug 4416).
- Enhanced nominations API (bug 4677) (CrimsonGT).
- Added Linux support for profiling natives (bug 4927).
- Added a new ValveCallType that allows for arbitrary |this| parameters, as well as associated features in gamedata and for reading/writing memory (bug 3520) (Downtown1).
- Updated TF2 extension to handle Valve's changes to the "holiday" system (bug 5150). | https://wiki.alliedmods.net/index.php?title=SourceMod_1.4.0_Release_Notes&oldid=8341?title=SourceMod_1.4.0_Release_Notes&oldid=8341 | CC-MAIN-2017-09 | refinedweb | 1,134 | 59.8 |
contribution back to the Jython community, I wrote an article that=20=
describes the various options Jython users have to write threaded=20
applications. The complete and formatted article is available on the=20
PushToTest Web site at:
I have also copied the text of the article below. I am open to=20
feedback, corrections, and additions. Hopefully this will be a living=20
document as Jython grows.
My thanks goes to Clark Updike (Clark.Updike@...), Jeff Emanuel=20=
(jemanuel@...), Fred Sells (fred@...) for providing=20
feedback, comments, and help.
---
Writing Threaded Applications in Jython
Abstract:
Jython is a popular object oriented scripting language among software=20
developers, QA technicians, and IT managers. It is also the scripting=20
language in TestMaker and TestNetwork. In this article, Frank Cohen=20
looks at Jython=92s ability to construct threaded multi-tasking =
software,=20
shows the best practice to build scalable and thread-safe code, and=20
points out how to avoid common mistakes and misunderstandings
Feel free to share this document in its entirety with your
friends and associates; However, this document remains
Jython and Threading
Jython is an object oriented scripting language that is popular with=20
software developers, QA technicians, and IT managers. Jython is a 100%=20=
Java application. At runtime Jython scripts compile into Java bytecodes=20=
and run in the Java virtual machine. Jython classes are first class=20
Java objects, so Jython can import any Java object on the classpath and=20=
call its methods. Jython gives Java developers the best of both worlds.=20=
Consequently, more and more test automation software, installation=20
scripts, system monitoring code, and utility script code is being=20
written in Jython.
Jython provides an easy environment to build objects. One of my first=20
Jython scripts looked like this:
class myclass:
def setMyparam( self, myparam ):
self.storeit =3D myparam
def getMyparam( self ):
return self.storeit
a =3D myclass()
a.setMyparam( "frank" )
b =3D myclass()
b.setMyparam( "lorette" )
print "a.storeit =3D", a.getMyparam()
print "b.storeit =3D", b.getMyparam()
This script implements a class name myclass. It has two methods, one to=20=
set a parameter and the second to get the stored value. Here is the=20
output when I run the script:
a =3D frank
b =3D lorette
While this is straightforward enough, I envision using an object like=20
myclass in a threaded application. These questions come to mind:
Which dictionary is the storeit variable stored?
Do I have to worry that some other call to another instance of myclass=20=
will get the storeit value from the wrong instance?
Is myclass thread safe?
Jython stores variables in dictionaries. Each new class gets its own=20
dictionary when Jython instantiates the class. In myclass, self.storeit=20=
refers to the instance of storeit in the dictionary for the instance of=20=
myclass. As long as the script uses self.storeit then no other instance=20=
of myclass will get the self.storeit value. However, imagine the script=20=
includes a bug such as:
def getMyparam ( self ):
return storeit
In this example, I forgot to use self.storeit in the getMyparam method.=20=
Jython implements the equivalent of a Java Static class when myclass is=20=
defined in the script. This faulty print method retrieves the storeit=20
value from the static class version of myclass and not from the=20
instance of myclass referred to by a or b.
When multiple threads concurrently call setMyparam on the same instance=20=
of myclass, then it is anyone=92s guess which thread uses the setMyparam=20=
method last and actually sets the final value of Myparam. This is=20
commonly referred to as a race condition. Consider the following=20
example program:
import thread
class myclass:
def setMyparam( self, myparam ):
self.storeit =3D myparam
def getMyparam( self ):
return self.storeit",) )
This script defines myclass with two methods: one to set a value and=20
one to get a value. Then it defines a runthenumbers method that gets=20
the value from a myclass object, prints it to the screen, and stores a=20=
new myclass value. The script then instantiates a myclass that will be=20=
referred to by a and sets the initial value to =93frank=94. Lastly, the=20=
script instantiates two concurrently running threads that operate on=20
the instance of myclass.
When the script runs, both threads use the setMyparam and getMyparam=20
method of myclass. It is likely that eventually one thread will=20
interrupt the other when using setMyparam. In this case it anyone=92s=20
guess which thread=92s call to setMyparam stores the final value since=20=
threads are meant to run concurrently by timesharing the system=20
resources. In summary, this approach to coding a threaded application=20
has these problems:
You have no way of telling the conditions of the threads: Have they=20
started? Have they finished?
Multiple threads may try to call setMyparam concurrently. In the=20
ensuing race condition, the last thread to call setMyparam wins. And=20
there is no way to tell.
This is not to say that Jython cannot produce thread safe code. Jython=20=
does! However, there are multiple designs to create thread safe classes=20=
that avoid these problems.
The Many Ways To Thread An Application in Jython
Jython's ability to use Java objects introduces a variety of options=20
when it comes to building threaded applications. This section describes=20=
four options and examines their relative merits and problems.
Python Threads
This example uses the Jython thread library. However, to overcome=20
possible race conditions the script uses Jython's synchronized library=20=
to guarantee that only one thread can call a method at a time:
import thread, synchronize
class myclass:
def setMyparam( self, myparam ):
self.storeit =3D myparam
setMyparam=3Dsynchronize.make_synchronized( setMyparam )
def getMyparam( self ):
return self.storeit
getMyparam=3Dsynchronize.make_synchronized( getMyparam )",) )
In this example, make_synchonized uses the same technique as Java to=20
synchronize method calls. Jython implements the synchronize library=20
using this Java code:
public static PyObject make_synchronized(PyObject callable)
{
return new SynchronizedCallable(callable);
}
and SynchronizedCallable has a __call__ operator to call the argument=20
callable's __call__ method in a synchronized block like this:
synchronized(synchronize._getSync(arg))
{
return callable.__call__(arg);
}
Python Threads provides an easy way in a Jython script to create a=20
threaded application and synchronized thread safe methods within a=20
class object. A newer Python technique uses the Threading library. Here=20=
is an example:
import threading
def greet( name ):
print "greetings", name
count =3D 0
t =3D threading.Thread(
target=3Dgreet,
name=3D"MyThread %d" % count,
args=3D( "threading.Thread", )
)
t.start()
The new Python technique provides a slightly more Java-like feel to the=20=
syntax to create threads and provides a simple way to name a thread.=20
Aside from those advantages I observe no performance or functional=20
difference from the older Python technique.
Java Threads
This example uses the Java Thread library to implement a threaded=20
example:
from java.lang import Thread, Runnable
class GreetJob( Runnable ):
def __init__( self, name ):
self.name =3D name
def run( self ):
print self.name
count =3D 1
t =3D Thread( GreetJob( "Runnable" ), "MyThread %d" % count )
t.start()
Jython can also implement threads by extending the Java Thread class.=20
Between these two techniques I have observed no differences in=20
performance or functionality:
from java.lang import Thread
class GreetThread( Thread ):
def __init__( self, name, count ):
Thread.__init__( self, "MyThread %d" % count )
self._name =3D name # Thread has a 'name' attribute
def run( self ):
print self._name
count =3D 2
t =3D GreetThread( "Thread subclass", count )
t.start()
I find it very unusual in a Python environment to have so many=20
different ways to accomplish the same goal. Especially considering=20
Python has a "one obvious way to do it" design principle. Therefore,=20
next I describe what I believe to be the best practice to design Jython=20=
scripts that implement threads.
The Best Practice
Based on my experience writing threaded applications in Jython, using=20
Java Threads and the Runnable interface is the best practice. The=20
following Jython script implements the best practice for building=20
threaded applications in Jython:
from java.lang import Thread, Runnable
import synchronize
class myclass( Runnable ):
def __init__( self, myparam ):
self.storeit =3D myparam
def setMyparam( self, myparam ):
self.storeit =3D myparam
setMyparam=3Dsynchronize.make_synchronized( setMyparam )
def printMyparam( self ):
print "myclass: myparam =3D",self.storeit
printMyparam=3Dsynchronize.make_synchronized( printMyparam )
def run( self ):
for self.i in range(5):
self.printMyparam()
count =3D 2
a =3D myclass()
a.setMyparam( "frank" )
t =3D Thread( a, "MyThread %d" % count )
t.start()
In summary, the best practice makes these points:
The above code example defines myclass to implement the Runnable=20
interface from the Java Thread object. Runnable works best because it=20
offers the thread management APIs to check status, set daemon thread=20
status, and kill a thread.
I use the make_synchronized method of the synchronize library to make=20
certain that only only one call to the method is possible at any given=20=
time.
The __init__ method creates the storeit object and sets the initial=20
value. When the class is instantiated Jython calls the __init__ method=20=
on the instance of the new class so there is no need to synchronize=20
__init__ because only the new instantiation of the class has access to=20=
it. __init__ is thread safe.
Joining Threads
An additional technique supported by the Java Thread technique is that=20=
threads may be joined. Your scripts use the current thread and one new=20=
thread and then "join" the threads so the current thread doesn't=20
proceed until the new one finishes. Here's an example of that:
import threading
import time
def pause(threadName, sleepSeconds):
# create an attribute
threading.currentThread.isDone =3D 0
print "Thread %s is sleeping for %s seconds." \
% (threadName, sleepSeconds)
time.sleep(sleepSeconds)
print "Thread %s is waking up." % threadName
threading.currentThread().isDone =3D 1
newThread =3D threading.Thread(name=3D'newThread',\
target=3Dpause,args=3D(=20.
Where To Find Additional Information
Try these URLs for information that helped me write this article:
About The Author
Frank Cohen is the "go to" guy for enterprises needing to test and=20
solve problems in complex interoperating information systems,=20
especially Web Services. Frank is founder of PushToTest, a test=20
automation solutions business and author of Java Testing and Design:=20
=46rom Unit Tests to Automated Web Tests (Prentice Hall.) Frank =
maintains=20
TestMaker, a free open-source utility that uses Jython to build=20
intelligent test agents to check Web Services for scalability,=20
performance and functionality. PushToTest Global Services customizes=20
TestMaker to an enterprise's specific needs, conducts scalability and=20
performance tests, and trains enterprise developers, QA technicians and=20=
IT managers on how to use the test environment for themselves. Details=20=
are at. Contact Frank at fcohen@...=
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/jython/mailman/message/11763412/ | CC-MAIN-2016-44 | refinedweb | 1,880 | 57.16 |
NAME
gl_line - draw a line
SYNOPSIS
#include <vgagl.h> void gl_line(int x1, int y1, int x2, int y2, int c);
DESCRIPTION
Draw a line from point (x1, y1) to (x2, y2) inclusively in color c. You should not assume that the same drawing trajectory is used when you exchange start and end points. To use this program one first sets up a mode with a regular vga_setmode call and vga_setpage(0), with possibly a vga_setlinearaddressing call. Then a call to gl_setcontextvga(mode) is made. This makes the information about the mode available to gl_line. The pixels are placed directly into video memory using inline coded commands.(3).
AUTHOR
This manual page was edited by Michael Weller <eowmob@exp-math.uni- essen.de>. The exact source of the referenced demo. | http://manpages.ubuntu.com/manpages/hardy/man3/gl_line.3.html | CC-MAIN-2014-35 | refinedweb | 130 | 66.33 |
Interacting with Excel in python
Posted March 08, 2013 at 02:39 PM | categories: programming | tags: | View Comments
There will be times it is convenient to either read data from Excel, or write data to Excel. This is possible in python (). You may also look at ().
import xlrd wb = xlrd.open_workbook('data/example.xlsx') sh1 = wb.sheet_by_name(u'Sheet1') print sh1.col_values(0) # column 0 print sh1.col_values(1) # column 1 sh2 = wb.sheet_by_name(u'Sheet2') x = sh2.col_values(0) # column 0 y = sh2.col_values(1) # column 1 import matplotlib.pyplot as plt plt.plot(x, y) plt.savefig('images/excel-1.png')
[u'value', u'function'] [2.0, 3.0]
1 Writing Excel workbooks
Writing data to Excel sheets is pretty easy. Note, however, that this overwrites the worksheet if it already exists.
import xlwt import numpy as np x = np.linspace(0, 2) y = np.sqrt(x) # save the data book = xlwt.Workbook() sheet1 = book.add_sheet('Sheet 1') for i in range(len(x)): sheet1.write(i, 0, x[i]) sheet1.write(i, 1, y[i]) book.save('data/example2.xls') # maybe can only write .xls format
2 Updating an existing Excel workbook
It turns out you have to make a copy of an existing workbook, modify the copy and then write out the results using the
xlwt module.
from xlrd import open_workbook from xlutils.copy import copy rb = open_workbook('data/example2.xls',formatting_info=True) rs = rb.sheet_by_index(0) wb = copy(rb) ws = wb.add_sheet('Sheet 2') ws.write(0, 0, "Appended") wb.save('data/example2.xls')
3 Summary
Matlab has better support for interacting with Excel than python does right now. You could get better Excel interaction via COM, but that is Windows specific, and requires you to have Excel installed on your computer. If you only need to read or write data, then xlrd/xlwt or the openpyxl modules will server you well.
Copyright (C) 2013 by John Kitchin. See the License for information about copying. | http://kitchingroup.cheme.cmu.edu/blog/2013/03/08/Interacting-with-Excel-in-python/ | CC-MAIN-2018-30 | refinedweb | 330 | 64.07 |
There are two parameter passing methods for passing (or calling) data to functions in the C language: call by value and call by reference (also called pass by value and pass by reference). The most significant distinction between them is that in call by value, the actual value of the parameter is passed, while in call by reference, the reference (address) of the parameter is passed; hence any change made to the formal arguments is also reflected in the actual arguments.
Let’s understand call by value and call by reference in the C language.
In C, all function arguments are passed “by value” because C doesn’t support references the way C++ and Java do. In C, the calling and called functions don’t share any memory: each has its own copy, so the called function can’t directly change a variable in the calling function; it may only alter its private, temporary copy.
In C, we use pointers to attain call by reference. In C++, we could use either pointers or references for pass by reference. In Java, primitive types are passed as values, and for non-primitive types, the object reference itself is passed by value.
Call by value in C
• In call by value, a copy of the actual arguments is passed to the formal arguments, and the two sets of parameters are stored in different stack memory locations. Any change made to a parameter inside the function therefore applies to the current function only; it will not change the value of the variable inside the caller, such as main().
• In a call by value function, we can’t alter the value of the actual parameter through the formal parameter.
• In call by value, distinct memory is allocated to the actual and formal parameters, because the value of the actual parameter is copied into the formal parameter.
• The actual parameter is the argument used at the function call, whereas the formal parameter is the argument used in the function definition.
Consider the following example for the Call by Value in C
If the parameter passed by value, the parameter replicated from the variable used in, for example, main() into a variable used by the function. So if the parameter passed altered within the function, the value only changed from the variable used within the function. Let’s take us to look at a call by value example:
#include <stdio.h> void pass_by_value(int p) { printf("Inside pass_by_value p = %d before adding 10.\n", p); p += 10; printf("Inside pass_by_value p = %d after adding 10.\n", p); } int main() { int Var = 10; printf("Var = %d before function pass_by_value.\n", Var); pass_by_value(Var); printf("Var = %d after function pass_by_value.\n", Var); return 0; }
The output of this call by value code example:
Var = 10 before function pass_by_value. Inside pass_by_value p = 10 before adding 10. Inside pass_by_value p = 20 after adding 10. Var = 10 after function pass_by_value.
Let us take a look at what is going on within this pass-by-value source code example. From the main() we create an integer with the value of 10. We print some information at each point, starting by printing our variable Var. Then function pass_by_value is called, and we enter the variable Var. This variable (Var) subsequently copied to the function variable p. At the function, we add 10 to p (and call some print statements). Then when another statement called in main() the value of variable Var printed. We can understand that the value of variable Var isn’t affected by the call of the function pass_by_value().
Call by reference in C
• Simply call by reference method, the location (address) of the variable passed to the function call as the actual parameter.
• The value of the actual parameters could be altered by altering the formal parameters because the address of the actual parameters passed.
• In call by reference method, Both the actual and formal parameters refer to the same locations. Any changes made inside function performed on the value stored at the address of the actual parameters and the modified value gets stored at the same address.
Consider the following example for the pass_by_reference(int *p) { printf("Inside pass_by_reference p = %d before adding 10.\n", *p); (*p) += 10; printf("Inside pass_by_reference p = %d after adding 10.\n", *p); } int main() { int Var = 10; printf("Var = %d before function pass_by_reference.\n", Var); pass_by_reference(&amp;amp;amp;amp;amp;Var); printf("Var = %d after function pass_by_reference.\n", Var); return 0; }
The output of this call by reference source code example will look like this:
Var = 10 before function pass_by_reference. Inside pass_by_reference p = 10 before adding 10. Inside pass_by_reference p = 20 after adding 10. Var = 20 after function pass_by_reference.
Let’s explain what’s going on within this source code example. We begin with an integer Var with the value 10. The function pass_by_reference() called, and also the address of the variable Var passed to the function. In a function, there’s some before and after a print statement is completed and there is 10 added to value at memory pointed by p. Thus, at the end of the function, value is 20. Then in main(), we print the variable Var, and as you can see, the value is altered (as expected) to 20.
Difference between call by value and call by reference in C
When to Use Call by Value and When to use Call by Reference?
One Benefit of the call by reference method is that it is using pointers, so there’s is no doubling of the memory used by the variables (like the copy of the call by value method). It’s, naturally, good, lowering the memory footprint is almost always a good thing. So why don’t we create all the parameters call by reference?
There are two reasons why this isn’t a good idea, and you have to select between call by value and call by reference. The reason is the side Impacts and privacy. Unwanted side effects are often due to accidentally changes that made into a call by reference parameter. Additionally, typically, you would like the data to be private and that someone calling a function can change if you want it. So it’s better to use a call by value by default and only use call by reference if data changes expected.
Advantages of using Call by value method
• The method does not alter the original value.; therefore, it’s keeping data.
• Whenever a function is called it, never affect the real contents of the actual arguments.
• Value of actual arguments passed into the formal arguments; therefore, any modifications made in the formal argument doesn’t impact the actual instances.
Advantages of using Call by reference method
• The function may alter the value of the argument, which can be very helpful.
• It doesn’t create replicate data for holding only one value which can help you to conserve memory space.
• Inside this method, there isn’t any copy of the argument created. Therefore it’s processed extremely fast.
• Helps you to avoid alterations done by mistake.
• A individual studying the code never understands that the value could be modified at the function.
Disadvantages of using Call by value method
• Changes to actual parameters may also alter corresponding argument variables.
• Inside this method, arguments should be variables.
• You can not directly alter a variable in a function body.
• Sometime argument may be complicated expressions.
• You will find two copies made for the same variable that’s is not memory efficient.
Disadvantages of using Call by reference method
• Powerful non-null guarantee. A function taking at a reference ought to be certain the input is non-null. Therefore, the null check does not need to be made.
• Passing by reference makes the function maybe not pure theoretically.
• A lifetime warranty is a large issue with references. It can be especially risky when working with lambdas and multi-threaded programs. | https://ecomputernotes.com/what-is-c/call-by-value-and-call-by-reference-in-c | CC-MAIN-2022-21 | refinedweb | 1,325 | 55.95 |
Hide Forgot
Spec URL:
SRPM URL:
Description:
Cpqarrayd is a daemon to monitor HP (compaq) arraycontrollers. It reports any
status changes, like failing disks, to the syslog and optionally to a remote
host using SNMP traps.
Cpqarrayd was part of the kernel-utils package back in Red Hat Linux 9 but was then left out in the cold when then package was split up into it's components.
The original source contains support for the old(?) IDA arrays but this functionality requires undocumented IOCTL:s, i.e. they are not available in the header files exported by the kernel. Rather then copying the IOCTL definitions from the kernel's source code, I've disabled support for the IDA arrays in the package presented here.
I was preparing my own packages after the internal discussion some time ago, so
I will review this one.
OK source files match upstream:
9d76dfe75507eabcc9e406bc88eeab7a660b057f cpqarrayd-2.3 in package. no shared libraries are added to the regular linker search paths.
N/A no headers.
OK no pkgconfig files.
OK no libtool .la droppings.
OK not a GUI app.
some notes:
- ask upstream to run automake with the "--copy" to include not the links but
real copies of files in the source archive
- I would enclose the support for IDA in #ifdef WITH_IDA/#endif instead of
commenting it out, but that's not a blocker
- and maybe we could work a bit on the IDA support ...
this package is APPROVED
New Package CVS Request
=======================
Package Name: cpqarrayd
Short Description: daemon to monitor HP (compaq) arraycontrollers
Owners: djuran
Branches: F-8 F-9 EL-4 EL-5
Cvsextras Commits: yes
cvs done.
cpqarrayd-2.3-6.fc9 has been submitted as an update for Fedora 9
cpqarrayd-2.3-6.fc8 has been submitted as an update for Fedora 8
cpqarrayd-2.3-6.fc9 has been pushed to the Fedora 9 stable repository. If problems still persist, please make note of it in this bug report.
cpqarrayd-2.3-6.fc8 has been pushed to the Fedora 8 stable repository. If problems still persist, please make note of it in this bug report. | https://partner-bugzilla.redhat.com/show_bug.cgi?id=455699 | CC-MAIN-2020-10 | refinedweb | 357 | 65.52 |
How do I deploy artifacts to Amazon S3 in a different account using CodePipeline?
Last updated: 2020-10-20
I want to deploy artifacts to Amazon Simple Storage Service (Amazon S3) in a different account using AWS CodePipeline with an S3 deploy action provider. I also want to set the owner of the artifacts as the target account.
Short description
The following resolution is based on an example scenario that assumes the following:
- You have two accounts: a development (dev) account and a production (prod) account.
- The input bucket in the dev account is called codepipeline-input-bucket (with versioning enabled).
- The default artifact bucket in the dev account is called codepipeline-us-east-1-0123456789.
- The output bucket in the prod account is called codepipeline-output-bucket.
- You're deploying artifacts from the dev account to an S3 bucket in the prod account.
- You want to assume a cross-account role created in the prod account, and then deploy the artifacts. The role sets the object owner of artifacts as the target prod account instead of the dev account.
Resolution
Create an AWS Key Management Service (AWS KMS) key to use with CodePipeline in the dev account
1. Open the AWS KMS console in the dev account.
2. In the navigation pane, choose Customer managed keys.
3. Choose Create Key.
4. For Key type, choose Symmetric Key.
5. Expand Advanced Options.
6. For Key material origin, choose KMS, and then choose Next.
7. For Alias, enter s3deploykey.
Note: Replace s3deploykey with the alias for your key.
8. Choose Next.
9. In the Key administrators section of the Define key administrative permissions page, select an AWS Identity and Access Management (IAM) user or role as your key administrator, and then choose Next.
10. In the Other AWS accounts section of the Define key usage permissions page, choose Add another AWS account.
11. In the text box that appears, add the account ID of the prod account, and then choose Next.
Note: You can also select an existing service role in the This Account section, and then skip the steps in the Update the KMS usage policy in the dev account section.
12. Review the key policy, and then choose Finish.
Important: You must use the KMS customer managed key for cross-account deployments. If the key isn't configured, then CodePipeline encrypts the objects with default encryption, which can't be decrypted by the role in the target account.
Create a CodePipeline in the dev account
1. Open the CodePipeline console, and then choose Create pipeline.
2. For Pipeline name, enter crossaccountdeploy.
Note: Replace crossaccountdeploy with the name of your pipeline.
The Role name text box is auto populated with the service role name AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy. You can also choose an existing service role with access to the KMS key.
3. Expand the Advanced settings section.
4. For Artifact store, select Default location.
Note: You can select Custom location if that's required for your scenario.
5. For Encryption key, select Customer Managed Key.
6. For KMS customer master key, select s3deploykey from the list, and then choose Next.
Important: Replace s3deploykey with the alias of your key.
7. On the Add source stage page, for Source provider, choose Amazon S3.
8. For Bucket, enter codepipeline-input-bucket.
Note: Replace codepipeline-input-bucket with the name of your input bucket.
Important: The input bucket must have versioning enabled to work with CodePipeline.
9. For S3 object key, enter sample-website.zip.
Important: To use a sample AWS website instead of your own website, see Tutorial: Create a pipeline that uses Amazon S3 as a deployment provider. Then, search for "sample static website" in the Prerequisites of the 1: Deploy Static Website Files to Amazon S3 section.
10. For Change detection options, choose Amazon CloudWatch Events (recommended), and then choose Next.
11. On the Add build stage page, choose Skip build stage, and then choose Skip.
12. On the Add deploy stage page, for Deploy provider, choose Amazon S3.
13. For Region, choose US East (N. Virginia).
Important: Replace US East (N. Virginia) with the AWS Region of your output bucket.
14. For Bucket, enter the name of the prod bucket codepipeline-output-bucket.
Note: Replace codepipeline-output-bucket with the name of the output bucket in your prod account.
15. Select the Extract file before deploy check box.
Note: If required, enter a path for Deployment path.
16. Choose Next.
17. Choose Create pipeline.
Now, your pipeline is triggered, but the source stage fails. Then, you receive the following error:
The object with key 'sample-website.zip' does not exist.
The Upload the sample website to the input bucket section shows you how to resolve this error later on.
Update the KMS usage policy in the dev account
Important: Skip this section if you're using an existing CodePipeline service role and already added that role as a key user in the Create an AWS Key Management Service (AWS KMS) key to use with CodePipeline in the dev account section.
1. Open the AWS KMS console in the dev account, and then choose s3deploykey.
Important: Replace s3deploykey with the alias of your key.
2. In the Key users section, choose Add.
3. In the search box, enter the service role AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy, and then choose Add.
Configure a cross-account role in the prod account
To create policies:
1. Open the IAM console in the prod account.
2. In the navigation pane, choose Policies, and then choose Create policy.
3. Choose the JSON tab, and then enter the following policy in the JSON editor:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:*" ], "Resource": [ "arn:aws:s3:::codepipeline-output-bucket/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::codepipeline-output-bucket" ] } ] }
Note: Replace codepipeline-output-bucket with the name of the output bucket in your prod account.
4. Choose Review policy.
5. For Name, enter outputbucketfullaccess.
6. Choose Create policy.
7. To create another policy, choose Create policy.
8. Choose the JSON tab, and then enter the following policy in the JSON editor:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "kms:DescribeKey", "kms:GenerateDataKey*", "kms:Encrypt", "kms:ReEncrypt*", "kms:Decrypt" ], "Resource": [ "arn:aws:kms:us-east-1:<dev-account-id>:key/<key id>" ] }, { "Effect": "Allow", "Action": [ "s3:Get*" ], "Resource": [ "arn:aws:s3:::codepipeline-us-east-1-0123456789/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::codepipeline-us-east-1-0123456789" ] } ] }
Note: Replace the ARN of the KMS key that you created. Replace codepipeline-us-east-1-0123456789 with the name of the artifact bucket in your dev account.
9. Choose Review policy.
10. For Name, enter devkmss3access.
11. Choose Create policy.
To create a role:
1. Open the IAM console in the prod account.
2. In the navigation pane, choose Roles, and then choose Create role.
3. Choose Another AWS account.
4. For Account ID, enter the dev account id.
5. Choose Next: Permissions.
6. From the list of policies, select outputbucketfullaccess and devkmss3access.
7. Choose Next: Tags.
8. (Optional) Add tags, and then choose Next: Review.
9. For Role name, enter prods3role.
10. Choose Create role.
11. From the list of roles, choose prods3role.
12. Choose the Trust relationship tab, and then choose Edit Trust relationship.
13. In the Policy Document editor, enter the following policy:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::<dev-account-id>:role/service-role/AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy" ] }, "Action": "sts:AssumeRole", "Condition": {} } ] }
Note: Replace dev-account-id with the account ID of your dev environment and the service role for your pipeline.
14. Choose Update Trust Policy.
Configure the CodePipeline artifact bucket and service role in the dev account
1. Open the Amazon S3 console in the dev account.
2. In the Bucket name list, choose codepipeline-us-east-1-0123456789.
Note: Replace codepipeline-us-east-1-0123456789 with the name of your artifact bucket.
3. Choose Permissions, and then choose Bucket Policy.
4. In the text editor, update your existing policy with the following statements:
{ "Sid": "", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<prod-account-id>:root" }, "Action": [ "s3:Get*", "s3:Put*" ], "Resource": "arn:aws:s3:::codepipeline-us-east-1-0123456789/*" }, { "Sid": "", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<prod-account-id>:root" }, "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::codepipeline-us-east-1-0123456789" }
Important: To align with proper JSON formatting, add a comma after the existing statements.
Note: Replace prod-account-id with the account ID of your prod environment. Replace codepipeline-us-east-1-0123456789 with your artifact bucket name.
5. Choose Save.
6. Open the IAM console in the dev account.
7. In the navigation pane, choose Policies, and then choose Create policy.
8. Choose the JSON tab, and then enter the following policy in the JSON editor:
{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": [ "arn:aws:iam::<prod-account-id>:role/prods3role" ] } }
Note: Replace prod-account-id with the account ID of your prod environment.
9. Choose Review policy.
10. For Name, enter assumeprods3role.
11. Choose Create policy.
12. In the navigation pane, choose Roles, and then choose AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy.
Note: Replace AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy with your service role if applicable.
13. Choose Attach Policies, and then select assumeprods3role.
14. Choose Attach Policy.
Update the CodePipeline to use a cross-account role in the dev account
1. To get the pipeline definition into a file called codepipeline.json, run the following command:
aws codepipeline get-pipeline --name crossaccountdeploy > codepipeline.json
Note: Replace crossaccountdeploy with the name of your pipeline.
2. Update the deploy section in codepipeline.json to include the roleArn. For example:
"roleArn": "arn:aws:iam::your-prod-account id:role/prods3role",
To add the roleArn, make the following updates:
{ "name": "Deploy", "actions": [ { "name": "Deploy", "actionTypeId": { "category": "Deploy", "owner": "AWS", "provider": "S3", "version": "1" }, "runOrder": 1, "configuration": { "BucketName": "codepipeline-output-bucket", "Extract": "true" }, "outputArtifacts": [], "inputArtifacts": [ { "name": "SourceArtifact" } ], "roleArn": "arn:aws:iam::<prod-account-id>:role/prods3role", "region": "us-east-1", "namespace": "DeployVariables" } ] }
Note: Replace the prod-account-id with the account ID of your prod environment.
3. Remove the metadata section at the end of your codepipeline.json file. For example:
"metadata": { "pipelineArn": "arn:aws:codepipeline:us-east-1:<dev-account-id>:crossaccountdeploy", "created": 1587527378.629, "updated": 1587534327.983 }
Important: To align with proper JSON formatting, remove the comma before the metadata section.
4. To update the pipeline, run the following command:
aws codepipeline update-pipeline --cli-input-json
Upload the sample website to the input bucket
1. Open the Amazon S3 console in the dev account.
2. In the Bucket name list, choose codepipeline-input-bucket.
Note: Replace codepipeline-input-bucket with the name of your input bucket.
3. Choose Upload, and then choose Add files.
4. Select the sample-website.zip file that you downloaded earlier.
5. Choose Upload.
Now, CodePipeline is triggered and the following happens:
1. The source action selects the sample-website.zip from codepipeline-input-bucket. Then, the source action places the website as a source artifact inside the codepipeline-us-east-1-0123456789 artifact bucket.
2. In the deploy action, the CodePipeline service role AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy assumes the prods3role of the prod account.
3. CodePipeline uses the prods3role access for the KMS key and artifact bucket in the dev account to get the artifacts. Then, CodePipeline deploys the extracted files to the codepipeline-output-bucket in the prod account.
The extracted objects in codepipeline-output-bucket now have the prod account as the owner.
Did this article help?
Do you need billing or technical support? | https://aws.amazon.com/premiumsupport/knowledge-center/codepipeline-artifacts-s3/ | CC-MAIN-2021-10 | refinedweb | 1,942 | 50.53 |
View Questions and Answers by Category
Advertisements
How can we use try and catch block in exception handling?
Here is an example of Exception handling.
Example:
public class TryCatchExample{ public static void main(String [] args){ int a= 10; int b= 0; int x; try{ x=a/b; }catch (Exception er){ System.out.println("Division by zero"); System.out.println(er); } } }
Result Display:
Division by zero Java.lang.ArithmeticException: / by zero
Description:- We have created a class named TryCatchExample. In the main method, we have declared some variables of integer type. Then to perform exception handling, we have used try and catch block where we have divided the integer value by 0 in the try block. This results in an Arithmetic Exception. The catch block will show this exception. | http://www.roseindia.net/answers/viewqa/Java-Beginners/26330-Try-and-catch-in-Exception-Handling-.html | CC-MAIN-2015-14 | refinedweb | 128 | 59.6 |
well...As far as I know, we could use any variable or names in the argument list or parameter providing that the data type and the function name are the same, right? However, this code won't work if I replace factor_num with n in the argument_list of the function definition?
This program doesn't work.This program doesn't work.Code:#include <iostream> int triangle(int); int main() { using namespace std; // Declare a variable as fact_num that represent factorial number int fact_num; // Promt user for input cout << "Please enter a number: "; cin >> fact_num; cout << "Your factorial number is: " << triangle(fact_num) << endl; system("pause"); return 0; } // The function body. That's the function definition int triangle(int n){ int n; int sum = 0; for(n = 1; n <= fact_num; n++) sum += n; return sum; }
But this one with different names for the argument_list in the main() or in the function definition works really well. I am just wondering why. That really baffles me.
Besides, how come the author has to set [b]while (1) {()} before the function call?Besides, how come the author has to set [b]while (1) {()} before the function call?Code:#include <iostream> #include <cmath> using namespace std; int prime(int n); int main(){ int i; while(1){ cout << "Enter a number (0 to exit)"; cout << "and press ENTER:"; cin >> i; if(i == 0) break; if(prime(i)) // This is what I am talking about, see // the (i) variable in the argument_list cout << i << " is prime" << endl; else cout << i << " is not prime" << endl; } return 0; } int prime(int n){ // See this. the varialbe in the argument_list int i; // is now n, not i. for(i = 2; i <= sqrt(static_cast<double>(n)); i++){ if(n % i == 0) return false; } return true; } | https://cboard.cprogramming.com/cplusplus-programming/110329-variable-argument_list.html | CC-MAIN-2017-51 | refinedweb | 293 | 59.33 |
Improve This Listing
This is probably the best day hike close to Santiago. Its an easy hike up a beautiful glacier valley. The valley is surrounded by snow covered peaks, and ends at the San Francisco glacier. There is nice mountain lake 2/3 of the way up, which...More
The hike to the top of the valley to see the glaciers is about 6 kilometers the first three of which are steep. But when you reach the top the valley opens up and you can see the three or four glaciers that are called...More
It is a wonderful trip to this Glacier. It starts just outside the town uphill to the right, over a small river up the hill to the guides hut. After being charged double the price of local one begins the walk. The first part is...More
On November 2 and 3, 2017 the trail was not open. We were not allowed to enter. Guy of Conaf said the trail needed repairs (bridges,...); Weather was nice. No more snow ! Left disappointed .
We drove up the Camino Al Volcan to start this well marked trail in the Monumento Natural El Morado. The 8 km hike (16 km round trip) was steep at the start and very uneven at the end, but the bulk of the hike was...More
Two years ago we hike El Morado national monument and this year to acclimate before Atacama we hike "El Colgante" in valle de animas.
It's a relatively long drive from San Alfonso to the far end of the road but after the hike is nice...More
In January and February, one bus daily goes to Banos Morales. It leaves from the city and stops anywhere on the road and drops you at the beginning of the hike. It took us 5 hours return (including lunch break). It starts with a steep...More
What an awesome hike! A friend and I hiked to the San Francisco Glacier in November and were able to snap some amazing photographs. If you're into hiking, it's definitely worth renting a car and driving from Santiago to do.
Words would not do this place justice. Check it out for yourself. I promise you wont be disappointed
To get to the trailhead, its a 90 minute or so drive from Santiago, with the last 30 minutes on gravel road. The hike is fairly easy with a moderate climb at the beginning followed by a long, gradual slope to the glacier. Its 6-8km...More | https://www.tripadvisor.com/Attraction_Review-g303680-d318343-Reviews-San_Francisco_Glacier-San_Jose_de_Maipo_Santiago_Metropolitan_Region.html | CC-MAIN-2019-13 | refinedweb | 422 | 82.95 |
Solution for numpy datetime64 comparison slower than pandas Timestamp
is Given Below:
I’ve been quite surprised to find that comparing scalar numpy
datetime64 objects is significantly slower than comparing pandas
Timestamp objects. My understanding is that internally pd.Timestamp is using
datetime64[ns] so I’m a bit baffled as to how
pd.Timestamp is faster in this case.
Here’s my simple attempt at comparing the performance of doing a less than comparison.
import pandas as pd import numpy as np # create datetime64 and timestamp objects dt1 = np.datetime64("1900-01-01", "ns") dt2 = np.datetime64("2020-01-01", "ns") ts1 = pd.Timestamp("1900-01-01") ts2 = pd.Timestamp("2020-01-01") # time datetime64 comparisons %% timeit for _ in range(1000000): _ = dt1 < dt2 # NOTE: 3.07 s +/- 796 ms per loop # time Timestamp comparisons %%timeit for _ in range(1000000): _ = ts1 < ts2 # NOTE: 125 ms +/- 6.2 ms per loop
It seems that Pandas is approximately 25x faster here. I’ve tried looking at the source code but am not sufficiently familiar with C or cython to understand what Pandas might be doing to achieve such an improvement. I did look at this somewhat related question but it’s quite old and the timings there were not consistent with what I found (quite possibly due to updates to the libraries over the last 6 years). | https://codeutility.org/numpy-datetime64-comparison-slower-than-pandas-timestamp/ | CC-MAIN-2021-49 | refinedweb | 229 | 65.32 |
C# Tutorial
C# (C-Sharp) is a programming language developed by Microsoft that runs on the .NET Framework.
C# is used to develop web apps, desktop apps, mobile apps, games and much more.Start learning C# now »
Examples in Each Chapter
Our "Try it Yourself" tool makes it easy to learn C#. You can edit C# code and view the result in your browser.
Example
using System; namespace HelloWorld { class Program { static void Main(string[] args) { Console.WriteLine("Hello World!"); } } }
Click on the "Run example" button to see how it works.
We recommend reading this tutorial, in the sequence listed in the left menu. | https://localdev.w3schools.com/cs/index.php | CC-MAIN-2022-40 | refinedweb | 104 | 65.62 |
Build Single Page Application with Java EE and AngularJS
By definition, a Single Page Application (SPA) retrieves all of its HTML, CSS, and JavaScript resources with a single page load, and the appropriate resources are dynamically loaded and added to the page as necessary. A SPA reduces the number of page refreshes through heavy use of AJAX to communicate with the back-end (server side). This approach has become popular lately, since with a SPA you can maximize the audience of your software: it fits perfectly in both website and mobile environments.
I would like to show you how to build a SPA with Java EE 7 as the back-end (server side) and AngularJS on the front-end (client side). To follow this tutorial you need to prepare JDK 8 and Apache Maven for the back-end, and on the front-end make sure you already have NPM, Bower, and Grunt.
Before I start, I need to mention that most of my experience is as a Java EE back-end developer and I have little experience with the front-end. You can see that in this tutorial I’m still using AngularJS version 1, along with (maybe, for some front-end developers) old tools such as Bower and Grunt. What I want to share is the big picture of how to develop a SPA, especially with Java EE, since there are very few resources on building a SPA with Java EE rather than with NodeJS or Python. I think the best tutorial on building a SPA with Java EE comes from Roberto Cortez.
Structure of the Project
You will build a SPA with a bookshop theme. The use case is that you need to separate the customer pages from the admin pages: a customer can view the list of all the books, while for the admin side you must create pages for creating, updating, and listing all of the books. Of course, for the sake of this tutorial I won’t build a complex project; rather, I will show you the basics of a SPA with Java EE and AngularJS.
In this project, I would like to separate the back-end and the front-end into two projects. The back-end requires Java 8, Apache Maven, and an application server such as WildFly, Payara, GlassFish, or TomEE. Like I said before, I don’t have much experience with the front-end, so I will use my latest knowledge to build it with AngularJS 1, NPM, Bower, and Grunt. For the sake of this tutorial, I will put both projects under the same Git version control, and the code can be accessed on my GitHub account.
PROJECT_HOME
|-- README.md
|-- Client
| |--- Gruntfile.js
| |--- assets
| |--- bower.json
| |--- index.html
| |--- package.json
| |--- src
| | |--- app.js
| | |--- controllers
| | |--- services
| | |--- directives
| |--- views
| | |--- templates
|-- Server
|--- pom.xml
|--- src
Back-End Project
The back-end project provides a REST API which is consumed by the front-end, and you just need Java EE 7 to build a REST web service. Starting with the Java EE 7 Essentials Archetype you already have everything you need to build a REST web service, and just these few lines of code make sure you can use JAX-RS for REST in Java EE 7:
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("resources")
public class JAXRSConfiguration extends Application {
}
Since you will need a database to store the books, you will work with JPA. Let’s start by writing an entity class for the book.
Your bookshop has a circumstance where some book prices already include tax, but some do not. So you need to check each book: if the price does not include tax, you need to add the tax, since you don’t want to break the law, right?
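The entity class itself is embedded from the author’s repository in the original post, so the sketch below is my own reconstruction: the field names and the flat 10% tax rate are assumptions, and the JPA annotations are indicated in comments so the fragment compiles with a bare JDK (in the real project they would be @Entity, @Id, @GeneratedValue, and @PrePersist/@PreUpdate from javax.persistence).

```java
import java.math.BigDecimal;

// In the real project this class is annotated with @Entity,
// the id field with @Id @GeneratedValue, and applyTax() with
// @PrePersist @PreUpdate so the container applies the tax
// automatically before the row is written to the database.
class Book {

    // Assumed flat tax rate of 10%; the article does not state the rate.
    private static final BigDecimal TAX_RATE = new BigDecimal("0.10");

    private Long id;
    private String title;
    private BigDecimal price;
    private boolean taxIncluded;

    Book(String title, BigDecimal price, boolean taxIncluded) {
        this.title = title;
        this.price = price;
        this.taxIncluded = taxIncluded;
    }

    // Add the tax exactly once when the price does not include it yet.
    void applyTax() {
        if (!taxIncluded) {
            price = price.add(price.multiply(TAX_RATE));
            taxIncluded = true;
        }
    }

    BigDecimal getPrice() { return price; }

    boolean isTaxIncluded() { return taxIncluded; }
}
```

With a lifecycle hook like this, a book persisted with a tax-free price is stored with the tax already added, while books whose price already includes tax are left untouched.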
Next, you need to write an EJB class that communicates with JPA to perform simple Create, Read, Update, and Delete (CRUD) actions.
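The concrete bean is embedded from the repository in the original article; the following is a hedged sketch of what a @Stateless CRUD EJB typically looks like in Java EE 7, assuming the entity class is named Book. The class and method names are my assumptions, and this container-managed code only runs deployed to an application server.

```java
import java.util.List;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class BookService {

    // The container injects an EntityManager for the persistence
    // unit declared in persistence.xml.
    @PersistenceContext
    private EntityManager em;

    public Book create(Book book) {
        em.persist(book);
        return book;
    }

    public Book find(Long id) {
        return em.find(Book.class, id);
    }

    public List<Book> findAll() {
        return em.createQuery("SELECT b FROM Book b", Book.class)
                 .getResultList();
    }

    public Book update(Book book) {
        return em.merge(book);
    }

    public void delete(Long id) {
        em.remove(em.getReference(Book.class, id));
    }
}
```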
Finally, you can start writing a resource class to become your REST endpoint.
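Again, the actual class is embedded from the repository; a typical JAX-RS resource for this tutorial might look like the sketch below. The books path, the JSON media type, and the injected CRUD EJB (assumed here to be called BookService) are my assumptions; the resources prefix in the URLs comes from the @ApplicationPath shown earlier.

```java
import java.util.List;
import javax.ejb.EJB;
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("books")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class BookResource {

    @EJB
    private BookService bookService;

    // GET /resources/books : list every book for the customer page
    @GET
    public List<Book> getAll() {
        return bookService.findAll();
    }

    // GET /resources/books/{id} : fetch a single book
    @GET
    @Path("{id}")
    public Book get(@PathParam("id") Long id) {
        return bookService.find(id);
    }

    // POST /resources/books : create a book from the admin page
    @POST
    public Response create(Book book) {
        bookService.create(book);
        return Response.status(Response.Status.CREATED).entity(book).build();
    }

    // PUT /resources/books/{id} : update a book from the admin page
    @PUT
    @Path("{id}")
    public Book update(@PathParam("id") Long id, Book book) {
        return bookService.update(book);
    }
}
```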
OK, everything is done; with these few classes your back-end is ready. Of course, I haven’t mentioned persistence.xml, since it depends on what kind of database you will use, but using a default datasource (for example H2 from WildFly or Derby from Payara/GlassFish) is still possible.
Java EE is very simple and makes developers productive, and this is what the request and response JSON from the back-end will look like.
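The sample payloads were embedded as gists in the original post and are missing here. Based on the fields this article mentions elsewhere (price, tax, the slug, and the _links URI), a request/response pair might look like this; all other field names are assumptions:

```json
POST /resources/books
{
  "title": "Java EE 7 Essentials",
  "price": 35.0,
  "priceIncludesTax": false
}

GET /resources/books
[
  {
    "title": "Java EE 7 Essentials",
    "price": 38.5,
    "slug": "java-ee-7-essentials",
    "_links": { "self": "resources/books/java-ee-7-essentials" }
  }
]
```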
Front-End Project
As a prerequisite for the front-end project you need NPM; everything starts with NPM. Then you can install Bower with npm install -g bower, and Grunt with npm install -g grunt plus grunt-cli via npm install -g grunt-cli. NPM and Bower are similar in managing dependencies, but I prefer the flat hierarchy that Bower provides, so it is mostly a matter of choice; in this tutorial I will show you how to use both NPM and Bower.
Because the front-end is a separate project, you now need to create it, and everything can be bootstrapped with npm init and bower init, which create the package.json and bower.json files.
Building an SPA requires a server to host it so that Angular can run; you could use http-server to serve your SPA with an embedded server, but since you already have Java EE as the back-end you will use that instead. Grunt plays a role similar to Apache Maven: as a task runner, Grunt will concatenate the JavaScript files, minify them, and finally copy the final front-end code into the back-end project.
For your information, the back-end is just a simple Java EE web archive, so you need to put the front-end files into the webapp directory, and that is why you need Grunt to automate this repetitive task. I know some of you may have heard about Yeoman, which scaffolds an Angular project easily, but I want to show you how to build an Angular project without any fancy tools so you understand the basics.
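The Gruntfile gist is not included here either; a sketch of a build that concatenates, minifies, and copies into the back-end (task configuration and paths are assumptions based on the project tree above) might be:

```javascript
module.exports = function (grunt) {
  grunt.initConfig({
    // Concatenate all application scripts into one file.
    concat: {
      dist: { src: ['src/app.js', 'src/**/*.js'], dest: 'dist/app.js' }
    },
    // Minify the concatenated bundle.
    uglify: {
      dist: { files: { 'dist/app.min.js': ['dist/app.js'] } }
    },
    // Copy the final front-end code into the back-end's webapp directory.
    copy: {
      dist: {
        files: [{
          expand: true,
          src: ['index.html', 'dist/**', 'views/**', 'assets/**'],
          dest: '../Server/src/main/webapp/'
        }]
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-copy');

  grunt.registerTask('build', ['concat', 'uglify', 'copy']);
};
```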
With this your SPA is almost complete: you just need to execute grunt build in the front-end directory, then move into the back-end directory and execute mvn clean install to build the WAR file, which can be deployed to the application server. Of course everything can be simplified further with an IDE, but I hope you grasp the basic concept.
Let’s started work on front-end code, first of all you need to create an
index.html as the entry point of the SPA and of course to load all of our JavaScript and CSS files.
To navigate between states (or, you could say, between pages), I chose Angular Route, so you need to define the route provider in your Angular config.
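The Angular config gist is missing from this copy; a sketch of what the route provider setup could look like (the module, controller and template names are assumptions; the :slug parameter is discussed later in this article):

```javascript
var app = angular.module('bookshop', ['ngRoute', 'ngResource']);

app.config(['$routeProvider', function ($routeProvider) {
  $routeProvider
    // List all books.
    .when('/books', { templateUrl: 'views/templates/books.html', controller: 'BookCtrl' })
    // Create a new book (no slug) or update an existing one (slug present).
    .when('/books/form', { templateUrl: 'views/templates/form.html', controller: 'BookFormCtrl' })
    .when('/books/form/:slug', { templateUrl: 'views/templates/form.html', controller: 'BookFormCtrl' })
    .otherwise({ redirectTo: '/books' });
}]);
```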
OK, now you will start to write some controllers, services and directives to build your bookshop. First I will show you AlertCtrl, which, as you can see, is defined in the index.html above. This controller is responsible for receiving a broadcast and then generating an alert, depending on whether it is a success or an error alert.
Working with REST means you make a lot of asynchronous calls; in Angular there is $http, which already uses Promises, so your code will look like this.
var books = $http.get('resources/books');
books.then(function(result) {
  $scope.books = result.data;
});
But there is Angular Resource, which will simplify your code even further; you just need to build a service around Angular Resource like this.
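The service gist is missing from this copy; a minimal Angular Resource service for the books endpoint (the service name and the custom update action are my assumptions) could look like:

```javascript
app.factory('BookService', ['$resource', function ($resource) {
  // Maps the REST endpoint exposed by the Java EE back-end.
  return $resource('resources/books/:slug', { slug: '@slug' }, {
    update: { method: 'PUT' } // $resource has no update action by default
  });
}]);
```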
Now, you don’t need to handle promise or anything and you can just write your code like this.
$scope.users = BookService.query();
I would like to give an example of how to perform simple create, read, update and delete operations with Angular. First, you need to build a page that shows a list of all the resource data and offers buttons to create a new book, update a book, and remove a book.
A little explanation of things that might be unfamiliar: <an-search> is a directive, just a simple input tag that binds a search filter to the table. <dir-paginate-controls> comes from a third-party library for pagination with Angular; the idea is the same as ng-repeat, but with extra features to provide pagination. That is all, and now here is the controller for this page.
From the controller above, I believe you can understand it just by reading the code, right? I will just give some explanation of the extractSlug function, which is responsible for splitting the full URI out of _links, since you need the slug to retrieve a single item. Next, about the remove function: on success, rather than reloading the page (which is what I usually do) to refresh the data, you can broadcast an event to render the table again with the new data.
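Since the controller gist is not included in this copy, here is a plain-JavaScript sketch of what such a helper could look like; the URI shape is an assumption:

```javascript
// Sketch of an extractSlug helper: split the full URI found in
// _links and keep only the last path segment as the slug.
function extractSlug(uri) {
  var segments = uri.split('/');
  return segments[segments.length - 1];
}

console.log(extractSlug('resources/books/java-ee-7')); // -> "java-ee-7"
```

The returned slug can then be fed back into the resource service to fetch a single book.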
Next, you need to make a form page which is responsible for creating or updating the data.

This is the controller responsible for creating and updating the book resource.
A simple explanation of how to decide when to create a new resource or update an existing one: you can use a parameter in your URI; in this case I use the slug, since an existing resource must have a slug.
Finally, this is the bare-minimum tutorial for building an SPA with Java EE and AngularJS. Well, it became longer than I expected (I know I should have split it), but in this story I just wanted to show you that Java EE is a viable option for building a modern website; it is not inferior to other platforms such as Ruby or Node.js.
Creating your own DSL in Scala is very easy. You don't need language proficiency at an expert level for this. This post shows you how to get started. It is not our scope to teach basic concepts of Scala, like traits and objects, but even if you are not familiar with the language, it should be possible to follow the logic here.
I have been dabbling in Scala for some time. At my place of work, though, it does not seem feasible to use Scala for code in production, so I decided that I could get away with using it in unit testing at least. Since I am of the school of total white-box-testing, I do not hesitate to set and access private members of classes whenever it saves a clean design from being disrupted. I know doing so is a bone of contention, but this is not what this post is about – it’s what the DSL is about.
In Java, as in Scala, accessing private class members by reflection is quite tedious and repetitive, so I decided to write a utility for that. And, using Scala, I had something like this in mind:
val bar = new Bar
set field "foo" of bar to "baz"
val qux = value of method "someInternalMethod" of bar
Looks nice, right? This would make use of the fact that Scala allows one-argument methods without dot operator and parentheses. So, how to go about it? Regarding Scala, I would characterize myself as an advanced beginner, so my toolset does not include any arcane stuff that only gurus understand. But wait, this can't be so difficult. Let’s take it piece by piece.
set field “foo” of bar to “baz”
Ok, to begin with, I need something called
set. This should probably be an instance of some class. How do I get it into my unit test class? Well, a trait should do fine.
trait Decapsulation {
  val set = new Set()
}
Then I can use it like this:
@RunWith(classOf[JUnitRunner])
class BarTest extends FlatSpec with Decapsulation {

  "A bar" should "foo" in {
    val bar = new Bar
    set field "foo" of bar to "baz"
    // do more stuff and assert something now
    // ...
  }
}
For those not familiar with Scala, this exemplifies the structure of a
FlatSpec-based, behavior-driven test.
It looks like
set has a method called
field , and this method takes a string parameter (“foo” in the example). Since there seems to be only one instance of
set necessary, we can make it a Scala object instead of a class.
object Set {
  def field(name: String): ??? = ???
}
Then the trait can look like this:
trait Decapsulation {
  val set = Set
}
The
field method must return something that has a method called
of. And this method takes any type of object as a parameter (the object under test
bar in our case). Right, let’s create a class
Of (call it whatever you like, it does not matter for the DSL. What matters is the name of the method):
class Of(val name: String) {
  def of(o: AnyRef): ??? = ???
}
We will see in a minute that we need different instances of
Of, so
Of needs to be a class, not a Scala object. The
field method returns an instance of an
Of. On this instance we call the method (lowercase)
of. The
field method looks like this now:
def field(name: String): Of = new Of(name)
Remember –
name is the name of the field we are going to manipulate and we hand that over to the new instance of
Of. Now, the
of method needs to return something we can use to call
to on. Well, I guess, you get the hang of it by now ...
class To(name: String, obj: AnyRef) {
  def to(value: Any): ??? = ???
}
This makes the
of method look like this:
def of(obj: AnyRef): To = new To(name, obj)
We pass our collected arguments (field name and the object containing the field) to a new
To instance. The (lowercase)
to method can get the new value for our field as the single parameter. And since we now have all we need, we can do the tedious reflection call and actually set the field:
def to(value: Any): Unit = {
  val f = obj.getClass.getDeclaredField(name)
  f.setAccessible(true)
  f.set(obj, value)
}
That’s it, really – this is all there is to get started on a DSL in Scala. In hindsight, it looks almost trivially easy. The example is highly simplified – the field might belong to a base class or be final or static or both or whatever – but all this can be handled in gory detail in the
to method.
By the way, the DSL works in Java, too:
@Test
public void test() {
    Bar bar = new Bar();
    Set.field("foo").of(bar).to("baz");
    // do more stuff and assert something now
    // ...
}
Just import the
Set class —
field is a static method of the class.
In conclusion, there are these principles at work:
- Start with the instance of a class or a Scala object that represents the first instruction.
- Use one class for each instruction so the user is guided through the statement and autocompletion in IDEs works correctly. The class contains a method (or several methods, if there are branches), that represents the current instruction.
- The methods take arguments if necessary. If each method takes maximally one argument, there is no need for parentheses and the DSL looks rather like a natural language.
- Pass through all data you collect as constructor arguments to the next class on the way to the final instruction in the DSL statement and process them there.
The complete working code is right here. Paste it into a Scala file and test it.
package getting.started.on.a.dsl.in.scala

trait Decapsulation {
  val set = Set
}

object Set {
  def field(name: String): Of = new Of(name)
}

class Of(val name: String) {
  def of(obj: AnyRef): To = new To(name, obj)
}

class To(name: String, obj: AnyRef) {
  def to(value: Any): Unit = {
    val f = obj.getClass.getDeclaredField(name)
    f.setAccessible(true)
    f.set(obj, value)
  }
}
I leave it to you to figure out how to implement:
val qux = value of method “someInternalMethod” of bar
It should not be difficult after the field example. One hint though — you'll need to call the container class for the new
of method something other than
Of, or put it in another package than the field's
Of class. And probably generics need to come into play to make the instruction return objects of the right type instead of
Nothing.
Hi,
I'm trying to install Kazam using the PKGBUILD 'kazam-git' or 'kazam' found in the AUR. After solving a couple of issues with the dependencies, I managed to install it correctly. But now when I try to run it I get this error message:
INFO Kazam - Logger intialized. INFO Kazam - Running on: Traceback (most recent call last): File "/usr/bin/kazam", line 140, in <module> from kazam.app import KazamApp File "/usr/lib/python2.7/site-packages/kazam/app.py", line 39, in <module> from kazam.frontend.window_area import AreaWindow File "/usr/lib/python2.7/site-packages/kazam/frontend/window_area.py", line 29, in <module> from gi.repository import Gtk, GObject, Gdk, Wnck, GdkX11 File "/usr/lib/python2.7/site-packages/gi/importer.py", line 76, in load_module dynamic_module._load() File "/usr/lib/python2.7/site-packages/gi/module.py", line 242, in _load version) File "/usr/lib/python2.7/site-packages/gi/module.py", line 97, in __init__ repository.require(namespace, version) gi.RepositoryError: Requiring namespace 'Gtk' version '2.0', but '3.0' is already loaded
Anyone know a way to solve this, or maybe how to debug the cause of the error?
thanks!
No solution here. Instead I have a slightly different error to report. When I try to fire up Kazam (kazam-bzr from AUR), I get the following error:
** (kazam:19099): WARNING **: Couldn't register with accessibility bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken. Traceback (most recent call last): File "/usr/bin/kazam", line 146, in <module> from kazam.app import KazamApp File "/usr/lib/python3.3/site-packages/kazam/app.py", line 39, in <module> from kazam.frontend.window_area import AreaWindow File "/usr/lib/python3.3/site-packages/kazam/frontend/window_area.py", line 29, in <module> from gi.repository import Gtk, GObject, Gdk, Wnck, GdkX11 File "<frozen importlib._bootstrap>", line 1558, in _find_and_load File "<frozen importlib._bootstrap>", line 1525, in _find_and_load_unlocked File "/usr/lib/python3.3/site-packages/gi/importer.py", line 76, in load_module dynamic_module._load() File "/usr/lib/python3.3/site-packages/gi/module.py", line 242, in _load version) File "/usr/lib/python3.3/site-packages/gi/module.py", line 97, in __init__ repository.require(namespace, version) gi.RepositoryError: Requiring namespace 'Gtk' version '2.0', but '3.0' is already loaded
I have no idea what any of this means. I have a kind of weird, self-built GUI environment that uses the evilwm window manager, so I suspect my problems are related to that. These sorts of applications seem to be built by and for folks running more full-blown, pre-configured GUI environments.
Anyway, I'm anxious to try out this program if anyone can interpret the error message I'm getting and offer pointers for resolving the problems. If that doesn't happen, I'll go back to my command-line utilities (ffmpeg's x11grab these days) or maybe try out ffcast2.
TIA,
James
Last edited by jamtat (2013-01-07 08:02:51)
Sounds like this error. My guess is you need libwnck3.
Thanks for your reply, ConnorBehan. The thread to which you link actually talks about libwebkit rather than libwnck3. I had earlier run across indications that I needed libwebkit and installed webkitgtk2 and webkitgtk3, which is what pacman lists when I search for libwebkit. But that was prior to posting the error output above, so installing it didn't resolve the issue. libwnck3 is, however, a different package and it is not installed on this system. I'll do some further research on that library to see whether it might resolve my problem.
Offline
It never hurts to try. The issue is that gobject-introspection loaded gtk3 (as it was designed to do) but then came across a package whose gtk3 version was not available. It then loaded the gtk2 version to see if "the next best thing" would work, and it did not.
Yeah, you're right, ConnorBehan: installing libwnck did get rid of that error message. But then I got another error message about PulseAudio, so I've given up. You see, I'm still lingering in the audio stone age and using alsa without pulse; I'd just begun to get a rudimentary grasp on alsa when pulse came out, and I'm not eager to start again from scratch learning about pulse and its relation to alsa. So unless I can find a way to make Kazam use alsa instead of pulse, I'll just stick with the command-line screencasting solutions I've cobbled together. Again, thanks for your input.
Hi, I have a problem understanding these two utilities. I'm not able to figure out what they do.
For eg. I was trying to replicate this with example from
I have followed the PyTorch documentation and coded with batch_first:
import torch
import torch.nn as nn
from torch.autograd import Variable
batch_size = 3
max_length = 3
hidden_size = 2
n_layers =1
num_input_features = 1
input_tensor = torch.zeros(batch_size,max_length,num_input_features)
input_tensor[0] = torch.FloatTensor([1,2,3])
input_tensor[1] = torch.FloatTensor([4,5,0])
input_tensor[2] = torch.FloatTensor([6,0,0])
batch_in = Variable(input_tensor)
seq_lengths = [3,2,1]
pack = torch.nn.utils.rnn.pack_padded_sequence(batch_in, seq_lengths, batch_first=True)
print (pack)
Here I get output as
PackedSequence(data=Variable containing:
1
4
6
2
5
3
[torch.FloatTensor of size 6x1]
, batch_sizes=[3, 2, 1])
I could retrieve the original sequence back if I do
torch.nn.utils.rnn.pad_packed_sequence(pack,[3,2,1])
which is obvious.
But can somebody help me understand how and why we got that output 'pack' with size (6,1)? Also, about the whole functionality in general: why do we need these two utilities, and how are they useful? Thanks in advance for the help.

Cheers,
Vijendra
In my last post, I presented a brute-force solution to a rather innocuous problem. I challenged my readers to think of a way to optimize the solution. The problem is trivial, but the excercise is important: naive algorithms get the job done but optimization is key to getting them done in a reasonable amount of time. In my naive solution I managed to get the answer to the problem but if I added just one more number to the input list of my function, I was looking at increasing the time it took to reach that solution exponentially (still no word on whether this is actually exponential or logarithmic… more investigation on my part will be required). This of course isn’t ideal in our day to day work as programmers.
In this series of posts, I’m going to investigate optimization techniques and how I like to approach these sorts of problems.
My first step in optimizing an algorithm is to first have an algorithm to optimize. When approaching a problem I like to break it down to small cases and solve for those first. Once I understand the problem and its solution I end up with a function that should get the correct answer in most realistic cases. However, the work is far from done. This initial solution is usually a brute-force solution or else terribly inefficient in some other way.
The next step is to analyze the solution you’ve come up with. Once you get correct answers from your function, increase the number (frequency) and size (space) of your inputs. If you still get correct answers and it’s pretty fast no matter how large the input, you’ve probably gotten lucky and landed on an optimal solution from the get-go. However, unless you’re a seasoned expert you’re probably going to need to analyze your solution and figure out a strategy for improving it.
Let’s put all of this theory to practice and see how we can analyze my naive solution!
The first most primitive test we can apply to analyze our solution is `time`. (Note: this is typically a *nix tool, Windows users… you may have to find some sort of alternative that I’m not aware of. Sorry.). It’s not the most precise and in-depth tool you can use, but it will give you a general sense of how well you’re doing. If your solution can give you an answer in less than a minute you’re doing okay. The key to this tool is to run it several times and take the results in aggregate. However, this isn’t a post on statistics so you’ll have to figure that out on your own.
On a deeper level, you’ll run across profiling tools. In Python there’s a built-in module called “cProfile” (or just “profile” if the former isn’t available). It will do its best to accurately report the timing of every function call in your method, the cumulative time spent in the function at each call, and the total number of calls to each function. This useful module also includes a statistics module for better reporting. Go ahead, read up on it, and run it on my brute-force function.
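If you have not used it before, a minimal programmatic run looks something like this (the profiled function here is just a stand-in for your own):

```python
import cProfile
import io
import pstats

def stand_in(n):
    # Placeholder for the function you actually want to profile.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
stand_in(100000)
profiler.disable()

# Sort by call count, like the tables in this post, and show the top entries.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("calls").print_stats(5)
print(stream.getvalue())
```

The same report is also available from the command line with python -m cProfile -s calls yourscript.py.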
You’ll get a table that looks like:
4194340 function calls in 9.890 CPU seconds Ordered by: call count ncalls tottime percall cumtime percall filename:lineno(function) 4194281 3.085 0.000 3.085 0.000 {sum} 22 0.000 0.000 0.000 0.000 {method 'strip' of 'str' objects} 22 0.000 0.000 0.000 0.000 largest_sub.py:45() 1 0.000 0.000 0.000 0.000 {map} 1 0.000 0.000 0.000 0.000 {method 'read' of 'file' objects} 1 0.000 0.000 0.000 0.000 {open} 1 0.000 0.000 0.000 0.000 {len} 1 0.001 0.001 9.890 9.890 {execfile} 1 6.803 6.803 9.888 9.888 largest_sub.py:4(largest_sub) 1 0.000 0.000 0.000 0.000 {method '__enter__' of 'file' objects} 1 0.000 0.000 0.000 0.000 {method 'split' of 'str' objects} 1 0.000 0.000 0.000 0.000 cProfile.py:66(Profile) 1 0.000 0.000 9.890 9.890 :1() 1 0.000 0.000 0.000 0.000 {sorted} 1 0.001 0.001 9.889 9.889 largest_sub.py:1() 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} 1 0.000 0.000 0.000 0.000 {range} 1 0.000 0.000 0.000 0.000 cProfile.py:5()
There’s a lot of information here, so I went ahead and ordered it by the number of calls to each function. The big sore spot is right at the top. It’s telling me that my function is spending almost all of its time calculating an enormous number of sums. If we look back at the code we can easily see that this is true. The function is essentially running through every combination of sub-sets from two up to the entire set and calling sum on each one!
So one of the most simple and common optimization strategies you might encounter is divide and conquer! If we think about our function and all the sums it’s doing it’s not hard to realize that we’re doing a lot of sums that don’t make sense. For example, the sum of the combination of the two largest integers in our list will never equal any number in our list. So why bother summing it? So the core of this strategy is to divide the problem space up so that we can do fewer calculations.
So when I go back to the drawing board, I’m going to think of the most simple way possible that I can divide up my problem space so that I can reduce the number of calls to sum that my function has to do. It’s okay if it’s still not the most efficient possible algorithm. Getting the most efficient algorithm takes time and accurate observation!
So my second attempt:
import itertools

def largest_sub2(nums):
    combinations = 0
    sorted_nums = sorted(nums)
    for num in nums:
        rest = filter(lambda x: x < num, nums)
        up_to = len(rest)
        up_to = up_to + 1 if up_to > 2 else up_to
        for i in range(2, up_to):
            for combination in itertools.combinations(rest, i):
                if sum(combination) == num:
                    combinations += 1
    return combinations
So here we have a slightly better function. Same answer, but slightly faster. In this function, I’ve iterated over the list of input numbers and divided up the number of combinations to check to make sure that I’m only generating combinations of numbers that are less than the current one that we’re looking at in the input list. This way, when we’re looking at the second largest number in the input list, we’re not going to try and find combinations with the largest number in the list.
To see if our change actually improved anything, run the `time` command. You should see a modest improvement! Then run the profiler. The table should look something like this:
4194150 function calls in 6.782 CPU seconds Ordered by: call count ncalls tottime percall cumtime percall filename:lineno(function) 4194049 3.035 0.000 3.035 0.000 {sum} 22 0.000 0.000 0.000 0.000 largest_sub.py:46() 22 0.000 0.000 0.000 0.000 {len} 22 0.000 0.000 0.000 0.000 {method 'strip' of 'str' objects} 22 0.000 0.000 0.000 0.000 {range} 1 3.746 3.746 6.780 6.780 largest_sub.py:29(largest_sub2) 1 0.000 0.000 0.000 0.000 {map} 1 0.000 0.000 0.000 0.000 {method 'read' of 'file' objects} 1 0.000 0.000 0.000 0.000 {open} 1 0.001 0.001 6.782 6.782 {execfile} 1 0.000 0.000 0.000 0.000 {method '__enter__' of 'file' objects} 1 0.000 0.000 0.000 0.000 {method 'split' of 'str' objects} 1 0.000 0.000 0.000 0.000 cProfile.py:66(Profile) 1 0.000 0.000 6.782 6.782 :1() 1 0.000 0.000 0.000 0.000 {sorted} 1 0.001 0.001 6.781 6.781 largest_sub.py:1() 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} 1 0.000 0.000 0.000 0.000 cProfile.py:5()
Looks like we only shaved off a few corner cases. There’s only a difference of 232 calls to sum. Yet there should be an improvement of at least a couple of seconds. Looks like we could think of a better strategy! See if you can come up with one. I’ll post a better strategy next time and we’ll compare results. | https://agentultra.com/blog/optimizations-techniques-divide-and-conquer/ | CC-MAIN-2017-30 | refinedweb | 1,518 | 74.49 |
In normal Asp.Net web applications, writing precise unit test cases with high code coverage is difficult because of the strong coupling between the GUI and the server code. But Asp.Net MVC is a framework that Microsoft has built in a completely unit-testable way. 100% unit test code coverage can be achieved in Asp.Net MVC applications because the Models, Views and Controllers are very loosely coupled.

In this article I will explain the support for test-driven development and writing unit test cases in an Asp.Net MVC application.
Test Driven Development (TDD) and Best Practices
Test Driven Development is the process where the developer creates the test case first and then writes the actual implementation of the method. It works this way: first create a test case, watch it fail, do the implementation, ensure the test case succeeds, refactor the code, and then continue the cycle, as indicated in Fig 1.0.
Fig 1.0 - Test Driven Development
TDD also states that all code deployed to production should be covered by unit test cases.
The best practices for writing unit test cases follow the F.I.R.S.T principles. Here is a brief explanation of each.
Fast – Write the unit test cases by keeping their performance in mind. This is required because you will have to run thousands of them during every release.
Isolated – Each test case should be isolated to a particular behavior, i.e. in case of a failure the developer should know what went wrong without having to trace through the execution flow. A test case should be broken down into multiple smaller ones for this purpose.
Repeatable – The test case should be stable so that it provides consistent results over multiple runs.
Self validating – The test should result in a pass or a failure. There should not be any ambiguous scenarios with assertions.
Timely – This is most important for TDD, as the test cases should be created before the actual implementation.
TDD with Asp.Net MVC
Asp.Net MVC is a perfect fit for unit testing since the entire server code lives in the controllers, which are decoupled from other layers like the GUI (View). When you create an Asp.Net MVC application, Visual Studio by default will prompt you to add a unit test project, as shown in Fig 2.0. This alone shows how well the MVC architecture supports writing unit tests.
Fig 2.0 - Prompt for adding a Unit Test case project
Testing the Controller
Let us take a look at a few samples that demonstrate writing unit test cases for the controllers in an Asp.Net MVC application. Create an Asp.Net MVC application in Visual Studio 2012 and check the option to create a unit test project by default. Let's assume that we are creating an online movie store application. In the MVC application create a controller named MovieStoreController and a model named Movie; for now we don't create any views, which can be done at a later point in time.

The requirement is to display a welcome message to the user on the movie store index page and to list the movies from the application database.
As this is TDD, we should write the test case for the requirement first and then proceed with the implementation. In the unit test project add a test class named MovieStoreControllerTest. The naming convention followed in MVC for unit test classes is the controller name followed by 'Test'. First let's concentrate on building the functionality for the welcome message requirement. The following code is the test case that validates the welcome message.
namespace TddWithMvc.Tests
{
    [TestClass]
    public class MovieStoreControllerTest
    {
        [TestMethod]
        public void MovieStoreProvidesCorrectWelcomeMessage()
        {
            MovieStoreController movieController = new MovieStoreController();
            ViewResult result = movieController.Index() as ViewResult;
            Assert.AreEqual("Welcome to the movie store!", result.ViewBag.Message);
        }
    }
}
The Index action returns a ViewResult, as MVC controller methods return views, unlike normal Asp.Net C# methods. Running the above test case will fail, so let us implement the Index method in the controller as shown below to make it pass. This is how items in the controller's ViewBag can be unit tested.
namespace TddWithMvc.Controllers
{
    public class MovieStoreController : Controller
    {
        public ActionResult Index()
        {
            ViewBag.Message = "Welcome to the movie store!";
            return View();
        }
    }
}
Let us now write the unit test case for the Movie list functionality.
[TestMethod]
public void MovieStoreReturnsTheListOfMovies()
{
    MovieStoreController movieController = new MovieStoreController();
    ViewResult result = movieController.Movies() as ViewResult;
    var movies = result.Model as List<Movie>;
    Assert.AreEqual(movies.Count, 200);
}
Now write code until the above test case passes. Below is the implementation, which fetches the movies from the database and wraps them in the View.
public ActionResult Movies()
{
    var movies = moviesDataAccess.GetMoviesFromDatabase();
    return View(movies);
}
Running the unit test case will succeed, as the current movie count in the database is 200. But what if a few more movies are added to the database? Or what if the database connection in the test execution environment fails? The test case will start failing, and the intention of the unit test is broken: the intention was only to test that the controller method returns the list of movies, not to test the number of records in the database or the database connectivity. This is when you need to decouple the dependent classes further. One of the ways to achieve this is dependency injection.
To correct the above unit test case, let us introduce dependency injection. Create an interface for mocking the data access object and inject it while creating the controller instance. Here is the code for the interface and the test case using the mock object.
public interface IDataAccessManager
{
    List<Movie> GetMoviesFromDatabase();
}
In the test method notice that the mock object is being injected into the controller constructor.
[TestMethod]
public void MovieStoreReturnsTheListOfMovies()
{
    MovieStoreController movieController = new MovieStoreController(new MockDataAccessManager());
    ViewResult result = movieController.Movies() as ViewResult;
    var movies = result.Model as List<Movie>;
    Assert.AreEqual(movies.Count, 2);
}
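The article uses MockDataAccessManager in the test above but never shows it; a minimal sketch might look like the following (the way the Movie instances are built is an assumption; only the count of 2 matters for the assertion):

```
public class MockDataAccessManager : IDataAccessManager
{
    // Returns a fixed in-memory list so the test never touches the database.
    // Two items, matching Assert.AreEqual(movies.Count, 2) above.
    public List<Movie> GetMoviesFromDatabase()
    {
        return new List<Movie>
        {
            new Movie(),
            new Movie()
        };
    }
}
```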
Now, in the implementation, we have to make changes so that the actual data access class is used during application execution and the mock object is used only during unit testing. The following code does exactly that; notice that the default constructor explicitly injects the concrete data access object.
IDataAccessManager moviesDataAccess;

public MovieStoreController(IDataAccessManager dataAccessManager)
{
    moviesDataAccess = dataAccessManager;
}

public MovieStoreController() : this(new MoviesDataAccessManager())
{
}

public ActionResult Movies()
{
    var movies = moviesDataAccess.GetMoviesFromDatabase();
    return View(movies);
}
For testing the Views you may have to use automation testing software like Selenium, Coded UI, etc.
Happy reading! | http://mobile.codeguru.com/columns/experts/test-driven-development-in-asp.net-mvc-architecture.htm | CC-MAIN-2017-30 | refinedweb | 1,128 | 57.16 |
Screen real-estate is at a premium on smaller Windows Mobile smartphone devices, so it is important to maximise the use of every available pixel in conveying useful information to the user. This blog entry demonstrates a technique to maximise the usability of combo boxes within .NET Compact Framework applications by reusing some of the existing screen real-estate.
Defining the problem
When you place a Combobox control on a form within a Smartphone application you get a control which shows a single item with left/right arrows that allow you to scroll through the list of options.
As an alternative, if you press the Action key (middle of the D-PAD) on the combobox, the full list is displayed in a fullscreen window. This window however is always labelled “Select an Item”. Because the window is fullscreen it is possible for the user to lose context (they can’t see any labels etc. you have placed on your form) and forget what they are meant to be selecting. What we would like to do is to replace the “Select an Item” title with something more appropriate for the current field.
Developing a solution
This is where having knowledge of the underlying native (C/C++) APIs that implement Windows Mobile is useful. When targeting a smartphone device a .NET Compact Framework Combobox control is actually implemented via two separate controls as far as the operating system is concerned, a 1 item high listbox coupled to an up/down control. The native Win32 smartphone documentation calls this type of configuration a Spinner Control.
By using a utility included with Visual Studio called Remote Spy++ we can see this collection of controls. In the screenshot to the left you can see that one of the combo boxes in the sample application is selected, and underneath it you can clearly see the listbox and up/down (msctls_updown32) controls it is made up of.
In order to change the title of the popup window associated with a combo box we need to:
- Find the native window handle for the up/down control
- Change it’s window title to the desired prompt text
The ComboBox class has a Handle property that returns a native window handle (HWND) that is associated with the managed control. For a ComboBox the Handle property actually returns the handle of the listbox control and not the parent “NETCFITEMPICKERCLASS_A” control as may be expected. This was probably done for native code interop compatibility reasons. So to find the native window handle of the up/down control we simply need to find the handle for the window immediately after the window returned by the ComboBox.Handle property.
Once we have found the window handle for the up/down control we are finally ready to replace the popup window title. According to the Spin Box Control documentation, the popup window title comes from the title of the up/down control, and it defaults to the “Select an Item” prompt if a title isn’t specified. We can change the title of a native window by calling the SetWindowText API.
All these individual steps can be wrapped up into an easy to call method as follows:
using System;
using System.Windows.Forms;
using System.Runtime.InteropServices;

public static class ComboboxExtender
{
    public static void SetPromptText(ComboBox combo, String text)
    {
        // Obtain the native window handle of the up/down spinner control
        IntPtr hWndListBox = combo.Handle;
        IntPtr hWndSpinner = GetWindow(hWndListBox, GW_HWNDNEXT);

        // Set the title of the spinner
        SetWindowText(hWndSpinner, text);
    }

    [DllImport("coredll.dll")]
    private static extern bool SetWindowText(IntPtr hWnd, String lpString);

    [DllImport("coredll.dll")]
    private static extern IntPtr GetWindow(IntPtr hWnd, UInt32 uCmd);

    private const UInt32 GW_HWNDNEXT = 2;
}
Sample Application
[Download ComboBoxPromptExample.zip - 16KB]
The sample application available for download demonstrates using the ComboBoxExtender class developed above. The interface consists of two combo boxes which have been configured identically. The first combo box shows the default prompt text, while the second has had its prompt text replaced via a call to ComboBoxExtender.SetPromptText within the Form’s Load event handler as shown below:
ComboBoxExtender.SetPromptText(comboBox2, "Select Car Colour");
I am a stickler for improving the quality and polish of Windows Mobile applications. This tip is a very minimal code change that can be implemented quickly, yet can have a profound impact on the usability of your application if it is full of combo boxes. A similar trick can also be implemented for Expandable Edit controls.
If you are developing .NET Compact Framework applications that target the Windows Mobile Smartphone (Standard) platform, I seriously encourage you to consider making this usability change to your applications.
In C#, Dictionary is a generic collection which is generally used to store key/value pairs. Dictionary is defined under the System.Collections.Generic namespace. It is dynamic in nature, meaning the size of the dictionary grows according to need.
Example:
Output:
Key:- a.01 and Value:- C Key:- a.02 and Value:- C++ Key:- a.03 and Value:- C#
A Hashtable is a collection of key/value pairs that are arranged based on the hash code of the key. In other words, a Hashtable is a collection that uses a hash table for storage. It is a non-generic collection defined in the System.Collections namespace. In a Hashtable, key objects must be immutable as long as they are used as keys.
Example:
Output:
Key:- A3 and Value:- GeeksforGeeks Key:- A2 and Value:- to Key:- A1 and Value:- Welcome
Hashtable Vs Dictionary

Dictionary is generic and type-safe, so it avoids boxing/unboxing for value types; Hashtable is non-generic and stores both keys and values as object. Accessing a missing key throws a KeyNotFoundException in Dictionary, while a Hashtable simply returns null. For the same reason, Dictionary is generally faster when working with value types.
The GNU coding standards, last updated April 7, 2012.
If you did not obtain this file directly from the GNU project and recently, please check for a newer version. You can get the GNU Coding Standards from the GNU web server in many different formats, including the Texinfo source, PDF, HTML, DVI, plain text, and more, at:.
If you are maintaining an official GNU package, in addition to this document, please read and follow the GNU maintainer information (see Contents in Information for GNU Maintainers).
The GNU Hello program serves as an example of how to follow the GNU coding standards for a trivial program..
This release of the GNU Coding Standards was last updated April 7, 2012.
Legal Issues
This chapter discusses how you can make sure that GNU software avoids legal difficulties, and other related issues.
Accepting Contributions

If you have reached the stage of maintaining a GNU program (whether released or not), please take a look: see Legal Matters in Information for GNU Maintainers.
Trademarks
What is legally required, as regards other people’s trademarks, is to avoid using them in ways which a reader might reasonably understand as naming or labeling our own programs or activities. For example, since “Objective C” is (or at least was) a trademark, we made sure to say that we provide a “compiler for the Objective C language” rather than an “Objective C compiler”. The latter would have been meant as a shorter way of saying the former, but it does not explicitly state the relationship, so it could be misinterpreted as using “Objective C” as a label for the compiler rather than for the language.
Please don’t use “win” as an abbreviation for Microsoft Windows in GNU software or documentation. In hacker terminology, calling something a “win” is a form of praise. If you wish to praise Microsoft Windows when speaking on your own, by all means do so, but not in GNU software. Usually we write the name “Windows” in full, but when brevity is very important (as in file names and sometimes symbol names), we abbreviate it to “w”. For instance, the files and functions in Emacs that deal with Windows start with ‘w32’.
Design Advice
This chapter discusses some of the issues you should take into account when designing your program.
Source Language

Guile also includes bindings for GTK+/GNOME, making it practical to write modern GUI functionality within Guile. We don’t reject programs written in other “scripting languages” such as Perl and Python, but using Guile is very important for the overall consistency of the GNU system.
Conditional Compilation

Of course, the former method (testing a configuration macro with a plain if rather than with #ifdef) assumes that
HAS_FOO is defined as either 0 or 1.
While this is not a silver bullet solving all portability problems, and is not always appropriate, following this policy would have saved GCC developers many hours, or even days, per year.
Program Behavior
This chapter describes conventions for writing robust software. It also describes general standards for error messages, the command line interface, and how libraries should behave.
Non-GNU Standards
The GNU Project regards standards published by other organizations as suggestions, not orders. We consider those standards, but we do not “obey” them. In developing a GNU program, you should implement an outside standard’s specifications when that makes the GNU system better overall in an objective sense. When it doesn’t, you shouldn’t.
In most cases, following published standards is convenient for users—it means that their programs or scripts will work more portably. For instance, GCC implements nearly all the features of Standard C as specified by that standard. C program developers would be unhappy if it did not. And GNU utilities mostly follow specifications of POSIX.2; shell script writers and users would be unhappy if our programs were incompatible.
But we do not follow either of these specifications rigidly, and there are specific points on which we decided not to follow them, so as to make the GNU system better for users.
For instance, Standard C says that nearly all extensions to C are prohibited. How silly! GCC implements many extensions, some of which were later adopted as part of the standard. If you want these constructs to give an error message as “required” by the standard, you must specify ‘--pedantic’, which was implemented only so that we can say “GCC is a 100% implementation of the standard”, not because there is any reason to actually use it.
POSIX.2 specifies that ‘df’ and ‘du’ must output sizes by default in units of 512 bytes. What users want is units of 1k, so that is what we do by default. If you want the ridiculous behavior “required” by POSIX, you must set the environment variable ‘POSIXLY_CORRECT’ (which was originally going to be named ‘POSIX_ME_HARDER’).
GNU utilities also depart from the letter of the POSIX.2 specification when they support long-named command-line options, and intermixing options with ordinary arguments. This minor incompatibility with POSIX is never a problem in practice, and it is very useful.
In particular, don’t reject a new feature, or remove an old one, merely because a standard says it is “forbidden” or “deprecated”.
Semantics

Whenever possible, try to make programs work properly with sequences of bytes that represent multibyte characters; UTF-8 is the most important.
Check every system call for an error return, unless you know you wish to ignore errors. Include the system error text (from perror or equivalent) in every error message resulting from a failing system call, as well as the name of the file if any and the name of the utility.

Libraries

External symbols that are not documented entry points for the user should have names beginning with ‘_’. The ‘_’ should be followed by the chosen name prefix for the library, to prevent collisions with other libraries. These can go in the same files with user entry points if you like.
Static functions and variables can be used as you like and need not fit any naming convention.
Command-Line Interfaces

A CGI program invoked in a browser should output the same information as invoking ‘p.cgi --help’ from the command line.
--version
The standard --version option should direct the program to print information about its name, version, origin and legal status, all on standard output, and then exit successfully. Other options and arguments should be ignored once this is seen, and the program should not perform its normal function.
The first line is meant to be easy for a program to parse; the version number proper starts after the last space. In addition, it contains the canonical name for this program, in this format:
GNU Emacs 19.30
Next should follow a line stating the license, preferably using one of the abbreviations below (see Information for GNU Maintainers).
Translations of the above lines must preserve the validity of the copyright notices (see Internationalization). If the translation’s character set supports it, the ‘(C)’ should be replaced with the copyright symbol, as follows:
©
Write the word “Copyright” exactly like that, in English. Do not translate it into another language. International treaties recognize the English word “Copyright”; translations into other languages do not have legal significance.
Finally, here is the table of our suggested license abbreviations. Any abbreviation can be followed by ‘vversion[+]’, meaning that particular version, or later versions with the ‘+’, as shown above.
In the case of exceptions for extra permissions with the GPL, we use ‘/’ for a separator; the version number can follow the license abbreviation as usual, as in the examples below.
GPL: GNU General Public License.

LGPL: GNU Lesser General Public License.

GPL/Ada: GNU GPL with the exception for Ada.

Apache: The Apache Software Foundation license.

Artistic: The Artistic license used for Perl.

Expat: The Expat license.

MPL: The Mozilla Public License.

OBSD: The original (4-clause) BSD license, incompatible with the GNU GPL.

PHP: The license used for PHP.

public domain: The non-license that is being in the public domain.

Python: The license for Python.

RBSD: The revised (3-clause) BSD, compatible with the GNU GPL.

X11: The simple non-copyleft license used for most versions of the X Window System.

Zlib: The license for Zlib.
More information about these licenses and many more are on the GNU licensing web pages,.
--help

Near the end of the ‘--help’ option’s output, please place lines giving the email address for bug reports, the package’s home page, and the general page for help using GNU programs. The format should be like this:
Report bugs to: mailing-address
pkg home page: <>
General help using GNU software: <>
It is ok to mention other appropriate mailing lists and web pages.
Option Table
[The table of long options that belongs here did not survive extraction: each entry originally paired a long option name (‘after-date’, ‘all’, ‘append’, and so on, through ‘zeros’) with the equivalent short option and the programs that use it; for example, ‘after-date’ is ‘-N’ in tar, and ‘version’ prints the version number in many programs.]
File Usage
Programs should be prepared to operate when ‘/usr’ and ‘/etc’ are read-only file systems. Thus, if the program manages log files, lock files, backup files, score files, or any other files which are modified for internal purposes, these files should not be stored in ‘/usr’ or ‘/etc’.
There are two exceptions. ‘/etc’ is used to store system configuration information; it is reasonable for a program to modify files in ‘/etc’ when its job is to update the system configuration. Also, if the user explicitly asks to modify one file in a directory, it is reasonable for the program to store other files in the same directory.
Writing C
This chapter provides advice on how best to use the C language when writing GNU software.

Formatting

Put the open-brace that starts the body of a C function in column one, and write the function definition like this:
static char *
concat (char *s1, char *s2)
{
  …
}
or, if you want to use traditional C syntax, format the definition like this:
static char *
concat (s1, s2)        /* Name starts in column one here */
     char *s1, *s2;
{                      /* Open brace in column one here */
  …
}
In Standard C, if the arguments don’t fit nicely on one line, split it like this:
int
lots_of_args (int an_integer, long a_long, short a_short,
              double a_double, float a_float)
…
For struct and enum types, likewise put the braces in column one, unless the whole contents fits on one line:
struct foo
{
  int a, b;
}
or
struct foo { int a, b; }.
Comments

Every ‘#endif’ should have a comment, except in the case of short conditionals (just a few lines) that are not nested. The comment should state the condition of the conditional that is ending, including its sense. ‘#else’ should have a comment describing the condition and sense of the code that follows. For example:
#ifdef foo
…
#else /* not foo */
…
#endif /* not foo */
#ifdef foo
…
#endif /* foo */
but, by contrast, write the comments this way for a ‘#ifndef’:

#ifndef foo
…
#else /* foo */
…
#endif /* foo */

Syntactic Conventions

Some programmers like to use the GCC ‘-Wall’ option, and change the code whenever it issues a warning. If you want to do this, then do. Other programmers prefer not to use ‘-Wall’, because it gives warnings for valid and legitimate code which they do not want to change. If you want to do this, then do. The compiler should be your servant, not your master.
Don’t make the program ugly just to placate static analysis tools such as lint, clang, and GCC with extra warnings options such as ‘-Wconversion’.

Try to avoid assignments inside if-conditions (assignments inside while-conditions are ok). For example, don’t write this:

if ((foo = (char *) malloc (sizeof *foo)) == 0)
  fatal ("virtual memory exhausted");

instead, write this:

foo = (char *) malloc (sizeof *foo);
if (foo == 0)
  fatal ("virtual memory exhausted");
Names

When you want to define names with constant integer values, use enum rather than ‘#define’. GDB knows about enumeration constants.

You might want to make sure that none of the file names would conflict if the files were loaded onto an MS-DOS file system which shortens the names.
System Functions
Historically, C implementations differed substantially, and many systems lacked a full implementation of ANSI/ISO C89. Nowadays, however, very few systems lack a C89 compiler and GNU C supports almost all of C99. Similarly, most systems implement POSIX.1-1993 libraries and tools, and many have POSIX.1-2001.
Hence, there is little reason to support old C or non-POSIX systems, and you may want to take advantage of C99 and POSIX-1.2001 to write clearer, more portable, or faster code. You should use standard interfaces where possible; but if GNU extensions make your program more maintainable, powerful, or otherwise better, don’t hesitate to use them. In any case, don’t make your own declaration of system functions; that’s a recipe for conflict.
Despite the standards, nearly every library function has some sort of portability issue on some system or another. Here are some examples:
open: Names with trailing ‘/’s are mishandled on many platforms.

printf: long double may be unimplemented; floating values Infinity and NaN are often mishandled; output for large precisions may be incorrect.

readlink: May return int instead of ssize_t.

scanf: On Windows, errno is not set on failure.
Gnulib is a big help in this regard. Gnulib provides implementations of standard interfaces on many of the systems that lack them, including portable implementations of enhanced GNU interfaces, thereby making their use portable, and of POSIX-1.2008 interfaces, some of which are missing even on up-to-date GNU systems.
Gnulib also provides many useful non-standard interfaces; for example, C implementations of standard data structures (hash tables, binary trees), error-checking type-safe wrappers for memory allocation functions (xmalloc, xrealloc), and output of error messages.
Gnulib integrates with GNU Autoconf and Automake to remove much of the burden of writing portable code from the programmer: Gnulib makes your configure script automatically determine what features are missing and use the Gnulib code to supply the missing pieces.
The Gnulib and Autoconf manuals have extensive sections on portability: Introduction in Gnulib and see Portable C and C++ in Autoconf. Please consult them for many more details.
Quote Characters
In the C locale, the output of GNU programs should stick to plain ASCII for quotation characters in messages to users: preferably 0x22 (‘"’) or 0x27 (‘'’) for both opening and closing quotes. Although GNU programs traditionally used 0x60 (‘`’) for opening and 0x27 (‘'’) for closing quotes, nowadays quotes ‘`like this'’ are typically rendered asymmetrically, so quoting ‘"like this"’ or ‘'like this'’ typically looks better.
It is ok, but not required, for GNU programs to generate locale-specific quotes in non-C locales. For example:
printf (gettext ("Processing file '%s'..."), file);
Here, a French translation might cause gettext to return the string "Traitement de fichier ‹ %s ›...", yielding quotes more appropriate for a French locale.
Sometimes a program may need to use opening and closing quotes directly. By convention, gettext translates the string ‘"`"’ to the opening quote and the string ‘"'"’ to the closing quote, and a program can use these translations. Generally, though, it is better to translate quote characters in the context of longer strings.
If the output of your program is ever likely to be parsed by another program, it is good to provide an option that makes this parsing reliable. For example, you could escape special characters using conventions from the C language or the Bourne shell. See for example the option ‘--quoting-style’ of GNU ls.
Documentation
A GNU program should ideally come with full free documentation, adequate for both reference and tutorial purposes. If the package can be programmed or extended, the documentation should cover programming or extending it, as well as just using it.
GNU Manuals

Don’t just tell the reader what each feature can do—say what jobs it is good for, and show how to use it for those jobs. Explain what is recommended usage, and what kinds of usage users should avoid. For indexing advice, see Making Index Entries in GNU Texinfo, and Defining the Entries of an Index in GNU Texinfo. Please do not use the term ‘illegal’ for erroneous input; use ‘invalid’, and reserve ‘illegal’ for activities prohibited by law.
Please do not write ‘()’ after a function name just to indicate it is a function. foo () is not a function, it is a function call with no arguments.
Doc Strings and Manuals

In a manual, such redundancy looks bad. Meanwhile, the informality that is acceptable in a documentation string is totally unacceptable in a manual.
The only good way to use documentation strings in writing a good manual is to use them as a source of information for writing good text.
NEWS File
In addition to its manual, the package should have a file named ‘NEWS’ which contains a list of user-visible changes worth mentioning. In each new release, add items to the front of the file and identify the version they pertain to. Don’t discard old items. If the ‘NEWS’ file gets very long, move some of the older items into a file named ‘ONEWS’ and put a note at the end referring the user to that file.
Change Logs
Change Log Concepts

If you use a version control system, the change log entries can be written as commit log messages, and a ‘ChangeLog’ file can then be generated using rcs2log; in Emacs, the command C-x v a (vc-update-change-log) does the job.
There’s no need to describe the full purpose of the changes or how they work together. However, sometimes it is useful to write one line to describe the overall purpose of a change or a batch of changes. If you think that a change calls for explanation, you’re probably right. Please do explain it—but please put the full explanation in comments in the code, where people will see it whenever they see the code. For example, “New function” is enough for the change log when you add a function, because there should be a comment before the function definition to explain what it does.
In the past, we recommended not mentioning changes in non-software files (manuals, help files, etc.) in change logs. However, we’ve been advised that it is a good idea to include them, for the sake of copyright records.
The easiest way to add an entry to ‘ChangeLog’ is with the Emacs command M-x add-change-log-entry.
Here are some simple examples of change log entries, starting with the header line that says who made the change and when it was installed, followed by descriptions of specific changes. Break long lists of function names by closing continued lines with ‘)’, rather than ‘,’, and opening the continuation with ‘(’ as in this example:
* keyboard.c (menu_bar_items, tool_bar_items) (Fexecute_extended_command): Deal with 'keymap' property.
When you install someone else’s changes, put the contributor’s name in the change log entry rather than in the text of the entry. In other words, write this:
2002-07-14 John Doe <jdoe@gnu.org> * sewing.c: Make it sew.
rather than this:
2002-07-14 Usual Maintainer <usual@gnu.org> * sewing.c: Make it sew. Patch by jdoe@gnu.org.
As for the date, that should be the date you applied the change.
There’s no technical need to make change log entries for documentation files, because documentation is not susceptible to bugs that are hard to fix.
However, you should keep change logs for documentation files when the project gets copyright assignments from its contributors, so as to make the records of authorship more accurate.
Source files can often contain code that is conditional to build-time or static conditions. For example, C programs can contain compile-time #if conditionals; programs implemented in interpreted languages can contain module imports or function definitions that are only performed for certain versions of the interpreter; and Automake ‘Makefile.am’ files can contain variable definitions or target declarations that are only to be considered if a configure-time Automake conditional is true.
Many changes are conditional as well: sometimes you add a new variable, or function, or even a new program or library, which is entirely dependent on a build-time condition. It is useful to indicate in the change log the conditions for which a change applies.
Our convention for indicating conditional changes is to use square brackets around the name of the condition.
Conditional changes can happen in numerous scenarios and with many variations, so here are some examples to help clarify. This first example describes changes in C, Perl, and Python files which are conditional but do not have an associated function or entity name:
* xterm.c [SOLARIS2]: Include <string.h>. * FilePath.pm [$^O eq 'VMS']: Import the VMS::Feature module. * framework.py [sys.version_info < (2, 6)]: Make "with" statement available by importing it from __future__, to support also python 2.5.
Our other examples will for simplicity be limited to C, as the minor changes necessary to adapt them to other languages should be self-evident.
Next, here is an entry describing a new definition which is entirely conditional: the C macro FRAME_WINDOW_P is defined (and used) only when the macro HAVE_X_WINDOWS is defined:
* frame.h [HAVE_X_WINDOWS] (FRAME_WINDOW_P): Macro defined.
Next, an entry for a change within the function init_display, whose definition as a whole is unconditional, but the changes themselves are contained in a ‘#ifdef HAVE_LIBNCURSES’ conditional:
* dispnew.c (init_display) [HAVE_LIBNCURSES]: If X, call tgetent.
Finally, here is an entry for a change that takes effect only when a certain macro is not defined:
(gethostname) [!HAVE_SOCKETS]: Replace with winsock version.
Be sure that man pages include a copyright statement and free license. The simple all-permissive license is appropriate for simple man pages (see License Notices for Other Files in Information for GNU Maintainers).
For long man pages, with enough explanation and documentation that they can be considered true manuals, use the GFDL (see License for Manuals).
Finally, the GNU help2man program is one way to automate generation of a man page, in this case from ‘--help’ output. This is sufficient in many cases.
One way for the configure script to operate is to make a link from a standard name such as ‘config.h’ to the proper configuration file for the chosen system. If you use this technique, the distribution should not contain a file named ‘config.h’. This is so that people won’t be able to build the program without configuring it first.
Another thing that configure can do is to edit the Makefile. If you do this, the distribution should not contain a file named ‘Makefile’. Instead, it should include a file ‘Makefile.in’ which contains the input used for editing. Once again, this is so that people won’t be able to build the program without configuring it first.
If configure does write the ‘Makefile’, then ‘Makefile’ should have a target named ‘Makefile’ which causes configure to be rerun, setting up the same configuration that was set up last time. The files that configure reads should be listed as dependencies of ‘Makefile’.
All the files which are output from the configure script should have comments at the beginning explaining that they were generated automatically using configure. This is so that users won’t think of trying to edit them by hand.
The configure script should write a file named ‘config.status’ which describes which configuration options were specified when the program was last configured. There is also a shell script named ‘config.sub’ that you can use as a subroutine to validate system types and canonicalize aliases.
The configure script should also accept the standard options for specifying the build, host and target system types.
foo.1 : foo.man sedscript
        sed -f sedscript foo.man > foo.1
will fail when the build directory is not the source directory, because ‘foo.man’ and ‘sedscript’ are in the source directory.

Support staged installs via the ‘DESTDIR’ variable: if ‘make install’ normally installs ‘/usr/local/bin/foo’ and ‘/usr/local/lib/libfoo.a’, then an installation invoked as ‘make DESTDIR=/tmp/stage install’ would install ‘/tmp/stage/usr/local/bin/foo’ and ‘/tmp/stage/usr/local/lib/libfoo.a’ instead.

prefix
A prefix used in constructing the default values of the directory variables listed below. The value of ‘prefix’ should normally be ‘/usr/local’.
When building the complete GNU system, the prefix will be empty and
‘/usr’ will be a symbolic link to ‘/’.
bindir
The directory for installing executable programs that users can run. This should normally be ‘/usr/local/bin’, but write it as ‘$(exec_prefix)/bin’. (If you are using Autoconf, write it as ‘@bindir@’.)
sbindir
The directory for installing executable programs that can be run from the shell, but are only generally useful to system administrators. This should normally be ‘/usr/local/sbin’, but write it as ‘$(exec_prefix)/sbin’. (If you are using Autoconf, write it as ‘@sbindir@’.)
libexecdir
The directory for installing executable programs to be run by other programs rather than by users. This directory should normally be ‘/usr/local/libexec’, but write it as ‘$(exec_prefix)/libexec’. (If you are using Autoconf, write it as ‘@libexecdir@’.) Packages should install their programs in ‘$(libexecdir)/package-name/’, possibly within additional subdirectories thereof, such as ‘$(libexecdir)/package-name/machine/version’.

datarootdir
The root of the directory tree for read-only architecture-independent data files. This should normally be ‘/usr/local/share’, but write it as ‘$(prefix)/share’. (If you are using Autoconf, write it as ‘@datarootdir@’.)

datadir
The directory for installing read-only architecture-independent data files for this program. This should normally be ‘/usr/local/share’, but write it as ‘$(datarootdir)’. A package should place its data in a subdirectory such as ‘$(datadir)/package-name/’.
sysconfdir
The directory for installing read-only data files that pertain to a single machine–that is to say, files for configuring a host. Mailer and network configuration files, ‘/etc/passwd’, and so forth belong here. All the files in this directory should be ordinary ASCII text files. This directory should normally be ‘/usr/local/etc’, but write it as ‘$(prefix)/etc’. (If you are using Autoconf, write it as ‘@sysconfdir@’.)
Do not install executables here in this directory (they probably belong in ‘$(libexecdir)’ or ‘$(sbindir)’). Also do not install files that are modified in the normal course of their use (programs whose purpose is to change the configuration of the system excluded). Those probably belong in ‘$(localstatedir)’.
sharedstatedir
The directory for installing architecture-independent data files which the programs modify while they run. This should normally be ‘/usr/local/com’, but write it as ‘$(prefix)/com’. (If you are using Autoconf, write it as ‘@sharedstatedir@’.)

localstatedir
The directory for installing data files which the programs modify while they run, and that pertain to one specific machine. Users should never need to modify files in this directory to configure the package’s operation; put such configuration information in separate files that go in ‘$(datadir)’ or ‘$(sysconfdir)’. ‘$(localstatedir)’ should normally be ‘/usr/local/var’, but write it as ‘$(prefix)/var’. (If you are using Autoconf, write it as ‘@localstatedir@’.)

includedir
The directory for installing header files to be included by user programs with the C ‘#include’ preprocessor directive. This should normally be ‘/usr/local/include’, but write it as ‘$(prefix)/include’. (If you are using Autoconf, write it as ‘@includedir@’.)
oldincludedir
The directory for installing ‘#include’ header files for use with compilers other than GCC. This should normally be ‘/usr/include’. (If you are using Autoconf, you can write it as ‘@oldincludedir@’.)

Most compilers other than GCC do not look for header files in directory ‘/usr/local/include’. A package should not replace an existing header there unless the header came from the same package. Thus, if a Foo package provides a header file ‘foo.h’, then it should install the header file in the oldincludedir directory if either (1) there is no ‘foo.h’ there or (2) the ‘foo.h’ that exists came from the Foo package.
To tell whether ‘foo.h’ came from the Foo package, put a magic string in the file—part of a comment—and grep for that string.
docdir
The directory for installing documentation files (other than Info) for this package. By default, it should be ‘/usr/local/share/doc/yourpkg’, but it should be written as ‘$(datarootdir)/doc/yourpkg’. (If you are using Autoconf, write it as ‘@docdir@’.) The yourpkg subdirectory, which may include a version number, prevents collisions among files with common names, such as ‘README’.
The directory for installing the Info files for this package. By default, it should be ‘/usr/local/share/info’, but it should be written as ‘$(datarootdir)/info’. (If you are using Autoconf, write it as ‘@infodir@’.)

libdir
The directory for object files and libraries of object code. Do not install executables here; they probably ought to go in ‘$(libexecdir)’ instead. The value of libdir should normally be ‘/usr/local/lib’, but write it as ‘$(exec_prefix)/lib’. (If you are using Autoconf, write it as ‘@libdir@’.)
lispdir
The directory for installing any Emacs Lisp files in this package. By default, it should be ‘/usr/local/share/emacs/site-lisp’, but it should be written as ‘$(datarootdir)/emacs/site-lisp’.
If you are using Autoconf, write the default as ‘@lispdir@’. In order to make ‘@lispdir@’ work, you need the following lines in your ‘configure.in’ file:
lispdir='${datarootdir}/emacs/site-lisp' AC_SUBST(lispdir)
localedir
The directory for installing locale-specific message catalogs for this package. By default, it should be ‘/usr/local/share/locale’, but it should be written as ‘$(datarootdir)/locale’. (If you are using Autoconf, write it as ‘@localedir@’.)

mandir
The top-level directory for installing the man pages (if any) for this package. It will normally be ‘/usr/local/share/man’, but you should write it as ‘$(datarootdir)/man’. (If you are using Autoconf, write it as ‘@mandir@’.)
man1dir
The directory for installing section 1 man pages. Write it as ‘$(mandir)/man1’.
man2dir
The directory for installing section 2 man pages. Write it as ‘$(mandir)/man2’.

When you install the Info files, copy them into ‘$(infodir)’ with $(INSTALL_DATA) (see Command Variables), and then run the install-info program if it is present.
install-info is a program that edits the Info ‘dir’ file to add or update the menu entry for the given Info file.

However, ‘make maintainer-clean’ should not delete ‘configure’ even if ‘configure’ can be remade using a rule in the Makefile. More generally, ‘make maintainer-clean’ should not delete anything that needs to exist in order to run ‘configure’ and then begin to build the program. The ‘dist’ target should create a gzip-compressed distribution tar file; for example, the actual distribution file for GCC version 1.40 is called ‘gcc-1.40.tar.gz’.
It is ok to support other free compression formats as well.
The dist target should explicitly depend on all non-source files that are in the distribution, to make sure they are up to date in the distribution.
See Making Releases.

installcheck
Perform installation tests (if any). The user must build and install the program before running the tests. You should not assume that ‘$(bindir)’ is in the search path.
installdirs
It’s useful to add a target named ‘installdirs’ to create the directories where files are installed, and their parent directories. There is a script called ‘mkinstalldirs’ which is convenient for this. You can extract, for example, just the pre-installation commands with a command such as this (the ‘-s’ option to make is needed to silence messages about entering subdirectories):
make -s -n install -o all \
      PRE_INSTALL=pre-install \
      POST_INSTALL=post-install \
      NORMAL_INSTALL=normal-install \
  | gawk -f pre-install.awk
where the file ‘pre-install.awk’ could contain this:
$0 ~ /^(normal-install|post-install)[ \t]*$/ {on = 0}
on {print $0}
$0 ~ /^pre-install[ \t]*$/ {on = 1}
The distribution should also include the license files: ‘COPYING’ for the GNU GPL and, if relevant, ‘COPYING.LESSER’ for the GNU Lesser GPL.
Naturally, all the source files must be in the distribution. It is
okay to include non-source files in the distribution along with the
source files they are generated from, provided they are up-to-date
with the source they are made from, and machine-independent, so that
normal building of the distribution will never modify them. We
commonly include non-source files produced by Autoconf, Automake, and similar tools; this helps avoid unnecessary dependencies for our users. Make sure that all the files in the distribution are world-readable, and
that directories are world-readable and world-searchable (octal mode 755).
We used to recommend that all directories in the distribution also be world-writable (octal mode 777), because ancient versions of tar would otherwise not cope when extracting the archive as an unprivileged user. That can easily lead to security issues when creating the archive, however, so now we recommend against that.
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
A safe<T, PP, EP> can be used anywhere a type T can be used. Any expression which uses this type is guaranteed to return an arithmetically correct value or to trap in some way.
This type inherits all the notation, associated types and template parameters and valid expressions of SafeNumeric types. The following specify additional features of this type.
Implements all expressions and only those expressions defined by the SafeNumeric<T> type requirements. Note that all these expressions are constexpr. Thus, the result type of such an expression will be another safe type. The actual type of the result of such an expression will depend upon the specific promotion policy template parameter.
When a binary operand is applied to two instances of safe<T, PP, EP>, one of the following must be true:
The promotion policies of the two operands must be the same or one of them must be void
The exception policies of the two operands must be the same or one of them must be void
If either of the above is not true, a compile error will result.
The most common usage would be safe<T> which uses the default promotion and exception policies. This type is meant to be a "drop-in" replacement of the intrinsic integer types. That is, expressions involving these types will be evaluated into result types which reflect the standard rules for evaluation of C++ expressions. Should it occur that such evaluation cannot return a correct result, an exception will be thrown.
There are two aspects of the operation of this type which can be customized with a policy. The first is the result type of an arithmetic operation. C++ defines the rules which define this result type in terms of the constituent types of the operation. Here we refer to these rules as "type promotion" rules. These rules will sometimes result in a type which cannot hold the actual arithmetic result of the operation. This is the main motivation for making this library in the first place. One way to deal with this problem is to substitute our own type promotion rules for the C++ ones.
The following program will throw an exception and emit an error message at runtime if any of several events result in an incorrect arithmetic result. Behavior of this program could vary according to the machine architecture in question.
#include <exception>
#include <iostream>
#include <safe_integer.hpp>

void f(){
    using namespace boost::numeric;
    safe<int> j;
    try {
        safe<int> i;
        std::cin >> i;  // could overflow!
        j = i * i;      // could overflow
    }
    catch(std::exception & e){
        std::cout << e.what() << std::endl;
    }
    std::cout << j;
}
The term "drop-in replacement" reveals the aspiration of this library. In most cases, this aspiration is realized. In the following example, the normal implicit conversions function the same for safe integers as they do for built-in integers.
#include <boost/safe_numerics/safe_integer.hpp>

using namespace boost::safe_numerics;

int f(int i){
    return i;
}

using safe_t = safe<long>;

int main(){
    const long x = 97;
    f(x); // OK - implicit conversion to int
    const safe_t y = 97;
    f(y); // Also OK - checked implicit conversion to int
    return 0;
}
When the safe<long> is implicitly converted to an int when calling f, the value is checked to be sure that it is within the legal range of an int, and an exception is invoked if it is not. We can easily verify this by altering the exception handling policy in the above example to loose_trap_policy. This will invoke a compile time error on any conversion that might invoke a runtime exception.
#include <boost/safe_numerics/safe_integer.hpp>

using namespace boost::safe_numerics;

int f(int i){
    return i;
}

using safe_t = safe<long, native, loose_trap_policy>;

int main(){
    const long x = 97;
    f(x); // OK - implicit conversion to int can never fail
    const safe_t y = 97;
    f(y); // could overflow so trap at compile time
    return 0;
}
But this raises its own questions. We can see that in this example, the program can never fail:
The value 97 is assigned to y
y is converted to an int and used as an argument to f
The conversion can never fail because the value of 97 can always fit into an int. But the library code can't detect this and emits the checking code even though it's not necessary.
This can be addressed by using a safe_literal. A safe literal can contain one and only one value. All the functions in this library are marked constexpr. So it can be determined at compile time that conversion to an int can never fail, and no runtime checking code need be emitted. Making this small change will permit the above example to run with zero runtime overhead while guaranteeing that no error can ever occur.
// Copyright (c) 2018 Robert Ramey
//
// Distributed under the Boost Software License, Version 1.0. (See
// accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)

#include <cstdint> // intmax_t
#include <boost/safe_numerics/safe_integer.hpp>
#include <boost/safe_numerics/safe_integer_literal.hpp>

using namespace boost::safe_numerics;

int f(int i){
    return i;
}

template<intmax_t N>
using safe_literal = safe_signed_literal<N, native, loose_trap_policy>;

int main(){
    const long x = 97;
    f(x); // OK - implicit conversion to int
    const safe_literal<97> y;
    f(y); // OK - y is a type with min/max = 97
    return 0;
}
With this trivial example, such efforts would hardly be deemed necessary. But in a more complex case, perhaps including compile time arithmetic expressions, it could be much more difficult to verify that the constant is valid and/or that no checking code is needed. And there is also the possibility that over the lifetime of the application, the compile time constants might change, thus rendering any ad hoc analysis obsolete. Using safe_literal will future-proof your code against well-meaning, but code-breaking, updates.
Another way to avoid arithmetic errors like overflow is to promote types to larger sizes before doing the arithmetic.
Stepping back, we can see that many of the cases of invalid arithmetic wouldn't exist if the result types were larger. So we can avoid these problems by replacing the C++ type promotion rules for expressions with our own rules. This can be done by specifying a promotion policy automatic. The automatic policy stores the result of an expression in the smallest size type that can accommodate the largest value that an expression can yield. No checking for exceptions is necessary. The following example illustrates this.
#include <boost/safe_numerics/safe_integer.hpp>
#include <iostream>
#include <limits>

int main(int, char *[]){
    using namespace boost::numeric;
    using safe_int = safe<
        int,
        boost::numeric::automatic,
        boost::numeric::default_exception_policy
    >;
    safe_int i;
    std::cin >> i;  // might throw exception
    auto j = i * i; // won't ever trap - result type can hold the maximum value of i * i
    static_assert(boost::numeric::is_safe<decltype(j)>::value); // result is another safe type
    static_assert(
        std::numeric_limits<decltype(i * i)>::max() >=
        std::numeric_limits<safe_int>::max() * std::numeric_limits<safe_int>::max()
    ); // always true
    return 0;
}
Return a Response directly
When you create a FastAPI path operation you can normally return any data from it: a dict, a list, a Pydantic model, a database model, etc.
By default, FastAPI would automatically convert that return value to JSON using the jsonable_encoder. Then, behind the scenes, it would put that JSON-compatible data (e.g. a dict) inside of a Starlette JSONResponse that would be used to send the response to the client.
But you can return a JSONResponse directly from your path operations.
It might be useful, for example, to return custom headers or cookies.
Starlette Response
In fact, you can return any Starlette Response or any sub-class of it.
Tip
JSONResponse itself is a sub-class of Response.
And when you return a Starlette Response, FastAPI will pass it directly, without any data conversion, so you have to make sure its contents are ready for it. For example, you cannot put a Pydantic model in a JSONResponse without first converting it to a dict with all the data types (like datetime) converted to JSON-compatible types. For those cases, you can use the jsonable_encoder to convert your data before passing it to a response:

from datetime import datetime
from fastapi import FastAPI
from fastapi.encoders import jsonable_encoder
from pydantic import BaseModel
from starlette.responses import JSONResponse


class Item(BaseModel):
    title: str
    timestamp: datetime
    description: str = None


app = FastAPI()


@app.put("/items/{id}")
def update_item(id: str, item: Item):
    json_compatible_item_data = jsonable_encoder(item)
    return JSONResponse(content=json_compatible_item_data)
Note
Notice that you import it directly from starlette.responses, not from fastapi.

Suppose you want to return a response that is not available in the default Starlette Responses.
Let's say that you want to return XML.
You could put your XML content in a string, put it in a Starlette Response, and return it:
from fastapi import FastAPI
from starlette.responses import Response
In the next sections you will see how to use/declare these custom Responses while still having automatic data conversion, documentation, etc.
You will also see how to use them to set response Headers and Cookies. | https://fastapi.tiangolo.com/tutorial/response-directly/ | CC-MAIN-2020-05 | refinedweb | 274 | 57.06 |
2009/11/2 Martin Aspeli <optilude+li...@gmail.com>:
> I think it's better to use top-level namespaces to indicate ownership,
> if nothing else to avoid the chance of things clashing. For the repoze
> project to "claim" the wsgi.* namespace seems both a bit presumptuous
> and clash-prone.

It is not a claiming of a namespace, it's a usage. Obviously other
parties are free to use it too.

> You think so? First of all, repoze != zope, and secondly, I'd rather
> hope people had grown out of discarding code based on a name or a
> namespace. With all the reach-out work Chris and others have done, I'd
> be surprised if the repoze.* name was turning people off.

I think my worry with repoze.* is that it claims to want to be
"plumbing Zope into the WSGI Pipeline"; that might be deprecated, but
as it is, repoze as a project has some sort of direction, it's not
just a group of contributors. Some of the software I've contributed
under the repoze name is not very much related to this direction
(e.g. repoze.formapi).

> Plone people (and me personally) prefer to "own" a namespace. What if
> someone else had the idea to call something Chameleon, which is not
> entirely unlikely? What about a more generic name? Must we always come
> up with a suitably quirky name when a more functional one would do?

I agree to some extent; but certain pieces of software benefit from
having a real name.

> Incidentally, I blogged about this stuff in the context of Plone
> yesterday:

Yes, I saw it; I'm not sure we're in alignment although I'm not sure
we're not :)

\malthe
_______________________________________________
Repoze-dev mailing list
Repoze-dev@lists.repoze.org
Previously, I created a simple demo about "Creating a Simple Registration Form using the ADO.NET way". In this article, I'm going to demonstrate how to create a simple form that allows users to insert data into the database using L2S.
As an overview, LINQ to SQL is a technology that allows you to query SQL Server. LINQ to SQL is an O/RM (object-relational mapping) implementation that ships in the .NET Framework "Orcas" release, and which allows you to model a relational database using .NET classes. You can then query the database using LINQ, as well as update/insert/delete data from it.
I will not cover it in much more detail in this article, so if you need to know more about this technology you can refer to the official LINQ to SQL documentation.
STEP 1: Creating a new Website in Visual Studio
To get started, let's go ahead and fire up Visual Studio 2008 and create a new website by selecting File > New WebSite.
STEP 2: Adding a DBML file
Since we are going to use L2S, we need to add a .dbml file. To do this, just right-click on the application root and select Add New Item. In the templates list, select the LINQ to SQL Classes file. See the screen shot below:
Now rename your dbml file the way you want it and then click OK. Note that I'm using the Northwind database for this demo, and in that case I renamed the dbml file to Northwind to make it friendlier.
Now open up Server Explorer in Visual Studio and browse to the database that you want to work on (in this case the Northwind database). Just for the purpose of this example I'm going to use the Customers table from the Northwind database and drag it to the Northwind.dbml design surface. See the screen shot below:
That’s simple! Isn’t it?
What happens there is that by the time you drag a table onto the design surface, L2S automatically generates the corresponding entity classes, which implement the same operator pattern as the standard query operators such as Where and Select.
STEP 3: Setting up the GUI
Now let’s go ahead and create our form for data entry. For the simplicity of this demo, I just set up the form like below:
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Untitled Page</title>
    <style type="text/css">
        .style1 { width: 400px; }
        .style1 td { width: 200px; }
    </style>
</head>
<body>
    <form id="form1" runat="server">
    <asp:Literal ID="LiteralMessage" runat="server"></asp:Literal>
    <table class="style1">
        <tr>
            <td>Company ID</td>
            <td><asp:TextBox ID="TextBoxID" runat="server" /></td>
        </tr>
        <tr>
            <td>Company Name</td>
            <td><asp:TextBox ID="TextBoxCompanyName" runat="server" /></td>
        </tr>
        <%-- Rows for Contact Name, Contact Title, Address, City, Region,
             Postal Code and Country follow the same pattern, with TextBoxes
             named TextBoxContactName, TextBoxContactTitle, TextBoxAddress,
             TextBoxCity, TextBoxRegion, TextBoxPostalCode and TextBoxCountry. --%>
        <tr>
            <td><asp:Button ID="Button1" runat="server" Text="Save" OnClick="Button1_Click" /></td>
            <td></td>
        </tr>
    </table>
    </form>
</body>
</html>
STEP 4: Creating the SaveCustomerInfo() method
After setting up our GUI then let’s go ahead and create the method for inserting the data to the database using L2S. Here are the code blocks below:
using System;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Xml.Linq;
public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void Button1_Click(object sender, EventArgs e)
    {
        SaveCustomerInfo();
    }

    private void SaveCustomerInfo()
    {
        using (NorthwindDataContext context = new NorthwindDataContext())
        {
            // Create a new instance of the Customer object
            Customer cust = new Customer();

            // Add new values to each field
            cust.CustomerID = TextBoxID.Text;
            cust.CompanyName = TextBoxCompanyName.Text;
            cust.ContactName = TextBoxContactName.Text;
            cust.ContactTitle = TextBoxContactTitle.Text;
            cust.Address = TextBoxAddress.Text;
            cust.City = TextBoxCity.Text;
            cust.Region = TextBoxRegion.Text;
            cust.PostalCode = TextBoxPostalCode.Text;
            cust.Country = TextBoxCountry.Text;

            // Insert the new Customer object
            context.Customers.InsertOnSubmit(cust);

            // Submit changes to the database
            context.SubmitChanges();

            // Display a message for the successful operation
            LiteralMessage.Text = "<p style='color:Green;'>Information Successfully saved!</p>";
        }
    }
}
As you can see, the code above is very straightforward. First, we created a new instance of the DataContext which we created in STEP 2 and wrapped it inside a "using" block; this is to ensure that the DataContext will be disposed after its processing. Second, we created a new instance of the Customer object that was defined within the DataContext; this object has properties which will be filled with values that come from the user inputs. Third, we inserted the new Customer object into the Customers set and then called context.SubmitChanges() to update our database. Lastly, L2S will do the rest for you ;).
Note: The Customer and Customers set objects are automatically created once you've added the Customer table to the .dbml design surface.
STEP 5: Run the code
Running the code above produces something like the following in the browser:
From there we can fill in those fields with the values we want. Just for this demo, notice that I have filled in those fields with sample data. Hitting the Save button will invoke the SaveCustomerInfo() method, which is responsible for doing the insert operation. Now if we look at the database we can see that the data we entered was successfully saved. See the screen shot below:
Cool! | http://gamecontest.geekswithblogs.net/dotNETvinz/archive/2010/03/11/inserting-data-to-database-using-linq-to-sql.aspx | CC-MAIN-2019-43 | refinedweb | 847 | 54.63 |
Smart Unit Tests – Test to Code Binding, Test Case Management
April 18, 2015
[Editor’s note: “Smart Unit Tests” has been renamed to “IntelliTest” with effect from Visual Studio 2015 Release Candidate (RC).]
In an earlier post we had mentioned how Smart Unit Tests can emit a suite of tests for a given code-under-test, and how it can manage this test suite as the code-under-test itself evolves. For any given method serving as the code-under-test, the emitted test suite consists of a "parameterized unit test" and one or more "generated unit tests"; the following figure illustrates the Pex* custom attributes used to identify the test-to-code binding that in turn enables such management. These attributes are defined within the Microsoft.Pex.Framework namespace.
At compile time, these attributes are baked into the assembly as metadata for the types and methods to which they were applied. On subsequent invocations, Smart Unit Tests can reflect on the assembly and, with the help of this metadata, re-discover the test-to-code binding.
As you would have noticed, the generated unit tests are just traditional unit tests; indeed, they will show up in the Visual Studio Test Explorer just like any other hand written test, although it is not expected that such generated unit tests will be edited by hand. Each generated unit test calls into the parameterized unit test which then calls into the code-under-test.
The programmatic separation between the generated unit tests and the parameterized unit test allows the parameterized unit test to serve as a single location where you can specify correctness properties about the code-under-test, that all of the generated unit tests can validate. In a future post we will discuss ways to express such correctness properties, but it is not a matter of concern for the test-to- code binding, the topic of this post. What does matter is that the generated unit tests and the parameterized unit test are placed into the same assembly.
The management of these generated unit tests involves the following:
- Preventing the emission of duplicate tests: the code-under-test is always explored from scratch by the testing engine. Therefore it might generate tests that had already been generated before, and such tests should not be emitted.
- Deleting tests that have become irrelevant: when the code-under-test changes, previously relevant tests might become irrelevant, and might need to be replaced with new tests.
Here is how it is done:
Given that the parameterized unit test and the generated unit tests are placed into the same assembly, the testing engine pre-processes any existing “generated unit tests” (emitted previously) by scanning the assembly for the test fixture (identified by the type having the PexClass annotation). Generated unit tests are recognized by the Test attribute (the TestMethod, and PexGeneratedBy annotations in this case). For each such unit test method, it fetches the source code, trims away all whitespaces and uses the resulting string as a “hash” of the generated test. Once this is done, the testing engine has a dictionary of such hashes that it can consult to determine whether a newly generated test already exists. Duplicate tests are not emitted, and if a previously existing test case is not generated, it is deleted.
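The de-duplication scheme described above can be sketched roughly as follows; this is an illustrative Python model of the idea, not the actual IntelliTest implementation (which works over .NET assemblies):

```python
def normalize(source: str) -> str:
    # Trim away all whitespace so that formatting differences
    # do not affect the comparison.
    return "".join(source.split())


class GeneratedTestSuite:
    def __init__(self):
        # Maps whitespace-insensitive source "hashes" to test names.
        self.hashes = {}

    def preprocess(self, existing_tests):
        # Scan previously emitted tests and record their hashes.
        for name, source in existing_tests.items():
            self.hashes[normalize(source)] = name

    def should_emit(self, generated_source: str) -> bool:
        # A newly generated test is emitted only if no previously
        # emitted test has the same normalized source.
        return normalize(generated_source) not in self.hashes
```

Tests whose normalized source is never regenerated can then be deleted, which is how previously relevant tests get cleaned up.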
Keeping a suite of unit tests in sync with a fast-evolving code base is a challenge. Manually adapting the code of a potentially large number of unit tests incurs significant costs. We hope such automatic management of the suite of generated unit tests helps in addressing this challenge. As ever, report any issues or overall feedback below, or through the Send a Smile feature in Visual Studio.
I thought that the main purpose of unit tests was to act as a "security net" for detecting regressions when the code changes. However, this approach implies that when the code changes, the tests will change also to reflect those changes, so… what are they useful for?
@Tao,
When the code changes, the tests will be updated automatically “only if” you rerun IntelliTest.
This is useful in 2 cases:
(1) IntelliTest might need to be invoked multiple times as you progressively get it to generate tests to cover all of our code. During this process, new tests might need to be generated, or previously generated tests might need to be deleted. The test-to-code binding enables IntelliTest to do all of that automatically. The resultant suite of tests can be used as the net you mention.
(2) Over time, your application code itself will evolve. And that is when Test case maintenance comes into picture. New tests might be needed; some existing tests might no longer be relevant and might need to be deleted, etc. A significant cost is incurred in such maintenance . If the application code had IntelliTest based tests already, then rerunning IntelliTest on your code can help automate such maintenance. | https://blogs.msdn.microsoft.com/devops/2015/04/18/smart-unit-tests-test-to-code-binding-test-case-management/ | CC-MAIN-2017-43 | refinedweb | 824 | 56.79 |
I’m doing a little Ember app, and if there’s one thing I’ve learned from writing software, it’s to blog about error messages. Two-years-later me has ended up finding my own posts when searching for help!
So today, when getting started with Ember Data 1.13, I was trying to use the
new
JSONAPIAdapter. I saw this code snippet:
App.ApplicationAdapter = DS.JSONAPIAdapter.extend({ namespace: 'v1', });
Using that gave me an error when
ember serve-ing, though:
app.js: line 16, col 26, ‘DS’ is not defined.
Turns out,
DS isn’t in scope by default, even though
ember-cli installs
Ember Data by default.
Fixing this just means importing it at the top of
app/app.js:
import DS from 'ember-data';
Easy enough! | http://blog.steveklabnik.com/posts/2015-07-05-ember-data--ds-is-not-defined | CC-MAIN-2018-47 | refinedweb | 130 | 68.47 |
useState is considered to be the most basic of all the hooks provided by React. It is also the one you are most likely to use (no pun intended), alongside
useEffect.
Yet over the last couple of months, I have seen this hook being misused a lot. This has mostly nothing to do with the hook itself, but because state management is never easy.
This is the first part of a series I'm calling useState pitfalls, where I will try to outline common scenarios with the useState hook that might better be solved differently.
What is state?
I think it all boils down to understanding what state is. Or more precisely, what state isn't. To comprehend this, we have to look no further than the official react docs:.
So far, so easy. Putting props to state (1) is a whole other topic I will probably write about another time, and if you are not using the setter at all (2), then it is hopefully pretty obvious that we are not dealing with state.
That leaves the third question: derived state. It might seem quite apparent that a value that can be computed from a state value is not it's own state. However, when I reviewed some code challenges for a client of mine lately, this is exactly the pattern I have seen a lot, even from senior candidates.
An example
The exercise is pretty simple and goes something like this: Fetch some data from a remote endpoint (a list of items with categories) and let the user filter by the category.
The way the state was managed looked something like this most of the time:
import { fetchData } from './api' import { computeC <>...</> }
At first glance, this looks okay. You might be thinking: We have an effect that fetches the data for us, and another effect that keeps the categories in sync with the data. This is exactly what the useEffect hook is for (keeping things in sync), so what is bad about this approach?
Getting out of sync
This will actually work fine, and it's also not totally unreadable or hard to reason about. The problem is that we have a "publicly" available function
setCategories that future developers might use.
If we intended our categories to be solely dependent on our data (like we expressed with our useEffect), this is bad news:
import { fetchData } from './api' import { computeCategories, getMoreC ( <> ... <Button onClick={() => setCategories(getMoreCategories())}>Get more</Button> </> ) }
Now what? We have no predictable way of telling what "categories" are.
- The page loads, categories are X
- User clicks the button, categories are Y
- If the data fetching re-executes, say, because we are using react-query, which has features like automatic re-fetching when you focus your tab or when you re-connect to your network (it's awesome, you should give it a try), the categories will be X again.
Inadvertently, we have now introduced a hard to track bug that will only occur every now and then.
No-useless-state
Maybe this is not so much about useState after all, but more about a misconception with useEffect: It should be used to sync your state with something outside of React. Utilizing useEffect to sync two react states is rarely right.
So I'd like to postulate the following:
Whenever a state setter function is only used synchronously in an effect, get rid of the state!
— TkDodo
This is loosely based on what @sophiebits posted recently on twitter:
This is solid advice, and I'd go even further and suggest that unless we have proven that the calculation is expensive, I wouldn't even bother to memoize it. Don't prematurely optimize, always measure first. We want to have proof that something is slow before acting on it. For more on this topic, I highly recommend this article by @ryanflorence.
In my world, the example would look just like this:
import { fetchData } from './api' import { computeCategories } from './utils' const App = () => { const [data, setData] = React.useState(null) - const [categories, setCategories] = React.useState([]) + const categories = data ? computeCategories(data) : [] React.useEffect(() => { async function fetch() { const response = await fetchData() setData(response.data) } fetch() }, []) - - React.useEffect(() => { - if (data) { - setCategories(computeCategories(data)) - } - }, [data]) return <>...</> }
We've reduced complexity by halving the amount of effects and we can now clearly see that categories is derived from data. If the next person wants to calculate categories differently, they have to do it from within the
computeCategories function. With that, we will always have a clear picture of what categories are and where they come from.
A single source of truth.
Discussion (8)
That's really interesting and I think it holds up.
One common pattern for derived state is the selector pattern. Your selector is a function of state that returns your derived state value. This pattern is especially helpful for reuse.
If the app is large or complex enough, it becomes worth it to bring in a library like reselect to memoize selector return values.
reselect works well if you need to select things from „outside“ of react, like a redux store. If you are working with hooks based libraries, I prefer writing custom hooks and doing
useMemo, which is pretty much the same.
If you just store it in a
constand remove any updates to const, how does it update
categories? Or is it because arrays are objects?
categoriesis created new in every render cycle by
computeCategories. After that, we don’t update it at all. Every time
datachanges, React will re-render and thus call
computeCategoriesagain - giving us a new Array of categories.
Oh I didn't know React rerenders the entire component for a state change. Thanks for the explanation!
We've reduced complexity... But we've produced a lot of unnecessary re renders :) Or am I not right?)
I don’t see why - quite the opposite actually.
Before: initial render - fetch effect does setState, triggers render - effect that syncs data runs, triggers render. That’s 3 renders. In the final version, it’s just 2 renders: initial render + render from the one setState in the effect.
I didn’t mention this in the article because it shouldn’t be an argument though. The amount of renders usually doesn’t matter because renders are very fast. If they are not, try to make them fast rather than minimizing the amount of re-renders :)
What we do is calling
computeCategoriesin every render, which we didn’t do before. This doesn’t matter unless this function is expensive. If we have proof that we need to optimize it, we can do:
and now we call that function the same amount of times as before. | https://dev.to/tkdodo/don-t-over-usestate-4k72 | CC-MAIN-2021-10 | refinedweb | 1,110 | 64.91 |
Error in Opencv stitching project
(/upfiles/14449968823861101.jpg) I'm building the Opencv stitching project by adding the available files by myself. The project gets build without any error but there is an error, when I run it without debugging. I'm using static libraries. The error is shown in this screenshot. When I debug this program then it throws the following message while going from line no. 97 to line no. 98.
Error when debugged: "First-chance exception at 0x000007FEDB3CC7CB (opencv_world300.dll) in ConsoleApplication1.exe: 0xC0000005: Access violation reading location 0xFFFFFFFFFFFFFFFF.
If there is a handler for this exception, the program may be safely continued."
Thank-you in advance.
#include <iostream> #include <opencv2\core\core.hpp> #include <opencv2\highgui\highgui.hpp> #include <opencv2\imgproc\imgproc.hpp> #include <opencv2\stitching.hpp> #include <vector> using namespace std; using namespace cv; vector<Mat> imgs; int main(){ Mat img1 = imread("E:/seecs/thesis/ConsoleApplication1/ConsoleApplication1/panorama_image1.jpg"); if (img1.empty()) { cout << "Can't read image '" << "'\n"; return -1; } imgs.push_back(img1); Mat img2 = imread("E:/seecs/thesis/ConsoleApplication1/ConsoleApplication1/panorama_image2.jpg"); if (img2.empty()) { cout << "Can't read image '" << "'\n"; return -1; } imgs.push_back(img2); Mat panoramaImage; Stitcher stitcher = Stitcher::createDefault(); Stitcher::Status stitcherStatus = stitcher.stitch(imgs,panoramaImage); waitKey(0); return 0; }
CMAKE OUTPUT
(more)(more)
Performing Test HAVE_CXX_FSIGNED_CHAR Performing Test HAVE_CXX_FSIGNED_CHAR - Failed Performing Test HAVE_C_FSIGNED_CHAR Performing Test HAVE_C_FSIGNED_CHAR - Failed - not found Looking for unistd.h Looking for unistd.h - not found Check size of off64_t Check size of off64_t - failed Looking for assert.h Looking for assert.h - found Looking for fcntl.h Looking for fcntl.h - found Looking for io.h Looking for io.h - found Looking for jbg_newlen Looking for jbg_newlen - not found Looking for mmap Looking for mmap - not found Looking for search.h Looking for search.h - found Looking for string.h Looking for string.h - found Looking for unistd.h Looking for unistd.h - not found ICV: Removing previous unpacked package: C:/opencv_3.0/opencv/sources/3rdparty/ippicv/unpack ICV: Unpacking ippicv_windows_20141027.zip to C:/opencv_3.0/opencv/sources/3rdparty/ippicv/unpack... ICV: Package successfully downloaded found IPP (ICV version): 8.2.1 [8.2.1] at: C:/opencv_3.0/opencv/sources ...
Okay, screenshots with debug errors --> avoid those. They are unreadable and clog up a topic. Secondly, provide the code that you are running, which file and what exact piece of code is generating this error? It seems to me that you are trying to read something which is in a directory where you do not have reading rights OR you are trying to load something from memory that is not there.
Are you sure you have loaded the images? it seems to be an empty address
I'm using this website first time so, sorry for any inconvinience in reading the code
MkZHh, next time, code should go into your question , not into a comment. also there's a "10101" button for code formatting. you'll learn ;) (no prob, we all start like this)
I'm neither using CUDA nor OPENCL or etc. I'm only opencv 3.0 on Windows 7 and x64 platform. I've added the following header files: autocalib, blenders, camera exposure_compensate, matchers, motion_estimators, opencl_kernels_stitching, opencv, seam_finders, stitching, timelapsers, util, util_inl, wrapers, wrapers_inl. The CPP files are: blenders, camera exposure_compensate, matchers, motion_estimators, opencl_kernels_stitching, seam_finders, stitching, timelapsers, util, wrapers, wrapers_cuda
You're missing an "o" here:
Stitcher::Status stitcherStatus = stitcher.stitch(imgs,panramaImage);(should be
Stitcher::Status stitcherStatus = stitcher.stitch(imgs,panoramaImage);)
While I was decreasing the spaces between the lines, this "o" was erased but actually, it is present in my code as I've checked it. So, the problem lies somewhere else.
Acccording to Matlab the size of the two images are 600x400 each and the images are present in the directory.If I just write a code in the same file with imread command to just read image and then imshow command.Then the image is displayed correctly as the result.How can I avoid this problem that I've told you above. It seems that the data goes into "imgs" but in the stitcher.cpp (that comes with stitching directory), the "estimatetransform" gets no data. I think that the problem lies there. As you are an expert so, you can tell it better that where is the problem and what should be its solution. :) Please ignore the screenshot and consider this error written in the first statement:
I see no estimateTransform in the code you have posted...
@MkZHh could you provide the images? | https://answers.opencv.org/question/73354/error-in-opencv-stitching-project/ | CC-MAIN-2021-39 | refinedweb | 755 | 61.73 |
[Date Index]
[Thread Index]
[Author Index]
Re: Re-virginating Manipulates?
On Fri, 3 Dec 2010 05:19:57 -0500 (EST), AES wrote:
>> In the Cell menu, "Delete All Output" will get rid of all of the
>> output, print, and message cells, which goes a long way toward what
>> you want. Of course, there may be latent kernel state which you'll
>> need to flush. And the only absolutely sure way of doing that is to
>> quit the kernel. Evaluation->Quit Kernel->Local will do it, or you
>> can just evaluate
>>
>> Quit
>>
>> in a new cell.
>>
>> Sincerely,
>>
>> John Fultz
>>
> Thanks much.
>
> As I'm sure you know, some apps have a Revert command (often in the same
> menu as their Save and Open commands), which generally means "Revert to
> last Saved version" and which can be handy -- but of course this applies
> only to documents, not the app itself; and it can be dangerous in the
> case of apps that periodically do auto-Saves on docs without visible
> indication to the user.
>
>?
Todd Gayley wrote a utility called CleanSlate. He also wrote an entire article
for the Mathematica Journal about how he did it (this was, I think, in 1994 or
1995). I don't remember the details, but it turned out that it was definitely
not trivial. It's easy enough to clear out the Global` namespace, which is
satisfactory enough for many purposes. But to truly reset the kernel to a
virgin state is a much more challenging problem.
I'm not sure whether CleanSlate is still around or not, or how well it still
works. Others on this list probably know better than I.
That having been said, it really is very easy to quit and restart your kernel.
It takes almost no time on modern computers. And there won't be any weirdo bugs
in corner cases of things that were overlooked in the "virgination".
Sincerely,
John Fultz
jfultz at wolfram.com
User Interface Group
Wolfram Research, Inc. | http://forums.wolfram.com/mathgroup/archive/2010/Dec/msg00127.html | CC-MAIN-2014-52 | refinedweb | 331 | 71.44 |
24 April 2012 12:06 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
The country imported 283,951 tonnes of naphtha in March 2012, a 6% year-on-year decrease and a 17% drop from February, according to China Customs data.
The country imported 317,060 tonnes of polypropylene (PP) in March, a 7% year-on-year decrease, while its toluene imports in the same month dropped by 67% year on year to 30,015 tonnes, according to the data.
Imports of some petrochemicals increased year on year in March and the products include ethylene, propylene, butadiene, benzene, purified terephthalic acid (PTA) and caprolactam, the data showed. | http://www.icis.com/Articles/2012/04/24/9552916/chinas-imports-of-most-petrochemical-products-down-in-march.html | CC-MAIN-2015-18 | refinedweb | 106 | 55.58 |
Important: Please read the Qt Code of Conduct -
Help... Ongoing heap corruption issue and question about main(...)
I have had numerous issues with heap corruption in my program all revolving around using a QTcpSocket to send login data. My most recent post regarding the matter (found "here":) was resolved by using a different approach that read the data a byte at a time rather than the entire array. However, no matter how I read the object, I cannot abort or close the socket without running into the same crash that QTcpSocket.readAll() was doing. I get a "User Breakpoint" in dbgheap.c. (Method: _CrtIsValidHeapPointer(const void * pUserData)
I am wondering if the fact that the QByteArray being sent is created in main() has anything to do with it (using the new keyword). Does the main() method, the QMainWindow, and QApplication objects all share the same heap? or does the main() method use its own heap for anything created with the new keyword? Do namespaces have anything to do with it? Do namespaces each have their own heap?
The above issue doesn't seem to be the problem. I moved the entire method to the mainwindow class so that no objects used are created inside main().
Another possibility that has occurred to me is that the TcpSocket actually used to send the data is located in a separate dll. Will this have a separate heap space? I will try and see if I can make copies of the objects sent across the "boundary" to avoid this possibility works. Meanwhile... if anyone has any suggestions it would be greatly appreciated.
Doing the above didn't help at all. I am really frustrated at this point. I cannot figure this out. In every other way than the things I have explained above, this part of my code matches the examples in the fortune cookie server/client pair. Could this be some sort of bug? | https://forum.qt.io/topic/46205/help-ongoing-heap-corruption-issue-and-question-about-main | CC-MAIN-2022-05 | refinedweb | 321 | 74.49 |
Revision Log
Updates for single-genome loader.
package ERDB;

use strict;
use base qw(Exporter);
use vars qw(@EXPORT_OK);
@EXPORT_OK = qw(encode);
use Tracer;
use Data::Dumper;
use XML::Simple;
use ERDBQuery;
use ERDBObject;
use Stats;
use Time::HiRes qw(gettimeofday);
use Digest::MD5 qw(md5_base64);
use CGI qw(-nosticky);
use WikiTools;
use ERDBExtras;
use FreezeThaw;

Entities and relationships are collectively referred to in the documentation as
I<objects>. Although this package is designed for general use, most examples are
derived from the world of bioinformatics, which is where this technology was
first deployed; C<Genome> and C<Feature>, for example, are typical entities. The
database itself is described by an XML file which, in addition to defining the
entities and relationships, contains notes about the data and information about
how to display a diagram of the database. These are used to create web pages
describing the data.

Special support is provided for text searching. An entity field can be marked as
I<searchable>, in which case it will be used to generate a text search index in
which the user searches for words in the field instead of a particular field
value.

=head2 Loading

Considerable support is provided for loading a database from flat files. The
flat files are in the standard format expected by the MySQL C<LOAD DATA INFILE>
command. This command expects each line to represent a database record and each
record to have all the fields specified, in order, with tab characters
separating the fields. The L<ERDBLoadGroup> object can be subclassed and used to
create load files that can then be loaded using the L<ERDBLoader.pl> command;
however, there is no requirement that this be done.

=head3 Constructors

In order to use the load facility, the constructor for the database object must
be able to function with no parameters or with the parameters construed as a
hash. The following options are used by the ERDB load facility. It is not
necessary to support them all.

=over 4

=item loadDirectory

Data directory to be used by the loaders.

=item DBD

XML database definition file.

=item dbName

Name of the database to use.

=item sock

Socket for accessing the database.
=item userData

Name and password used to log on to the database, separated by a slash.

=item dbhost

Database host name.

=back

=head2 Data Types, Queries and Filtering

=head3 Data Types

The ERDB system supports many different data types. It is possible to configure
additional user-defined types by adding PERL modules to the code. Each new type
must be a subclass of L<ERDBType>. Standard types are listed in the compile-time
STANDARD_TYPES constant. Custom types should be listed in the
C<$ERDBExtras::customERDBtypes> variable of the configuration file. The variable
must be a list reference containing the names of the ERDBType subclasses for the
custom types. To get complete documentation of all the types, use the
L</ShowDataTypes> method. The most common types are

=over 4

=item int

Signed whole number with a range of roughly negative 2 billion to positive 2
billion. Integers are stored in the database as a 32-bit binary number.

=item string

Variable-length string, up to around 250 characters. Strings are stored in the
database as variable-length ASCII with some escaping.

=item text

Variable-length string, up to around 65000 characters. Text is stored in the
database as variable-length ASCII with some escaping. Only the first 250
characters can be indexed.

=item float

Double-precision floating-point number, ranging in magnitude from roughly
10^-300 to 10^300, with around 14 significant digits. Floating-point numbers are
stored in the database in IEEE 8-byte floating-point format.

=item date

Date/time value, in whole seconds. Dates are stored as a number of seconds from
the beginning of the Unix epoch (January 1, 1970) in Universal Coordinated Time.
This makes it identical to a date or time number in PERL, Unix, or Windows.

=back

All data fields are converted when stored or retrieved using the L</EncodeField>
and L</DecodeField> methods. This allows us to store very exotic data values
such as string lists, images, and PERL objects.
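As an illustrative sketch of how these conversion methods might be called
directly (this assumes an existing ERDB instance C<$erdb>, and that
L</DecodeField> takes the same field-name-plus-value signature as
L</EncodeField>; the C<Genome(description)> field is invented for the example):

    # Encode a PERL value exactly as it would be stored in the
    # database for the text field Genome(description).
    my $stored = $erdb->EncodeField('Genome(description)', "line one\nline two");

    # Reverse the conversion to recover the original PERL value.
    my $value = $erdb->DecodeField('Genome(description)', $stored);

Because both methods take a field name in L</Standard Field Name Format>, the
appropriate type conversion is selected automatically from the database
definition.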
The conversion is not, however, completely transparent because no conversion is
performed on the parameter values for the various L</Get>-based queries. There
is a good reason for this: you can specify general SQL expressions as filters,
and it's extremely difficult for ERDB to determine the data type of a particular
parameter. This topic is dealt with in more detail below.

=head3 Standard Field Name Format

There are several places in which field names are specified by the caller. The
standard field name format is the name of the entity or relationship followed by
the field name in parentheses. In some cases a particular entity or relationship
is considered the default. Fields in the default object can be specified as an
unmodified field name. For example,

    Feature(species-name)

would specify the species name field for the C<Feature> entity. If the
C<Feature> table were the default, it could be specified as

    species-name

without the object name. You may also use underscores in place of hyphens, which
can be syntactically more convenient in PERL programs.

    species_name

In some cases, the object name may not be the actual name of an object in the
database. It could be an alias assigned by a query, or the converse name of a
relationship. Alias names and converse names are generally specified in the
object name list of a query method. The alias or converse name used in the query
method will be carried over in all parameters to the method and any data value
structures returned by the query. In most cases, once you decide on a name for
something in a query, the name will stick for all data returned by the query.

=head3 Queries

Queries against the database are performed by variations of the L</Get> method.
This method has three parameters: the I<object name list>, the I<filter clause>,
and the I<parameter list>.
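For instance, a query against the C<Genome> and C<Feature> entities might be
built and its results iterated as follows. This is only a sketch: it assumes an
existing ERDB instance C<$erdb>, and that the companion L<ERDBQuery> and
L<ERDBObject> classes provide C<Fetch> and C<PrimaryValue> methods that behave
as their names suggest.

    # Find every feature of one genome and list the feature IDs.
    my $query = $erdb->Get('Genome HasFeature Feature',
                           'Genome(id) = ?', [$genomeID]);
    while (my $feature = $query->Fetch()) {
        print $feature->PrimaryValue('Feature(id)') . "\n";
    }

Each call to C<Fetch> returns one result object, from which individual field
values can be requested by name in the standard field name format.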
There is a certain complexity involved in queries that has evolved over a period
of many years in which the needs of the applications were balanced against a
need for simplicity. In most cases, you just list the objects used in the query,
code a standard SQL filter clause with field names in the L</Standard Field Name
Format>, and specify a list of parameters to plug in to the parameter marks. The
use of the special field name format and the list of object names spare you the
pain of writing a C<FROM> clause and worrying about joins. For example, here's a
simple query to look up all Features for a particular genome.

    my $query = $erdb->Get('Genome HasFeature Feature', 'Genome(id) = ?',
                           [$genomeID]);

For more complicated queries, see the rest of this section.

=head4 Object Name List

The I<object name list> specifies the names of the entities and relationships
that participate in the query. This includes every object used to filter the
query as well as every object from which data is expected. The ERDB engine will
automatically generate the join clauses required to make the query work, which
greatly simplifies the coding of the query. You can specify the object name list
using a list reference or a space-delimited string. The following two calls are
equivalent.

    my $query = $erdb->Get(['Genome', 'UsesImage', 'Image'], $filter, \@parms);

    my $query = $erdb->Get('Genome UsesImage Image', $filter, \@parms);

If you specify a string, you have a few more options.

=over 4

=item *

You can use the keyword C<AND> to start a new join chain with an object further
back in the list.

=item *

You can specify an object name more than once. If it is intended to be a
different instance of the same object, simply put a number at the end. Each
distinct number indicates a distinct instance.

=item *

You can use the converse name of a relationship to make the object name list
read more like regular English.

=back

These requirements do not come up very often, but they can make a big
difference.
For example, let us say you are looking for a feature that has a role in a
particular subsystem and also belongs to a particular genome. You can't use

    my $query = $erdb->Get(['Feature', 'HasRoleInSubsystem', 'Subsystem',
                            'HasFeature', 'Genome'], $filter, \@parms);

because you don't want to join the C<HasFeature> table to the subsystem table.
Instead, you use

    my $query = $erdb->Get("Feature HasRoleInSubsystem Subsystem AND Feature HasFeature Genome",
                           $filter, \@parms);

Now consider a taxonomy hierarchy using the entity C<Class> and the relationship
C<BelongsTo> and say you want to find all subclasses of a particular class. If
you code

    my $query = $erdb->Get("Class BelongsTo Class", 'Class(id) = ?', [$class])

then the query will only return the particular class, and only if it belongs to
itself. The following query finds every class that belongs to a particular
class.

    my $query = $erdb->Get("Class BelongsTo Class2", 'Class2(id) = ?', [$class]);

This query does the converse. It finds every class to which a particular class
belongs.

    my $query = $erdb->Get("Class BelongsTo Class2", 'Class(id) = ?', [$class]);

The difference is indicated by the field name used in the filter clause. Because
the first occurrence of C<Class> is specified in the filter rather than the
second occurrence (C<Class2>), the query is anchored on the from-side of the
relationship.

=head4 Filter Clause

The filter clause is an SQL WHERE clause (without the WHERE) to be used to
filter and sort the query. The WHERE clause can be parameterized with parameter
markers (C<?>). Each field used in the WHERE clause must be specified in
L</Standard Field Name Format>. Every object whose fields appear in the filter
clause must be included in the object name list on the query. There is never a
default object name for filter clause fields. Sort information may be included
by adding an C<ORDER BY> section at the end of the filter, with the sort fields
given in the standard field name format; unpredictable things may happen if a
sort field is from an entity's secondary relation. For example, to sort genomes
by ID, you could use

    ORDER BY Genome(id)

as your filter clause.

=head4 Parameter List

The parameter list is a reference to a list of parameter values.
The parameter values are substituted for the parameter marks in the filter
clause in strict left-to-right order. In the parameter list for a filter clause,
you must be aware of the proper data types and perform any necessary conversions
manually. This is not normally a problem. Most of the time, you only query
against simple numeric or string fields, and you only need to convert a string
if there's a possibility it has exotic characters like tabs or new-lines in it.
Sometimes, however, this is not enough.

When you are writing programs to query ERDB databases, you can call
L</EncodeField> directly, specifying a field name in the L</Standard Field Name
Format>. The value will be converted as if it was being stored into a field of
the specified type. Alternatively, you can call L</encode>, specifying a data
type name. Both of these techniques are shown in the example below.

    my $query = $erdb->Get("Genome UsesImage Image",
                           "Image(png) = ? AND Genome(description) = ?",
                           [$erdb->EncodeField('Image(png)', $myImage),
                            ERDB::encode(text => $myDescription)]);

You can export the L</encode> method if you expect to be doing this a lot and
don't want to bother with the package name on the call.

    use ERDB qw(encode);

    # ... much later ...

    my $query = $erdb->Get("Genome UsesImage Image",
                           "Image(png) = ? AND Genome(description) = ?",
                           [$erdb->EncodeField('Image(png)', $myImage),
                            encode(text => $myDescription)]);

=head2 XML Database Description

=head3 Global Tags

The entire database definition must be inside a B<Database> tag. The display
name of the database is given by the text associated with the B<Title> tag. The
display name is only used in the automated documentation. The entities and
relationships are listed inside the B<Entities> and B<Relationships> tags,
respectively. There is also a C<Shapes> tag that contains additional shapes to
display on the database diagram, and an C<Issues> tag that describes general
things that need to be remembered.
These last two are completely optional.

    <Database>
        <Title>... display title here...</Title>
        <Issues>
            ... comments here ...
        </Issues>
        <Regions>
            ... region definitions here ...
        </Regions>
        <Entities>
            ... entity definitions here ...
        </Entities>
        <Relationships>
            ... relationship definitions here ...
        </Relationships>
        <Shapes>
            ... shape definitions here ...
        </Shapes>
    </Database>

=head3 Notes and Asides

Entities, relationships, shapes, indexes, and fields all allow text tags called
B<Notes> and B<Asides>. Both these tags contain comments that appear when the
database documentation is generated. In addition, the text inside the B<Notes>
tag will be shown as a tooltip when mousing over the diagram.

The following special codes allow a limited rich text capability in Notes and
Asides.

    [b]...[/b]: Bold text
    [i]...[/i]: Italics
    [p]...[/p]: Paragraph
    [link I<href>]...[/link]: Hyperlink to the URL I<href>
    [list]...[*]...[*]...[/list]: Bullet list, with B<[*]> separating list elements.

=head3 Fields

The fields of an entity or relationship are described by B<Field> tags. B<Field>
tags have several attributes, including the following.

=over 4

=item default

This attribute specifies the default field value to be used while loading. The
default value is used if no value is specified in an L</InsertObject> call or in
the L<ERDBLoadGroup/Put> call that generates the load file. If no default is
specified, then the field is required and must have a value specified in the
call. The default value is specified as a string, so it must be in an encoded
form.

=back

=head3 Indexes

An entity or relationship can have alternate indexes to assist in searching on
fields other than the ID; a relationship index can only specify fields in the
relationship. The alternate indexes for an entity or relationship are listed
inside the B<Indexes> tag. The from-index of a relationship is specified using
the B<FromIndex> tag; the to-index is specified using the B<ToIndex> tag.

Be aware of the fact that in MySQL, the maximum size of an index key is 1000
bytes. This means at most four normal-sized strings.

Each index can contain a B<Notes> tag. In addition, it will have an
B<IndexFields> tag containing the B<IndexField> tags.
The B<IndexField> tags specify, in order, the fields used in the index.

=head3 Regions

A large database may be too big to fit comfortably on a single page. When this
happens, you have the option of dividing the diagram into regions that are shown
one at a time. When regions are present, a combo box will appear on the diagram
allowing the user to select which region to show. Each entity, relationship, or
shape can have multiple B<RegionInfo> tags describing how it should be displayed
when a particular region is selected. The regions themselves are described by a
B<Region> tag with a single attribute -- B<name> -- that indicates the region
name. The tag can be empty, or can contain C<Notes> elements that provide useful
documentation.

=over 4

=item name

Name of the region.

=back

=head3 Diagram

The diagram tag allows you to specify options for generating a diagram. If the
tag is present, then it will be used to configure diagram display in the
documentation widget (see L<ERDBPDocPage>). The tag has the following
attributes. It should not have any content; that is, it is not a container tag.

=over 4

=item width

Width for the diagram, in pixels. The default is 750.

=item height

Height for the diagram, in pixels. The default is 800.

=item ratio

Ratio of shape height to width. The default is 0.62.

=item size

Width in pixels for each shape.

=item nonoise

If set to 1, there will be a white background instead of an NMPDR noise
background.

=item editable

If set to 1, a dropdown box and buttons will appear that allow you to edit the
diagram, download your changes, and make it pretty for printing.

=item fontSize

Maximum font size to use, in points. The default is 16.

=item download

URL of the CGI script that downloads the diagram XML to the user's computer. The
XML text will be sent via the C<data> parameter and the default file name via
the C<name> parameter.

=item margin

Margin between adjacent shapes, in pixels. The default is 10.
=back

=head3 DisplayInfo

The B<DisplayInfo> tag is used to describe how an entity, relationship, or shape
should be displayed when the XML file is used to generate an interactive
diagram. A B<DisplayInfo> can have no elements, or it can have multiple
B<Region> elements inside. The permissible attributes are as follows.

=over 4

=item link

URL to which the user should be sent when clicking on the shape. For entities
and relationships, this defaults to the most likely location for the object
description in the generated documentation.

=item theme

The themes are C<black>, C<blue>, C<brown>, C<cyan>, C<gray>, C<green>,
C<ivory>, C<navy>, C<purple>, C<red>, and C<violet>. These indicate the color
to be used for the displayed object. The default is C<gray>.

=item col

The number of the column in which the object should be displayed. Fractional
column numbers are legal, though it's best to round to a multiple of 0.5. Thus,
a column of C<4.5> would be centered between columns 4 and 5.

=item row

The number of the row in which the object should be displayed. Fractional row
numbers are allowed in the same manner as for columns.

=item connected

If C<1>, the object is visibly connected by lines to the other objects
identified in the C<from> and C<to> attributes. This value is ignored for
entities, which never have C<from> or C<to>.

=item caption

Caption to be displayed on the object. If omitted, it defaults to the object's
name. You may use spaces and C<\n> codes to make the caption prettier.

=item fixed

If C<1>, then the C<row> and C<col> attributes are used to position the object,
even if it has C<from> and C<to> attributes. Otherwise, the object is placed in
the midpoint between the C<from> and C<to> shapes.

=back

=head3 RegionInfo

For large diagrams, the B<DisplayInfo> tag may have one or more B<RegionInfo>
elements inside, each belonging to one or more named regions. (The named
regions are described by the B<Region> tag.)
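To make the attribute lists above concrete, here is a hedged sketch of a B<DisplayInfo> tag with region-specific overrides. The entity name, region names, and positions are all invented for illustration.

```xml
<Entity name="Sample">
  <DisplayInfo theme="blue" col="2" row="3" caption="Sample">
    <RegionInfo name="Overview" col="1" row="1" />
    <RegionInfo name="Detail" theme="green" caption="Sample\nData" />
  </DisplayInfo>
</Entity>
```

Here the object is drawn in blue at column 2, row 3 by default; in the C<Overview> region it moves to the top-left corner, and in the C<Detail> region it keeps its position but overrides the theme and caption.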
The diagrammer will create a drop-down box that can be used to choose which
region should be displayed. Each region tag has a C<name> attribute indicating
the region to which it belongs, plus any of the attributes allowed on the
B<DisplayInfo> tag. The name indicates the name of a region in which the parent
object should be displayed. The other attributes override the corresponding
attributes in the B<DisplayInfo> parent. An object with no Region tags present
will be displayed in all regions. There is a default region with no name that
consists only of objects displayed in all regions. An object with no
B<DisplayInfo> tag at all will not be displayed in any region.

=head3 Issues

Issues are comments displayed at the top of the database documentation. They
have no effect on the database or the diagram. The C<Issue> tag is a text tag
with no attributes.

=head3 Entities

An entity is described by the B<Entity> tag. The entity can contain B<Notes>
and B<Asides> and an optional B<DisplayInfo> tag. Its attributes include the
following.

=over 4

=item autonumber

A value of C<1> means that after the entity's primary relation is loaded, the
ID field will be set to autonumber, so that new records inserted will have
automatic keys generated. Use this option with care. Once the relation is
loaded, it cannot be reloaded unless the table is first dropped and re-created.
In addition, the key must be an integer type.

=back

=head3 Relationships

A relationship is described by the B<Relationship> tag. Within a relationship,
there can be B<DisplayInfo>, B<Notes> and B<Asides> tags, a B<Fields> tag
containing the intersection data fields, a B<FromIndex> tag containing the
index used to cross the relationship in the forward direction, a B<ToIndex> tag
containing the index used to cross the relationship in reverse, and an
C<Indexes> tag containing the alternate indexes. Its attributes include the
following.

=over 4

=item converse

A name to be used when travelling backward through the relationship.
This value can be used in place of the real relationship name to make queries more readable. =back =head3 Shapes Shapes are objects drawn on the database diagram that do not physically exist in the database. Entities are always drawn as rectangles and relationships are always drawn as diamonds, but a shape can be either of those, an arrow, a bidirectional arrow, or an oval. The B<Shape> tag can contain B<Notes>, B<Asides>, and B<DisplayInfo> tags, and has the following attributes. =over 4 =item type Type of shape: C<arrow> for an arrow, C<biarrow> for a bidirectional arrow, C<oval> for an ellipse, C<diamond> for a diamond, and C<rectangle> for a rectangle. =item from Object from which this object is oriented. If the shape is an arrow, it will point toward the from-object. =item to Object toward which this object is oriented. If the shape is an arrow, it will point away from the to-object. =item name Name of the shape. This is used by other shapes to identify it in C<from> and C<to> directives. =back =cut # GLOBALS # Table of information about our datatypes. my $TypeTable; my @StandardTypes = qw(ERDBTypeBoolean ERDBTypeChar ERDBTypeCounter ERDBTypeDate ERDBTypeFloat ERDBTypeHashString ERDBTypeInteger ERDBTypeString ERDBTypeText); # Table translating arities into natural language. my %ArityTable = ( '1M' => 'one-to-many', 'MM' => 'many-to-many' ); # Options for XML input and output. 
my %XmlOptions = (GroupTags => { Relationships => 'Relationship', Entities => 'Entity', Fields => 'Field', Indexes => 'Index', IndexFields => 'IndexField', Issues => 'Issue', Regions => 'Region', Shapes => 'Shape' }, KeyAttr => { Relationship => 'name', Entity => 'name', Field => 'name', Shape => 'name' }, SuppressEmpty => 1, ); my %XmlInOpts = ( ForceArray => [qw(Field Index Issues IndexField Relationship Entity Shape)], ForceContent => 1, NormalizeSpace => 2, ); my %XmlOutOpts = ( RootName => 'Database', XMLDecl => 1, ); # Table for flipping between FROM and TO my %FromTo = (from => 'to', to => 'from'); # Name of metadata table. use constant METADATA_TABLE => '_metadata'; =head2 Special Methods =head3 new my $database = ERDB->new($dbh, $metaFileName, %options); Create a new ERDB object. =over 4 =item dbh L<DBKernel> database object for the target database. =item metaFileName Name of the XML file containing the metadata. =item options Hash of configuration options. =back The supported configuration options are as follows. Options not in this list will be presumed to be relevant to the subclass and will be ignored. =over 4 =item demandDriven If TRUE, the database will be configured for a I<forward-only cursor>. Instead of caching the query results, the query results will be provided at the rate in which they are demanded by the client application. This is less stressful on memory and disk space, but means you cannot have more than one query active at the same time. =back =cut sub new { # Get the parameters. my ($class, $dbh, $metaFileName, %options) = @_; # Insure we have a type table. GetDataTypes(); # See if we want to use demand-driven flow control for queries. if ($options{demandDriven}) { $dbh->set_demand_driven(1); } # Create the object. my $self = { _dbh => $dbh, _metaFileName => $metaFileName, _autonumbered => {}, }; # Bless it. bless $self, $class; # Load the meta-data. (We must be blessed before doing this, because it # involves a virtual method.) 
$self->{_metaData} = _LoadMetaData($self, $metaFileName, $options{externalDBD});
    # Return the object.
    return $self;
}

=head3 GetDatabase

    my $erdb = ERDB::GetDatabase($name, $dbd, %parms);

Return an ERDB object for the named database. It is assumed that the database
name is also the name of a class for connecting to it.

=over 4

=item name

Name of the desired database.

=item dbd

Alternate DBD file to use when processing the database definition.

=item parms

Additional command-line parameters.

=item RETURN

Returns an ERDB object for the named database.

=back

=cut

sub GetDatabase {
    # Get the parameters.
    my ($name, $dbd, %parms) = @_;
    # Get access to the database's package.
    require "$name.pm";
    # Plug in the DBD parameter (if any).
    if (defined $dbd) {
        $parms{DBD} = $dbd;
    }
    # Construct the desired object.
    my $retVal = eval("$name->new(%parms)");
    # Fail if we didn't get it.
    Confess("Error connecting to database \"$name\": $@") if $@;
    # Return the result.
    return $retVal;
}

=head3 ParseFieldName

    my ($tableName, $fieldName) = ERDB::ParseFieldName($string, $defaultName);

or

    my $normalizedName = ERDB::ParseFieldName($string, $defaultName);

Analyze a standard field name to separate the object name part from the field
part.

=over 4

=item string

Standard field name string to be parsed.

=item defaultName (optional)

Default object name to be used if the object name is not specified in the
input string.

=item RETURN

In list context, returns the table name followed by the base field name. In
scalar context, returns the field name in a normalized L</Standard Field Name
Format>, with underscores converted to hyphens and an object name present. If
the parse fails, will return an undefined value.

=back

=cut

sub ParseFieldName {
    # Get the parameters.
    my ($string, $defaultName) = @_;
    # Declare the return values.
    my ($tableName, $fieldName);
    # Get a copy of the input string with underscores converted to hyphens.
    my $realString = $string;
    $realString =~ tr/_/-/;
    # Parse the input string.
if ($realString =~ /^(\w+)\(([\w\-]+)\)$/) { # It's a standard name. Return the pieces. ($tableName, $fieldName) = ($1, $2); } elsif ($realString =~ /^[\w\-]+$/ && defined $defaultName) { # It's a plain name, and we have a default table name. ($tableName, $fieldName) = ($defaultName, $realString); } # Return the results. if (wantarray()) { return ($tableName, $fieldName); } elsif (! defined $tableName) { return undef; } else { return "$tableName($fieldName)"; } } =head3 CountParameterMarks my $count = ERDB::CountParameterMarks($filterString); Return the number of parameter marks in the specified filter string. =over 4 =item filterString ERDB filter clause to examine. =item RETURN Returns the number of parameter marks in the specified filter clause. =back =cut sub CountParameterMarks { # Get the parameters. my ($filterString) = @_; # Declare the return variable. my $retVal = 0; # Get a safety copy of the filter string. my $filterCopy = $filterString; # Remove all escaped quotes. $filterCopy =~ s/\\'//g; # Remove all quoted strings. $filterCopy =~ s/'[^']*'//g; # Count the question marks. while ($filterCopy =~ /\?/g) { $retVal++ } # Return the result. return $retVal; } =head2 Query Methods =head3 GetEntity my $entityObject = $erdb->GetEntity($entityType, $ID); Return an object describing the entity instance with a specified ID. =over 4 =item entityType Entity type name. =item ID ID of the desired entity. =item RETURN Returns a L<ERDBObject> object representing the desired entity instance, or an undefined value if no instance is found with the specified key. =back =cut sub GetEntity { # Get the parameters. my ($self, $entityType, $ID) = @_; # Encode the ID value. my $coded = $self->EncodeField("$entityType(id)", $ID); # Create a query. my $query = $self->Get($entityType, "$entityType(id) = ?", [$coded]); # Get the first (and only) object. 
my $retVal = $query->Fetch(); if (T(3)) { if ($retVal) { Trace("Entity $entityType \"$ID\" found."); } else { Trace("Entity $entityType \"$ID\" not found."); } } # in L</Standard Field Name Format>. =item RETURN Returns a list of the distinct values for the specified field in the database. =back =cut sub GetChoices { # Get the parameters. my ($self, $entityName, $fieldName) = @_; # Get the entity data structure. my $entityData = $self->_GetStructure($entityName); # Get the field descriptor. my $fieldData = $self->_FindField($fieldName, $entityName); # Get the name of the relation containing the field. my $relation = $fieldData->{relation}; # Fix up the field name. my $realName = _FixName($fieldData->{name}); # Get the field type. my $type = $fieldData->{type}; # Get the database handle. my $dbh = $self->{_dbh}; # Query the database. my $results = $dbh->SQL("SELECT DISTINCT $realName FROM $relation"); # Clean the results. They are stored as a list of lists, # and we just want the one list. Also, we want to decode the values. my @retVal = sort map { $TypeTable->{$type}->decode($_-> in L</Standard_Field_Name_Format>. in L</Standard Field Name Format>. The default object name is the first one in the object name list.. See L</Object Name List>. =item filterClause WHERE/ORDER BY clause (without the WHERE) to be used to filter and sort the query. See L</Filter Clause>. =item parameterList List of the parameters to be substituted in for the parameters marks in the filter clause. See L</Parameter List>. =item fields List of the fields to be returned in each element of the list returned, or a string containing a space-delimited list of field names. The field names should be in L</Standard Field Name Format>. ; # Convert the field names to a list if they came in as a string. my $fieldList = (ref $fields ? $fields : [split /\s+/, $fields]); # Loop through the records returned, extracting the fields. 
Note that if the
    # counter is non-zero, we stop when the number of records read hits the count.
    my @retVal = ();
    while (($count == 0 || $fetched < $count) &&
            (my $row = $query->Fetch())) {
        my @rowData = $row->Values($fieldList);
        push @retVal, \@rowData;
        $fetched++;
    }
    # Return the resulting list.
    return @retVal;
}

=head3 Exists

    my $found = $erdb->Exists($entityName, $entityID);

Return TRUE if an instance of the specified entity type exists with the
specified ID, else FALSE.

=head3 GetCount

    my $count = $erdb->GetCount(\@objectNames, $filter, \@params);

Return the number of rows found by a specified query. This method would
normally be used to count the records in a single table. For example, given the
call

    my $count = $erdb->GetCount(['Genome', 'HasFeature'],
                                'Genome(genus-species) LIKE ?', ['homo %']);

it would return the number of genomes, not the number of genome/feature pairs.

=over 4

=item objectNames

Reference to a list of the objects (entities and relationships) included in the
query, or a string containing a space-delimited list of object names. See
L</ObjectNames>.

=item filter

A filter clause for restricting the query. See L</Filter Clause>.

=item params

Reference to a list of the parameter values to be substituted for the parameter
marks in the filter. See L</Parameter List>.

=item RETURN

Returns the number of rows found by the query.

=back

=cut

sub GetCount {
    # Get the parameters.
    my ($self, $objectNames, $filter, $params) = @_;
    # Create the SQL command suffix to get the desired records.
    my ($suffix, $mappedNameListRef, $mappedNameHashRef) =
            $self->_SetupSQL($objectNames, $filter);
    # Get the object we're counting.
    my $firstObject = $mappedNameListRef->[0];
    # Find out if we're counting an entity or a relationship.
    my $countedField;
    if ($self->IsEntity($mappedNameHashRef->{$firstObject}->[0])) {
        $countedField = "id";
    } else {
        # For a relationship we count the to-link because it's usually more
        # numerous. Note we're automatically converting to the SQL form
        # of the field name (to_link vs. to-link), and we're not worried
        # about converses.
        $countedField = "to_link";
    }
    # Prefix it with text telling it we want a record count.

=head3 GetList

    my @dbObjects = $erdb->GetList(\@objectNames, $filterClause, \@params);

Return a list of L<ERDBObject> objects for the specified query.
This method is essentially the same as L</Get> except it returns a list of
objects rather than a query object that can be used to get the results one
record at a time. This is almost always preferable to L</Get> when the result
list is a manageable size.

=over 4

=item objectNames

Reference to a list containing the names of the entity and relationship objects
to be retrieved, or a string containing a space-delimited list of object names.
See L</Object Name List>.

=item RETURN

Returns a list of L<ERDBObject> objects.

=back

=cut

=head3 Get

    my $query = $erdb->Get(\@objectNames, $filterClause, \@params);

This method returns a query object for entities of a specified type using a
specified filter.

    # Now we create the relation map, which enables ERDBQuery to determine the
    # order, name and mapped name for each object in the query.
    my @relationMap = _RelationMap($mappedNameHashRef, $mappedNameListRef);
    # Return the statement object.
    my $retVal = ERDBQuery::_new($self, $sth, \@relationMap);
    return $retVal;
}

=head3 Prepare

    my $query = $erdb->Prepare($objects, $filterString, $parms);

Prepare a query for execution but do not create a statement handle. This is
useful if you have a query that you want to validate but you do not yet want to
acquire the resources to run it.

=over 4

=item objects

List containing the names of the entity and relationship objects to be
retrieved, or a string containing a space-delimited list of names. See
L</Object Name List>.

=item filterString

WHERE clause (without the WHERE) to be used to filter and sort the query. See
L</Filter Clause>.

=item parms

Reference to a list of parameter values to be substituted into the filter
clause. See L</Parameter List>.

=item RETURN

Returns an L<ERDBQuery> object that can be used to check field names or that
can be populated with artificial data.

=back

=cut

sub Prepare {
    # Get the parameters.
    my ($self, $objects, $filterString, $parms) = @_;
    # Process the SQL stuff.
    my ($suffix, $mappedNameListRef, $mappedNameHashRef) =
        $self->_SetupSQL($objects, $filterString);
    # Create the query.
    my $command = "SELECT " .
join(".*, ", @{$mappedNameListRef}) . ".* $suffix";
    # Now we create the relation map, which enables ERDBQuery to determine the
    # order, name and mapped name for each object in the query.
    my @relationMap = _RelationMap($mappedNameHashRef, $mappedNameListRef);
    # Create the query object without a statement handle.
    my $retVal = ERDBQuery::_new($self, undef, \@relationMap);
    # Cache the command and the parameters.
    $retVal->_Prepare($command, $parms);
    # Return the result.
    return $retVal;
}

Name of the object to be searched in full-text mode. If the object name list is
a list reference, you can also specify the index into the list.

    = ($idx =~ /^\d+$/ ? $objectNames->[$idx] :

    # Now we create the relation map, which enables ERDBQuery to determine the
    # order, name and mapped name for each object in the query.
    my @relationMap = _RelationMap($mappedNameHashRef, $mappedNameListRef);
    # Return the statement object.
    $retVal = ERDBQuery::_new($self, $sth, \@relationMap);

=item field

Name of the field to be used to get the elements of the list returned. The
default object name for this context is the first object name specified.

=item RETURN

Returns a list of values.

=back

=cut

=head2 Documentation and Metadata Methods

=head3 ComputeFieldTable

    my ($header, $rows) = ERDB::ComputeFieldTable($wiki, $name, $fieldData);

Generate the header and rows of a field table for an entity or relationship.
The field table describes each field in the specified object.

=over 4

=item wiki

L<WikiTools> object (or equivalent) for rendering HTML or markup.

=item name

Name of the object whose field table is being generated.

=item fieldData

Field structure of the specified entity or relationship.

=item RETURN

Returns a reference to a list of the labels for the header row and a reference
to a list of lists representing the table cells.

=back

=cut

sub ComputeFieldTable {
    # Get the parameters.
    my ($wiki, $name, $fieldData) = @_;
    # We need to sort the fields. First comes the ID, then the
    # primary fields and the secondary fields.
    my %sorter;
    for my $field (keys %$fieldData) {
        # Get the field's descriptor.
my $fieldInfo = $fieldData->{$field}; # Determine whether or not we have a primary field. my $primary; if ($field eq 'id') { $primary = 'A'; } elsif ($fieldInfo->{relation} eq $name) { $primary = 'B'; } else { $primary = 'C'; } # Form the sort key from the flag and the name. $sorter{$field} = "$primary$field"; } # Create the header descriptor for the table. my @header = qw(Name Type Notes); # We'll stash the rows in here. my @rows; # Loop through the fields in their proper order. for my $field (Tracer::SortByValue(\%sorter)) { # Get the field's descriptor. my $fieldInfo = $fieldData->{$field}; # Format the type. my $type = "$fieldInfo->{type}"; # Secondary fields have "C" as the first letter in # the sort value. If a field is secondary, we mark # it as an array. if ($sorter{$field} =~ /^C/) { $type .= " array"; } # Format its table row. push @rows, [$field, $type, ObjectNotes($fieldInfo, $wiki)]; } # Return the results. return (\@header, \@rows); } =head3 FindEntity my $objectData = $erdb->FindEntity($name); Return the structural descriptor of the specified entity, or an undefined value if the entity does not exist. =over 4 =item name Name of the desired entity. =item RETURN Returns the definition structure for the specified entity, or C<undef> if the named entity does not exist. =back =cut sub FindEntity { # Get the parameters. my ($self, $name) = @_; # Return the result. return $self->_FindObject(Entities => $name); } =head3 FindRelationship my $objectData = $erdb->FindRelationship($name); Return the structural descriptor of the specified relationship, or an undefined value if the relationship does not exist. =over 4 =item name Name of the desired relationship. =item RETURN Returns the definition structure for the specified relationship, or C<undef> if the named relationship does not exist. =back =cut sub FindRelationship { # Get the parameters. my ($self, $name) = @_; # Return the result. 
return $self->_FindObject(Relationships => $name);
}

=head3 FindShape

    my $objectData = $erdb->FindShape($name);

Return the structural descriptor of the specified shape, or an undefined value
if the shape does not exist.

=over 4

=item name

Name of the desired shape.

=item RETURN

Returns the definition structure for the specified shape, or C<undef> if the
named shape does not exist.

=back

=cut

sub FindShape {
    # Get the parameters.
    my ($self, $name) = @_;
    # Return the result.
    return $self->_FindObject(Shapes => $name);
}

=head3 GetObjectsTable

    my $objectHash = $erdb->GetObjectsTable($type);

Return the metadata hash of objects of the specified type-- entity,
relationship, or shape.

=over 4

=item type

Type of object desired-- C<entity>, C<relationship>, or C<shape>.

=item RETURN

Returns a reference to a hash containing all metadata for database objects of
the specified type. The hash maps object names to object descriptors. The
descriptors represent a cleaned and normalized version of the definition XML.
Specifically, all of the implied defaults are filled in.

=back

=cut

sub GetObjectsTable {
    # Get the parameters.
    my ($self, $type) = @_;
    # Return the result.
    return $self->{_metaData}->{ERDB::Plurals($type)};
}

=head3 Plurals

    my $plural = ERDB::Plurals($singular);

Return the plural form of the specified object type (entity, relationship, or
shape). This is extremely useful in generating documentation.

=over 4

=item singular

Singular form of the specified object type.

=item RETURN

Plural form of the specified object type, in capital case.

=back

=cut

sub Plurals {
    # Get the parameters.
    my ($singular) = @_;
    # Convert to capital case.
    my $retVal = ucfirst $singular;
    # Handle a "y" at the end.
    $retVal =~ s/y$/ie/;
    # Add the "s".
    $retVal .= "s";
    # Return the result.
    return $retVal;
}

=head3 FieldType

    my $type = $erdb->FieldType($string, $defaultName);

Return the L<ERDBType> object for the specified field.

=over 4

=item string

Field name string to be parsed. See L</Standard Field Name Format>.
=item defaultName (optional)

Default object name to be used if the object name is not specified in the input
string.

=item RETURN

Return the type object for the field's type.

=back

=cut

sub FieldType {
    # Get the parameters.
    my ($self, $string, $defaultName) = @_;
    # Get the field descriptor.
    my $fieldData = $self->_FindField($string, $defaultName);
    # Compute the type.
    my $retVal = $TypeTable->{$fieldData->{type}};
    # Return the result.
    return $retVal;
}

=head3 IsSecondary

    my $type = $erdb->IsSecondary($string, $defaultName);

Return TRUE if the specified field is in a secondary relation, else FALSE.

=over 4

=item string

Field name string to be parsed. See L</Standard Field Name Format>.

=item defaultName (optional)

Default object name to be used if the object name is not specified in the input
string.

=item RETURN

Returns TRUE if the specified field is in a secondary relation, else FALSE.

=back

=cut

sub IsSecondary {
    # Get the parameters.
    my ($self, $string, $defaultName) = @_;
    # Get the field's name and object.
    my ($objName, $fieldName) = ERDB::ParseFieldName($string, $defaultName);
    # Retrieve its descriptor from the metadata.
    my $fieldData = $self->_FindField($fieldName, $objName);
    # Compare the table name to the object name.
    my $retVal = ($fieldData->{relation} ne $objName);
    # Return the result.
    return $retVal;
}

=head3 FindRelation

    my $relData = $erdb->FindRelation($relationName);

Return the descriptor for the specified relation.

=head3 GetRelationshipEntities

    my ($fromEntity, $toEntity) = $erdb->GetRelationshipEntities($relationshipName);

Return the names of the source and target entities for a relationship. If the
specified name is not a relationship, an empty list is returned.

=over 4

=item relationshipName

Name of the relevant relationship.

=item RETURN

Returns a two-element list. The first element is the name of the relationship's
from-entity, and the second is the name of the to-entity. If the specified name
is not for a relationship, both elements are undefined.
=back =cut sub GetRelationshipEntities { # Get the parameters. my ($self, $relationshipName) = @_; # Declare the return variable. my @retVal = (undef, undef); # Try to find the caller-specified name in the relationship table. my $relationships = $self->{_metaData}->{Relationships}; if (exists $relationships->{$relationshipName}) { # We found it. Return the from and to. @retVal = map { $relationships->{$relationshipName}->{$_} } qw(from to); } # Return the results. return @retVal; } ; } else { # Strip out the minus signs. Everything remaining must be a letter # or digit. my $strippedName = $fieldName; $strippedName =~ s/-//g; if ($strippedName !~ /^([a-z]|\d)+$/i) { Trace("Field name $fieldName contains illegal characters.") if T(1); $retVal = 0; } } #}}->averageLength(); $retVal += $fieldLen; } # Return the result." =over 4 =item relationName Name of the relation to be examined. This could be an entity name, a relationship name, or the name of a secondary entity relation. ); # Get the relation's field list. my @fields = @{$relationData->{Fields}}; my @fieldNames = map { $_->{name} } @fields; # Find out if the relation is a primary entity relation, # a relationship relation, or a secondary entity relation. my $entityTable = $self->{_metaData}->{Entities}; my $relationshipTable = $self->{_metaData}->{Relationships}; if (exists $entityTable->{$relationName}) { # Here we have a primary entity relation. We sort on the ID, and the # ID only. push @keyNames, "id"; } elsif (exists $relationshipTable->{$relationName}) { # Here we have a relationship. We sort using the FROM index followed by # the rest of the fields, in order. First, we get all of the fields in # a hash. my %fieldsLeft = map { $_ => 1 } @fieldNames; # Get the index. my $index = $relationData->{Indexes}->{idxFrom}; # Loop through its fields. for my $keySpec (@{$index->{IndexFields}}) { # Mark this field as used. The field may have a modifier, so we only # take the part up to the first space. 
$keySpec =~ /^(\S+)/;
            $fieldsLeft{$1} = 0;
            push @keyNames, $keySpec;
        }
        # Push the rest of the fields on.
        push @keyNames, grep { $fieldsLeft{$_} } @fieldNames;
    } else {
        # Here we have a secondary entity relation, so we have a sort on the whole
        # record. This essentially gives us a sort on the ID followed by the
        # secondary data field.
        push @keyNames, @fieldNames;
    }
    # Now we parse the key names into sort parameters. First, we prime the return
    # string.
    my $retVal = "sort $ERDBExtras::sort_options -u -T\"$ERDBExtras::temp\" -t\"\t\" ";
    #Type();
    #.
    my $realI = $i + 1;
    $fieldSpec = "$realI,$realI$modifier";
    }
    }
    # Add this field to the sort command.
    $retVal .= " -k$fieldSpec";
    }
    # Return the result.
    return $retVal;
}

=head3 GetConnectingRelationships

    my @list = $erdb->GetConnectingRelationships($entityName);

Return a list of the relationships connected to the specified entity.

=over 4

=item entityName

Entity whose connected relationships are desired.

=item RETURN

Returns a list of the relationships that originate from the entity. If the
entity is on the I<from> end, it will return the relationship name. If the
entity is on the I<to> end it will return the converse of the relationship
name.

=back

=cut

sub GetConnectingRelationships {
    # Get the parameters.
    my ($self, $entityName) = @_;
    # Declare the return variable.
    my @retVal;
    # Get the relationship list.
    my $relationships = $self->{_metaData}->{Relationships};
    # Find the entity.
    my $entity = $self->{_metaData}->{Entities}->{$entityName};
    # Only proceed if the entity exists.
    if (! defined $entity) {
        Trace("Entity $entityName not found.") if T(3);
    } else {
        # Loop through the relationships.
        my @rels = keys %$relationships;
        Trace(scalar(@rels) . " relationships found in connection search.") if T(3);
        for my $relationshipName (@rels) {
            my $relationship = $relationships->{$relationshipName};
            if ($relationship->{from} eq $entityName) {
                # Here we have a forward relationship.
push @retVal, $relationshipName; } elsif ($relationship->{to} eq $entityName) { # Here we have a backward relationship. In this case, the # converse relationship name is preferred if it exists. my $converse = $relationship->{converse} || $relationshipName; push @retVal, $converse; } } } # Return the result. return @retVal; } =head3 GetConnectingRelationshipData my ($froms, $tos) = $erdb->GetConnectingRelationshipData($entityName); Return the relationship data for the specified entity. The return will be a two-element list, each element of the list a reference to a hash that maps relationship names to structures. The first hash will be relationships originating from the entity, and the second element a reference to a hash of relationships pointing to the entity. =over 4 =item entityName Name of the entity of interest. =item RETURN Returns a two-element list, each list being a map of relationship names to relationship metadata structures. The first element lists relationships originating from the entity, and the second element lists relationships that point to the entity. =back =cut sub GetConnectingRelationshipData { # Get the parameters. my ($self, $entityName) = @_; # Create a hash that holds the return values. my %retVal = (from => {}, to => {}); # Get the relationship table in the metadata. my $relationships = $self->{_metaData}->{Relationships}; # Loop through it twice, once for each direction. for my $direction (qw(from to)) { # Get the return hash for this direction. my $hash = $retVal{$direction}; # Loop through the relationships, looking for our entity in the # current direction. for my $rel (keys %$relationships) { my $relData = $relationships->{$rel}; if ($relData->{$direction} eq $entityName) { # Here we've found our entity, so we put it in the # return hash. $hash->{$rel} = $relData; } } } # Return the results. return ($retVal{from}, $retVal{to}); } =head3 GetDataTypes my $types = ERDB::GetDataTypes(); Return a table of ERDB data types. 
The table returned is a hash of L</ERDBType> objects keyed by type name. =cut sub GetDataTypes { # Insure we have a type table. if (! defined $TypeTable) { # Get a list of the names of the standard type classes. my @types = @StandardTypes; # Add in the custom types, if any. if (defined $ERDBExtras::customERDBtypes) { push @types, @$ERDBExtras::customERDBtypes; } Trace("Type List: " . join(", ", @types)) if T(Types => 3); # Initialize the table. $TypeTable = {}; # Loop through all of the types, creating the type objects. for my $type (@types) { # Create the type object. my $typeObject; eval { require "$type.pm"; $typeObject = eval("$type->new()"); }; # Ensure we didn't have an error. if ($@) { Confess("Error building ERDB type table: $@"); } else { # Add the type to the type table. $TypeTable->{$typeObject->name()} = $typeObject; } } } # Return the type table. return $TypeTable; } =head3 ShowDataTypes my $markup = ERDB::ShowDataTypes($wiki, $erdb); Display a table of all the valid data types for this installation. =over 4 =item wiki An object used to render the table, similar to L</WikiTools>. =item erdb (optional) If specified, an ERDB object for a specific database. Only types used by the database will be put in the table. If omitted, all types are returned. =back =cut sub ShowDataTypes { my ($wiki, $erdb) = @_; # Compute the hash of types to display. my $typeHash = (); if (! defined $erdb) { # No ERDB object, so we list all the types. $typeHash = GetDataTypes(); } else { # Here we must extract the types used in the ERDB object. for my $relationName ($erdb->GetTableNames()) { my $relationData = $erdb->FindRelation($relationName); for my $fieldData (@{$relationData->{Fields}}) { my $type = $fieldData->{type}; my $typeData = $TypeTable->{$type}; if (! defined $typeData) { Confess("Invalid data type \"$type\" in relation $relationName."); } else { $typeHash->{$type} = $typeData; } } } } # We'll build table rows in here. We start with the header. 
my @rows = [qw(Type Indexable Sort Pos Format Description)]; # Loop through the types, generating rows. for my $type (sort keys %$typeHash) { # Get the type object. my $typeData = $typeHash->{$type}; # Compute the indexing column. my $flag = $typeData->indexMod(); if (! defined $flag) { $flag = "no"; } elsif ($flag eq "") { $flag = "yes"; } else { $flag = "prefix"; } # Compute the sort type. my $sortType = $typeData->sortType(); if ($sortType eq 'g' || $sortType eq 'n') { $sortType = "numeric"; } else { $sortType = "alphabetic"; } # Get the position (pretty-sort value). my $pos = $typeData->prettySortValue(); # Finally, the format. my $format = $typeData->objectType() || "scalar"; # Build the data row. my $row = [$type, $flag, $sortType, $pos, $format, $typeData->documentation()]; # Put it into the table. push @rows, $row; } # Form up the table. my $retVal = $wiki->Table(@rows); # Return the result. return $retVal; } =head3 GetSecondaryFields my %fieldTuples = $erdb->GetSecondaryFields($entityName); This method will return a list of the name and type of each of the secondary fields for a specified entity. Secondary fields are stored in two-column tables separate from the entity's primary relation. =head3 GenerateWikiData my @lines = $erdb->GenerateWikiData($wiki); Build a description of the database for a wiki. The database will be organized into a single page, with sections for each entity and relationship. The return value is a list of text lines. The parameter must be an object that mimics the object-based interface of the L</WikiTools> object. If it is omitted, L</WikiTools> is used. =cut sub GenerateWikiData { # Get the parameters. my ($self, $wiki) = @_; # If there's no Wiki object, use the default one. $wiki = WikiTools->new() if ! defined $wiki; # Declare the return variable. my @retVal; # Get the metadata structure and extract the entity, relationship, and shape lists. my $metadata = $self->{_metaData}; my $entityList = $metadata->{Entities}; my $relationshipList = $metadata->{Relationships}; my $shapeList = $metadata->{Shapes}; # Start with the introductory text. push @retVal, $wiki->Heading(2, "Introduction"); if (my $notes = $metadata->{Notes}) { push @retVal, _WikiNote($notes->{content}, $wiki); } # Generate the issue list.
if (my $issues = $metadata->{Issues}) { push @retVal, $wiki->Heading(3, 'Issues'); push @retVal, $wiki->List(map { $_->{content} } @{$issues}); } # Generate the region list. if (my $regions = $metadata->{Regions}) { push @retVal, $wiki->Heading(3, 'Diagram Regions'); for my $region (@$regions) { # Check for notes. my $notes = ""; if ($region->{Notes}) { $notes = $region->{Notes}->{content}; } # Put out the region name as a heading. push @retVal, $wiki->Heading(4, $region->{name}); # Output the notes for the region. push @retVal, _WikiNote($notes, $wiki); } } # Generate the type table. push @retVal, $wiki->Heading(2, "Data Types"); push @retVal, ShowDataTypes($wiki, $self); # Start the entity section. push @retVal, $wiki->Heading(2, "Entities"); # Loop through the entities. Note that unlike the situation with HTML, we # don't need to generate the table of contents manually, just the data # itself. for my $key (sort keys %$entityList) { # Create a header for this entity. push @retVal, "", $wiki->Heading(3, $key); # Get the entity data. my $entityData = $entityList->{$key}; # Plant the notes here, if there are any. push @retVal, ObjectNotes($entityData, $wiki); # Now we list the entity's relationships (if any). First, we build a list # of the relationships relevant to this entity. my @rels = (); for my $rel (sort keys %$relationshipList) { my $relStructure = $relationshipList->{$rel}; # Find out if this relationship involves this entity. my $dir; if ($relStructure->{from} eq $key) { $dir ='from'; } elsif ($relStructure->{to} eq $key) { $dir = 'to'; } if ($dir) { # Get the relationship sentence. my $relSentence = _ComputeRelationshipSentence($wiki, $rel, $relStructure, $dir); # Add it to the relationship list. push @rels, $relSentence; } } # Add the relationships as a Wiki list. push @retVal, $wiki->List(@rels); # Finally, the field table. push @retVal, _WikiObjectTable($key, $entityData->{Fields}, $wiki); } # Now the entities are documented. 
# Next we do the relationships. push @retVal, $wiki->Heading(2, "Relationships"); for my $key (sort keys %$relationshipList) { my $relationshipData = $relationshipList->{$key}; # Create the relationship heading. push @retVal, $wiki->Heading(3, $key); # Get the relationship's connection data. my $arity = $relationshipData->{arity}; my $fromEntity = $relationshipData->{from}; my $toEntity = $relationshipData->{to}; # We'll accumulate sentences describing the relationship in here. my @listElements = (); if ($arity eq "11") { push @listElements, "Each " . $wiki->Bold($fromEntity) . " relates to at most one " . $wiki->Bold($toEntity) . "."; } else { push @listElements, "Each " . $wiki->Bold($fromEntity) . " relates to multiple " . $wiki->Bold(Tracer::Pluralize($toEntity)) . "."; if ($arity eq "MM" && $fromEntity ne $toEntity) { push @listElements, "Each " . $wiki->Bold($toEntity) . " relates to multiple " . $wiki->Bold(Tracer::Pluralize($fromEntity)) . "."; } } if ($relationshipData->{converse}) { push @listElements, "Converse name is $relationshipData->{converse}."; } push @retVal, $wiki->List(@listElements); # Plant the notes here, if there are any. push @retVal, ObjectNotes($relationshipData, $wiki); # Finally, the field table. push @retVal, _WikiObjectTable($key, $relationshipData->{Fields}, $wiki); } # Now loop through the miscellaneous shapes. if ($shapeList) { push @retVal, $wiki->Heading(2, "Miscellaneous"); for my $shape (sort keys %$shapeList) { push @retVal, $wiki->Heading(3, $shape); my $shapeData = $shapeList->{$shape}; push @retVal, ObjectNotes($shapeData, $wiki); } } # All done. Return the lines. return @retVal; } =head3 ObjectNotes my @noteParagraphs = ERDB::ObjectNotes($objectData, $wiki); Return a list of the notes and asides for an entity or relationship in Wiki format. =over 4 =item objectData The metadata for the desired entity or relationship. =item wiki Wiki object used to render text. =item RETURN Returns a list of text paragraphs in Wiki markup form. =back =cut sub ObjectNotes { # Get the parameters. my ($objectData, $wiki) = @_; # Declare the return variable. my @retVal; # Loop through the types of notes.
for my $noteType (qw(Notes Asides)) { my $text = $objectData->{$noteType}; if ($text) { push @retVal, _WikiNote($text->{content}, $wiki); } } # Return the result. return @retVal; } =head3 CheckObjectNames my @errors = $erdb->CheckObjectNames($objectNameString); Check an object name string for errors. The return value will be a list of error messages. If no error is found, an empty list will be returned. This process does not guarantee a correct object name list, but it catches the most obvious errors without the need for invoking a full-blown L</Get> method. =over 4 =item objectNameString An object name string, consisting of a space-delimited list of entity and relationship names. =item RETURN Returns an empty list if successful, and a list of error messages if the list is invalid. =back =cut sub CheckObjectNames { # Get the parameters. my ($self, $objectNameString) = @_; # Declare the return variable. my @retVal; # Separate the string into pieces. my @objectNames = split /\s+/, $objectNameString; # Start in a blank state. my $currentObject; # Get the alias table. my $aliasTable = $self->{_metaData}->{AliasTable}; # Loop through the object names. for my $objectName (@objectNames) { # If we have an AND, clear the current object. if ($objectName eq 'AND') { # Insure we don't have an AND at the beginning or after another AND. if (! defined $currentObject) { push @retVal, "An AND was found in the wrong place."; } # Clear the context. undef $currentObject; } else { # Here the user has specified an object name. Get # the root name. unless ($objectName =~ /^([A-Za-z]+)(\d*)$/) { # Here the name has bad characters in it. Note that an error puts # us into a blank state. push @retVal, "Invalid characters found in \"$objectName\"."; undef $currentObject; } else { # Get the real name from the alias table. my $name = $aliasTable->{$1}; if (!
defined $name) { push @retVal, "Could not find an entity or relationship named \"$objectName\"."; undef $currentObject; } else { # Okay, we've got the real entity or relationship name. Does it belong here? # That's only an issue if there is a previous value in $currentObject. if (defined $currentObject) { my $joinClause = $self->_JoinClause($currentObject, $name); if (! $joinClause) { push @retVal, "There is no connection between $currentObject and $name." } } # Save this object as the new current object. $currentObject = $name; } } } } # Return the result. return @retVal; } =head3 GetTitle my $text = $erdb->GetTitle(); Return the title for this database. =cut sub GetTitle { # Get the parameters. my ($self) = @_; # Declare the return variable. my $retVal = $self->{_metaData}->{Title}; if (! $retVal) { # Here no title was supplied, so we make one up. $retVal = "Unknown Database"; } else { # Extract the content of the title element. This is the real title. $retVal = $retVal->{content}; } # Return the result. return $retVal; } =head3 GetDiagramOptions my $hash = $erdb->GetDiagramOptions(); Return the diagram options structure for this database. The diagram options are used by the ERDB documentation widget to configure the database diagram. If the options are not present, an undefined value will be returned. =cut sub GetDiagramOptions { # Get the parameters. my ($self) = @_; # Extract the options element. my $retVal = $self->{_metaData}->{Diagram}; # Return the result. return $retVal; } =head3 GetMetaFileName my $fileName = $erdb->GetMetaFileName(); Return the name of the database definition file for this database. =cut sub GetMetaFileName { # Get the parameters. my ($self) = @_; # Return the result. return $self->{_metaFileName}; } =head2 Database Administration and Loading Methods . =item failOnError If TRUE, then when an error occurs, the process will be killed; otherwise, the process will stay alive, but a message will be put into the statistics object. 
=back =cut sub LoadTable { # Get the parameters. my ($self, $fileName, $relationName, %options) = @_; # Record any error message in here. If it's defined when we're done # and failOnError is set, we confess it. my $errorMessage; #($@); $errorMessage = $@; } } } # Load the table. my $rv; eval { $rv = $dbh->load_table(file => $fileName, tbl => $relationName, style => $options{mode}); }; if (!defined $rv) { $retVal->AddMessage($@) if ($@); $errorMessage = "Table load failed for $relationName using $fileName."; $retVal->AddMessage("$errorMessage: " . $dbh->error_message); } else { # Here we successfully loaded the table. my $size = -s $fileName; Trace("$size bytes loaded into $relationName.") if T(2); $retVal->Add("bytes-loaded", $size); $retVal->Add("tables-loaded" => ($@) { $errorMessage = $@; $retVal->AddMessage($errorMessage); } } #); } } } if ($errorMessage && $options{failOnError}) { # Here the load failed and we want to error out. Confess($errorMessage); } # Analyze the table to improve performance. if (! $options{partial}) { Trace("Analyzing and compacting $relationName.") if T(3); $self->Analyze($relationName); } Trace("$relationName load completed.") if T(3); # Return the statistics. return $retVal; } =head3 InsertNew my $newID = $erdb->InsertNew($entityName, %fields); Insert a new entity into a table that uses sequential integer IDs. A new, unique ID will be computed automatically and returned to the caller. =over 4 =item entityName Type of the entity being inserted. The entity must have an integer ID. =item fields Hash of field names to field values. Every field in the entity's primary relation should be specified. =item RETURN Returns the ID of the inserted entity. =back =cut sub InsertNew { # Get the parameters. my ($self, $entityName, %fields) = @_; # Declare the return variable. my $retVal; # If this is our first insert, we update the ID field definition. if (! 
exists $self->{_autonumber}->{$entityName}) { # Check to see if this is an autonumbered entity. my $entityData = $self->FindEntity($entityName); if (! defined $entityData || ! $entityData->{autonumber}) { Confess("Cannot use InsertNew for an entity $entityName."); } else { # Create the alter table command. my $fieldString = $self->_FieldString($entityData->{Fields}->{id}); my $command = "ALTER TABLE $entityName CHANGE COLUMN id $fieldString AUTO_INCREMENT"; # Execute the command. my $dbh = $self->{_dbh}; $dbh->SQL($command); # Insure we don't do this again. $self->{_autonumber}->{$entityName} = 1; } } # Insert the entity. $self->InsertObject($entityName, %fields, id => undef); # Get the last ID inserted. my $dbh = $self->{_dbh}; $retVal = $dbh->last_insert_id(); # Return the result. return $retVal; } =head3 Analyze $erdb->Analyze($tableName); Analyze and compact a table in the database. This is useful after a load to improve the performance of the indexes. =over 4 =item tableName Name of the table to be analyzed and compacted. =back =cut sub Analyze { # Get the parameters. my ($self, $tableName) = @_; # Analyze the table. $self->{_dbh}->vacuum_it($tableName); } =head3 TruncateTable $erdb->TruncateTable($table); Delete all rows from a table quickly. This uses the built-in SQL C<TRUNCATE> statement, which effectively drops and re-creates a table with all its settings intact. =over 4 =item table Name of the table to be cleared. =back =cut sub TruncateTable { # Get the parameters. my ($self, $table) = @_; # Get the database handle. my $dbh = $self->{_dbh}; # Execute a truncation command. $dbh->SQL("TRUNCATE TABLE $table"); } }) { $self->_DumpRelation($outputDirectory, $relationName); } } # Next, we loop through the relationships. my $relationships = $metaData->{Relationships}; for my $relationshipName (keys %{$relationships}) { # Dump this relationship's relation.
$self->_DumpRelation($outputDirectory, $relationshipName); } } =head3 DumpTable my $count = $erdb->DumpTable($tableName, $directory); Dump the specified table to the named directory. This will create a load file having the same name as the relation with an extension of DTX. This file can then be used to reload the table at a later date. If the table does not exist, no action will be taken. =over 4 =item tableName Name of the table to dump. =item directory Name of the directory in which the dump file should be placed. =item RETURN Returns the number of records written. =back =cut sub DumpTable { # Get the parameters. my ($self, $tableName, $directory) = @_; # Declare the return variable. my $retVal; # Insure the table name is valid. if (exists $self->{_metaData}->{RelationTable}->{$tableName}) { # Call the internal dumper. $retVal = $self->_DumpRelation($directory, $tableName); } # Return the result. return $retVal; } =head3 TypeDefault my $value = ERDB::TypeDefault($type); Return the default value for fields of the specified type. =over 4 =item type Relevant type name. =item RETURN Returns a default value suitable for fields of the specified type. =back =cut sub TypeDefault { # Get the parameters. my ($type) = @_; # Validate the type. if (! exists $TypeTable->{$type}) { Confess("TypeDefault called for invalid type \"$type\".") } # Return the result. return $TypeTable->{$type}->default(); } with a suffix of C<.dtx>. Each file must be a tab-delimited table of encoded field values. Each line of the file will be loaded as a row of the target relation table. =over 4 =item directoryName Name of the directory containing the relation files to be loaded. =item rebuild TRUE if the tables should be dropped and rebuilt, else FALSE. =item RETURN Returns a L</Stats>String = $self->_FieldString($fieldData); # Push the result into the field list. 
push @fieldList, $fieldString; } # $erdb->VerifyFields($relName, \@fieldList); Run through the list of proposed field values, insuring that all of them are valid. =over 4 =item relName Name of the relation for which the specified fields are destined. =item fieldList Reference to a list, in order, of the fields to be put into the relation. =back =cut sub VerifyFields { # Get the parameters. my ($self, $relName, $fieldList) = @_; # Initialize the return value. my $retVal = 0; # Get the relation definition. my $relData = $self->FindRelation($relName); # Get the list of field descriptors. my $fieldThings = $relData->{Fields}; my $fieldCount = scalar @{$fieldThings}; # Loop through the two lists. for (my $i = 0; $i < $fieldCount; $i++) { # Get the descriptor and type of the current field. my $fieldThing = $fieldThings->[$i]; my $fieldType = $TypeTable->{$fieldThing->{type}}; Confess("Undefined field type $fieldThing->{type} in position $i ($fieldThing->{name}) of $relName.") if (! defined $fieldType); # Validate it. my $message = $fieldType->validate($fieldList->[$i]); if ($message) { # It's invalid. Generate an error. Confess("Error in field $i ($fieldThing->{name}) of $relName: $message"); } } # Return a 0 value, for backward compatibility. return 0; } =head3 DigestFields $erdb->DigestFields($relName, $fieldList); Prepare the fields of a relation for output to a load file. }; # Encode the field value in place. $fieldList->[$i] = $TypeTable->{$fieldType}->encode($fieldList->[$i], 1); } } =head3 EncodeField my $coding = $erdb->EncodeField($fieldName, $value); Convert the specified value to the proper format for storing in the specified database field. The field name should be specified in the standard I<object(field)> format, e.g. C<Feature(id)> for the C<id> field of the C<Feature> table. =over 4 =item fieldName Name of the field, specified in as an object name with the field name in parentheses. =item value Value to encode for placement in the field. 
=item RETURN Coded value ready to put in the database. In most cases, this will be identical to the original input. =back =cut sub EncodeField { # Get the parameters. my ($self, $fieldName, $value) = @_; # Find the field type. my $fieldSpec = $self->_FindField($fieldName); my $retVal = encode($fieldSpec->{type}, $value); # Return the result. return $retVal; } =head3 encode my $coding = ERDB::encode($type, $value); Encode a value of the specified type for storage in the database or for use as a query parameter. Encoding is automatic for all ERDB methods except when loading a table from a user-supplied load file or when processing the parameters for a query filter string. This method can be used in those situations to remedy the lack. =over 4 =item type Name of the incoming value's data type. =item value Value to encode into a string. =item RETURN Returns the encoded value. =back =cut sub encode { # Get the parameters. my ($type, $value) = @_; # Get the type definition. my $typeData = $TypeTable->{$type}; # Complain if it doesn't exist. Confess("Invalid data type \"$type\" specified in encoding.") if ! defined $typeData; # Encode the value. my $retVal = $typeData->encode($value); # Return the result. return $retVal; } =head3 DecodeField my $value = $erdb->DecodeField($fieldName, $coding); Convert the stored coding of the specified field to the proper format for use by the client program. This is essentially the inverse of L</EncodeField>. =over 4 =item fieldName Name of the field, specified as an object name with the field name in parentheses. =item coding Coded data from the database. =item RETURN Returns the original form of the coded data. =back =cut sub DecodeField { # Get the parameters. my ($self, $fieldName, $coding) = @_; # Declare the return variable. my $retVal = $coding; # Get the field type. 
my $fieldSpec = $self->_FindField($fieldName); my $type = $fieldSpec->{type}; Trace("Decoding field $fieldName of type $type.") if T(ERDBType => 3); # Process according to the type. $retVal = $TypeTable->{$type}->decode($coding); # Return the result. return $retVal; } =head3 DigestKey my $digested = ERDB::DigestKey($longString); Return the digested value of a string. The digested value is a fixed length (22 characters) MD5 checksum. It can be used as a more convenient version of a symbolic key. =over 4 =item longString String to digest. =item RETURN Digested value of the string. =back =cut sub DigestKey { # Allow object-based calls for backward compatability. shift if UNIVERSAL::isa($_[0], __PACKAGE__); # Get the parameters. my ( partial-indexed fields so we can append a length limitation # for them. To do that, we need the relation's field list. my $relFields = $relationData->{Fields}; for (my $i = 0; $i <= $#rawFields; $i++) { # Split the ordering suffix from the field name. my ($field, $suffix) = split(/\s+/, $rawFields[$i]); # Get the field type. my $type = $types{$field}; # Ask if it requires using prefix notation for the index. my $mod = $TypeTable->{$type}->indexMod(); if (! defined($mod)) { Confess("Non-indexable type $type specified for index field in $relationName."); } elsif ($mod) { # Here we have an indexed field that requires a modification in order # to work. This means we need to insert it between the # field name and the ordering suffix. The cool thing here # is that the join works even if $suffix is undefined. $rawFields[$i] = join(" ", "$field($mod)", $suffix); } } SetTestEnvironment $erdb->SetTestEnvironment(); Denote that this is a test environment. Certain performance-enhancing features may be disabled in a test environment. =cut sub SetTestEnvironment { # Get the parameters. my ($self) = @_; # Tell the database we're in test mode. 
$self->{_dbh}->test_mode(); } =head3 dbName my $dbName = $erdb->dbName(); Return the physical name of the database currently attached to this object. =cut sub dbName { # Get the parameters. my ($self) = @_; # We'll return the database name in here. my $retVal; # Get the connection string. my $connect = $self->{_dbh}->{_connect}; # Extract the database name. if ($connect =~ /dbname\=([^;]+)/) { $retVal = $1; } # Return the result. return $retVal; } =head2 Database Update Methods =head3 UpdateField my $count = $erdb->UpdateField($fieldName, $oldValue, $newValue, $filter, $parms); Update all occurrences of a specific field value to a new value. The number of rows changed will be returned. =over 4 =item fieldName Name of the field, in L</Standard Field Name Format>. =item filter Filter clause for the update. See L</Filter Clause>. The filter will be applied before any substitutions take place. Note that the filter clause in this case must only specify fields in the table containing fields. =item parms Reference to a list of parameter values in the filter. See L</Parameter List>. =item RETURN Returns the number of rows modified. =back =cut sub UpdateField { # Get the parameters. my ($self, $fieldName, $oldValue, $newValue, $filter, $parms) = @_; # Get the object and field names from the field name parameter. my ($objectName, $realFieldName) = ERDB::ParseFieldName($fieldName); $realFieldName = _FixName($realFieldName); # Add the old value to the filter. Note we allow the possibility that no # filter was specified. my $realFilter = "$fieldName = ?"; if ($filter) { $realFilter .= " AND $filter"; } # Format the query filter. my ($suffix) = in L</Standard Field Name Format>. This specifies the entity name and the field name in a single string. =item value New value to be put in the field. =back =cut sub InsertValue { # Get the parameters. my ($self, $entityID, $fieldName, $value) = @_; # Parse the entity name and the real field name. my ($entityName, $fieldTitle) = ERDB::ParseFieldName($fieldName); if (!
defined $entityName) { Confess("Invalid field name specification \"$fieldName\" in InsertValue call."); } else { # Insure we are in an entity.. my $codedValue = $self->EncodeField($fieldName, $value); $dbh->SQL($statement, 0, $entityID, $codedValue); } } } } } =head3 InsertObject $erdb->InsertObject($objectType, %fieldHash); Insert an object into the database. The object is defined by a type name and then a hash of field names to values. All field values should. The field names should be specified in L</Standard Field Name Format>. The default object name is the name of the object being inserted. The values will be encoded for storage by this method. Note that this can be an inline hash (for backward compatibility) or a hash reference. =back =cut sub InsertObject { # Get the parameters. my ($self, $newObjectType, $first, @leftOvers) = @_; # Denote that so far we appear successful. my $retVal = 1; # Create the field hash. my $fieldHash; if (ref $first eq 'HASH') { $fieldHash = $first; } else { $fieldHash = { $first, @leftOvers }; } # Get the database handle. my $dbh = $self->{_dbh}; # Parse the field hash. We need to strip off the table names and # convert underscores in field names to hyphens. We will also # encode the values. my %fixedHash = $self->_SingleTableHash($fieldHash, $newObjectType); # Get the relation descriptor. my $relationData = $self->FindRelation($newObjectType); # We'll need a list of the fields being inserted, a list of the corresponding # values, and a list of fields the user forgot to specify. my @fieldNameList = (); my @valueList = (); my @missing = (); # Loop through the fields in the relation. for my $fieldDescriptor (@{$relationData->{Fields}}) { # Get the field name and save it. Note we need to fix it up so the hyphens # are converted to underscores. my $fieldName = $fieldDescriptor->{name}; my $fixedName = _FixName($fieldName); # Look for the named field in the incoming structure. 
As a courtesy to the # caller, we accept both the real field name or the fixed-up one. if (exists $fixedHash{$fieldName}) { # Here we found the field. There is a special case for the ID that # we have to check for. if (! defined $fixedHash{$fieldName} && $fieldName eq 'id') { # This is the special case. The ID is going to be computed at # insert time, so we skip it. } else { # Normal case. Stash it in both lists. push @valueList, $fixedHash{$fieldName}; push @fieldNameList, $fixedName; Trace("Value for $fixedName is \"$fixedHash{$fieldName}\".") if T(SQL => 4); } } else { # Here the field is not present. Check for a default. my $default = $self->_Default($newObjectType, $fieldName); if (defined $default) { # Yes, we have a default. Push it into the two lists. push @valueList, $default; push @fieldNameList, $fixedName; Trace("Default value for $fixedName is \"$default\".") if T(SQL => 4); } else { # No, this field is officially missing. push @missing, $fieldName; } } } # Only proceed if there are no missing fields. if (@missing > 0) { Trace("Relation $newObjectType for $newObjectType skipped due to missing fields: " . join(' ', @missing)) if T(1); } else { # Build the INSERT statement. my $statement = "INSERT INTO $newObjectType (" .); # Execute the INSERT statement with the specified parameter list. $retVal = $sth->execute(@valueList); if (!$retVal) { my $errorString = $sth->errstr(); Confess("Error inserting into $newObjectType: $errorString"); } else { Trace("Insert successful for $newObjectType.") if T(3); } } # Return a 1 for backward compatibility. Hash mapping field names to their new values. All of the fields named must be in the entity's primary relation, and they cannot any of them be the ID field. Field names should be in the L</Standard Field Name Format>. The default object name in this case is the entity name. For backward compatability, this can also be a hash reference. =back =cut sub UpdateEntity { # Get the parameters. 
my ($self, $entityName, $id, $first, @leftovers) = @_; # Get the field hash. my $fields; if (ref $first eq 'HASH') { $fields = $first; } else { $fields = { $first, @leftovers }; } # Fix up the field name hash.) . " = ?"; my $value = $self->EncodeField("$entityName($field)", $fields->{$field}); push @valueList, $value; }. The idea here is to delete an entity and everything related to it. Because this is so dangerous, an option is provided to simply trace the resulting delete calls so you can verify the action before performing the delete. =over 4 =item entityName Name of the entity type for the instance being deleted. =item objectID ID of the entity instance to be deleted. =back =cut sub Delete { # Get the parameters. my ($self, $entityName, $objectID, %options) = @_; # Declare the return variable. my $retVal = Stats->new(); # Find out if we're in test mode. my $testMode = $options{testMode}; # Get the DBKernel object. my $db = $self->{_dbh}; # Memorize the filter clause and parameters for the GET calls below. my $filter = "$entityName(id) = ?"; my $parms = [$objectID]; #}) { # Yes, put it on the path list. push @fromPathList, [@stackedPath, $myEntityName]; } # look at it later. my @stackList = (@augmentedList, $toEntity); push @todoList, \@stackList; } else { Trace("$toEntity ignored because it occurred previously.") if T(4); } } } # Now check the TO field. In this case only the relationship needs # deletion. if ($relationship->{to} eq $myEntityName) { # Check to see if we're going back the way we came. my $fromEntity = $relationship->{from}; if ($fromEntity ne $myEntityName && ! grep { $_ eq $fromEntity } @stackedPath) { # We're not, so we stack this path. my @augmentedList = (@stackedPath, $myEntityName, $relationshipName); push @toPathList, \@augmentedList; } } } } #. for my $keyName ('to_link', 'from_link') { # Get the list for this key. my @pathList = @{$stackList{$keyName}}; Trace(scalar(@pathList) . 
" entries in path list for $keyName.") if T(3); # Loop through this list. while (my $path = pop @pathList) { # Get the path we're using to drive the delete. We delete records from the # last table in the list. my @pathTables = @{$path}; my $target = $pathTables[$#pathTables]; # Build the path for the query. We use the query to find the records to delete. my $pathString = join(" ", @pathTables); # How we proceed depends on whether this is an entity or a relationship. if ($self->IsEntity($target)) { # Here we're deleting entity instances. Get the IDs of all the instances # to delete. my @ids = $self->GetFlat($pathString, $filter, $parms, "$target(id)"); # Now we need a list of all the relations used by this entity. my $entityData = $self->FindEntity($target); for my $relation (keys %{$entityData->{Relations}}) { # Form a statement to delete the identified instances for this relation. my $stmt = "DELETE FROM $relation WHERE id = ?"; # Perform the delete for each identified instance. my $deleted = 0; for my $id (@ids) { if (! $testMode) { $deleted += $db->SQL($stmt, 0, $id); } } Trace("$deleted records deleted from $relation via path $pathString.") if T(3); $retVal->Add($relation, $deleted); } } else { # Here we're deleting relationship instances. We use from/to pairs to # identify these records. my @pairs = $self->GetAll($pathString, $filter, $parms, "$target(from-link) $target(to-link)"); # Form a statemen to delete the identified instances for this relationship. my $stmt = "DELETE FROM $target WHERE from_link = ? AND to_link = ?"; # Loop through the pairs, deleting. my $deleted = 0; for my $pair (@pairs) { if (! $testMode) { $deleted += $db->SQL($stmt, 0, @$pair); } } Trace("$deleted records deleted from $target via path $pathString.") if T(3); $retVal->Add($target, $deleted); } } } # Return the result. 
return $retVal; } =head3 Disconnect $erdb->Disconnect($relationshipName, $originEntityName, $originEntityID); Disconnect an entity instance from all the objects to which it is related via a specific relationship.) = @_; # Encode the entity ID. my $idParameter = $self->EncodeField("$originEntityName(id)", \"$idParameter\".") if T(3); # We do this delete in batches to keep it from dragging down the # server. my $limitClause = ($ERDBExtras::delete_limit ? "LIMIT $ERDBExtras::delete_limit" : ""); my $done = 0; while (! $done) { # Do the delete. my $rows = $dbh->SQL("DELETE FROM $relationshipName WHERE ${dir}_link = ? $limitClause", 0, $idParameter); #) { my ($keyTable, $keyName) = ERDB::ParseFieldName($key, $relationshipName); push @filters, _FixName($keyName) . " = ?"; push @parms, $self->EncodeField("$keyTable($keyName)", for the delete query. See L</Filter Clause>. =item parms Reference to a list of parameters for the filter clause. See L</Parameter List>. , in L</Standard Field Name Format>. ); # Now we need some data about this field., $self->EncodeField("$entityName(id)", $id); } # Check for a filter by value. if (defined $fieldValue) { push @filters, "$fieldName = ?"; push @parms, encode($field->{type}, $fieldValue); } # Append the filters to the command. if (@filters) { $sql .= " WHERE " . join(" AND ", @filters); } # Execute the command. my $dbh = $self->{_dbh}; $retVal = $dbh->SQL($sql, 0, @parms); } # L</Standard Field Name Format>. =back =cut(); Return the object to be used in creating load files for this database. This is only the default source object. Loaders have the option of overriding the chosen source object when constructing the L</ERDBLoadGroup> objects. =cut sub GetSourceObject { Confess("Pure virtual GetSourceObject called."); } =head3 SectionList my @sections = $erdb->SectionList(); Return a list of the names for the different data sections used when loading this database. 
The default is a single string, in which case there is only one section representing the entire database. =cut sub SectionList { # Get the parameters. my ($self) = @_; # Return the section list. return ("all"); } =head3 PreferredName my $name = $erdb->PreferredName(); Return the variable name to use for this database when generating code. The default is C<erdb>. =cut sub PreferredName { return 'erdb'; } =head3 Loader my $groupLoader = $erdb->Loader($groupName, $options); Return an L</ERDBLoadGroup> object for the specified load group. This method is used by L<ERDBGenerator.pl> to create the load group objects. If you are not using L<ERDBGenerator.pl>, you don't need to override this method. =over 4 =item groupName Name of the load group whose object is to be returned. The group name is guaranteed to be a single word with only the first letter capitalized. =item options Reference to a hash of command-line options. =item RETURN Returns an L</ERDBLoadGroup> object that can be used to process the specified load group for this database. =back =cut sub Loader { # Get the parameters. my ($self, $groupName, $options) = @_; } =head3 LoadGroupList my @groups = $erdb->LoadGroupList(); Returns a list of the names for this database's load groups. This method is used by L<ERDBGenerator.pl> when the user wishes to load all table groups. The default is a single group called 'All' that loads everything. =cut sub LoadGroupList { # Return the list. return qw(All); } =head3 LoadDirectory my $dirName = $erdb->LoadDirectory(); Return the name of the directory in which load files are kept. The default is the FIG temporary directory, which is a really bad choice, but it's always there. =cut sub LoadDirectory { # Get the parameters. my ($self) = @_; # Return the directory name. return $ERDBExtras::temp; } =head3 Cleanup $erdb->Cleanup(); Clean up data structures. This method is called at the end of each section when loading the database. 
The subclass can use it to free up memory that may have accumulated due to caching or accumulation of hash structures. The default method does nothing. =cut sub Cleanup { } =head3 UseInternalDBD my $flag = $erdb->UseInternalDBD(); Return TRUE if this database should be allowed to use an internal DBD. The internal DBD is stored in the C<_metadata> table, which is created when the database is loaded. The default is FALSE. =cut sub UseInternalDBD { return 0; } =head2 Internal Utility Methods =head3 _FieldString my $fieldString = $erdb->_FieldString($descriptor); Compute the definition string for a particular field from its descriptor in the relation table. =over 4 =item descriptor Field descriptor containing the field's name and type. =item RETURN Returns the SQL declaration string for the field. =back =cut sub _FieldString { # Get the parameters. my ($self, $descriptor) = @_; # Get the fixed-up name. my $fieldName = _FixName($descriptor->{name}); # Compute the SQL type. my $fieldType = $TypeTable->{$descriptor->{type}}->sqlType(); # Assemble the result. my $retVal = "$fieldName $fieldType NOT NULL"; # Return the result. return $retVal; } =head3 _Default my $defaultValue = $self->_Default($objectName, $fieldName); Return the default value for the specified field in the specified object. If no default value is specified, an undefined value will be returned. =over 4 =item objectName Name of the object containing the field. =item fieldName Name of the field whose default value is desired. =item RETURN Returns the default value for the specified field, or an undefined value if no default is available. =back =cut sub _Default { # Get the parameters. my ($self, $objectName, $fieldName) = @_; # Declare the return variable. my $retVal; # Get the field descriptor. my $fieldTable = $self->GetFieldTable($objectName); my $fieldData = $fieldTable->{$fieldName}; # Check for a default value. The default value is already encoded, # so no conversion is required. 
if (exists $fieldData->{default}) { $retVal = $fieldData->{default}; } else { # No default for the field, so get the default for the type. # This will be undefined if the type has no default, either. $retVal = TypeDefault($fieldData->{type}); } # Return the result. return $retVal; } =head3 _SingleTableHash my %fixedHash = $self->_SingleTableHash($fieldHash, $objectName); Convert a hash of field names in L</Standard Field Name Format> to field values into a hash of simple field names to encoded values. This is a common utility function performed by most update-related methods. =over 4 =item fieldHash A hash mapping field names to values. The field names must be in L</Standard Field Name Format>. =item objectName The default object name to be used when no object name is specified for the field. =item RETURN Returns a hash of simple field names to encoded values for those fields. =back =cut sub _SingleTableHash { # Get the parameters. my ($self, $fieldHash, $objectName) = @_; # Declare the return variable. my %retVal; # Loop through the fields. for my $key (keys %$fieldHash) { my $fieldData = $self->_FindField($key, $objectName); $retVal{$fieldData->{name}} = encode($fieldData->{type}, $fieldHash->{$key}); } # Return the result. return %retVal; } =head3 _FindField my $fieldData = $erdb->_FindField($string, $defaultName); Return the descriptor for the named field. If the field does not exist or the name is invalid, an error will occur. =over 4 =item string Field name string to be parsed. See L</Standard Field Name Format>. =item defaultName (optional) Default object name to be used if the object name is not specified in the input string. =item RETURN Returns the descriptor for the specified field. =back =cut sub _FindField { # Get the parameters. my ($self, $string, $defaultName) = @_; # Declare the return variable. my $retVal; # Parse the string. my ($tableName, $fieldName) = ERDB::ParseFieldName($string, $defaultName); if (! 
defined $tableName) { # Here the field name string has an invalid format. Confess("Invalid field name specification \"$string\"."); } else { # Find the structure for the specified object. $retVal = $self->_CheckField($tableName, $fieldName); if (! defined $retVal) { Confess("Field \"$fieldName\" not found in \"$tableName\"."); } } # Return the result. return $retVal; } =head3 _CheckField my $descriptor = $erdb->_CheckField($objectName, $fieldName); Return the descriptor for the specified field in the specified entity or relationship, or an undefined value if the field does not exist. =over 4 =item objectName Name of the relevant entity or relationship. If the object does not exist, an error will be thrown. =item fieldName Name of the relevant field. =item RETURN Returns the field descriptor from the metadata, or C<undef> if the field does not exist. =back =cut sub _CheckField { # Get the parameters. my ($self, $objectName, $fieldName) = @_; # Declare the return variable. my $retVal; # Find the structure for the specified object. This will fail # if the object name is invalid. my $objectData = $self->_GetStructure($objectName); # Look for the field. my $fields = $objectData->{Fields}; if (exists $fields->{$fieldName}) { # We found it, so return the descriptor. $retVal = $fields->{$fieldName}; } # Return the result. return $retVal; } =head3 _RelationMap my @relationMap = _RelationMap($mappedNameHashRef, $mappedNameListRef); Create the relation map for an SQL query. The relation map is used by L</ERDBObject> to determine how to interpret the results of the query. =over 4 =item mappedNameHashRef Reference to a hash that maps object name aliases to real object names. =item mappedNameListRef Reference to a list of object name aliases in the order they appear in the SELECT list. =item RETURN Returns a list of 3-tuples. Each tuple consists of an object name alias followed by the actual name of that object and a flag that is TRUE if the alias is a converse. 
This enables the L</ERDBObject> to determine the order of the tables in the query and which object name belongs to each object alias name. Most of the time the object name and the alias name are the same; however, if an object occurs multiple times in the object name list, the second and subsequent occurrences may be given a numeric suffix to indicate it's a different instance. In addition, some relationship names may be specified using their converse my ($suffix, $nameList, $nameHash) = $erdb->_SetupSQL($objectNames, $filterClause, $matchClause); Process a list of object names and a filter clause so that they can be used to build an SQL statement. This method takes in an object name list and a filter clause. It will return a corrected filter clause, a list of mapped names and the mapped name hash. This is an instance method. =over 4 =item objectNames Object name list from a query. See L</Object Name List>. =item filterClause A string containing the WHERE clause for the query (without the C<WHERE>) and also optionally the C<ORDER BY> and C<LIMIT> clauses. See L</Filter Clause>. 2-tuples consisting of the real name of the object and a flag indicating whether or not the mapping is via a converse relationship name. =back =cut sub _SetupSQL { my ($self, $objectNames, $filterClause, $matchClause) = @_; # This list will contain the object names as they are to appear in the # FROM list. my @fromList = (); # This list contains the object alias name for each object. my @mappedNameList = (); # This hash translates from an object alias name to the real object name. my %mappedNameHash = (); # This will be used to build the join clauses. my @joinWhere = (); # Finally, this variable contains the previous object encountered in the # name list. It is used to create the joins. An empty string means we # don't need a join yet. my $previousObject = ""; # Get pointers to the alias and join tables. my $aliasTable = $self->{_metaData}->{AliasTable}; # Get a list of the object names. 
my @objectNameList; if (ref $objectNames eq 'ARRAY') { push @objectNameList, @$objectNames; } else { # Here we need to convert a name string into a list. We start by # trimming excess whitespace at the front. my $objectNameString = $objectNames; $objectNameString =~ s/^\s+//; # Now we connect each AND to the object name after it. $objectNameString =~ s/\s+AND\s+(\w+)/ AND=$1/g; Trace("Object name string = $objectNameString") if T(4); # Split on whitespace to form the final list. @objectNameList = split /\s+/, $objectNameString; Trace("Objects are " . join(" ", @objectNameList)) if T(4); } # Loop through the object name list. for my $objectName (@objectNameList) { Trace("Object name is $objectName") if T(4); # Parse this object name. my $alias; if ($objectName =~ /AND=(.+)/) { # Here we have an AND situation. We blank the previous-object # indicator to insure we don't try to set up a join. $previousObject = ""; # Save the object name itself. $alias = $1; } else { # Here we need have a normal object name. $alias = $objectName; } # Have we seen this object name before? if (! exists $mappedNameHash{$alias}) { # No, so we need to compute its real name, put it in the # map hash, and add it to the FROM list. First, we strip # off any number suffix the caller supplied. if ($alias =~ /^(\D+)(\d*)$/) { my ($baseName, $suffix) = ($1, $2); # Does the base name exist in the database? my $realName = $aliasTable->{$baseName}; if (! defined $realName) { Confess("Invalid name in query: \"$baseName\"."); } else { # Yes. Put the real name in the map. $mappedNameHash{$alias} = [$realName, $baseName ne $realName]; # Put the alias and its real name into the FROM list. This # informs SQL of the mapping. my $tableSpec = $realName; if ($alias ne $realName) { $tableSpec .= " $alias"; } push @fromList, $tableSpec; # Add the alias to the mapped name list. push @mappedNameList, $alias; } } else { # Here the alias parse failed. 
Confess("Invalid name in query: \"$alias\"."); } } # Do we need a join here? if ($previousObject) { # Yes. Compute the join clause. my $joinClause = $self->_JoinClause($previousObject, $alias); if (! $joinClause) { Confess("There is no path from $previousObject to $alias."); } push @joinWhere, $joinClause; } # Save this object as the last object for the next iteration. $previousObject = $alias; } # Begin the SELECT suffix. It starts with # # FROM name1, name2, ... nameN # my $suffix = "FROM " . join(', ', @fromList); # Now for the WHERE. First, we need a place for the filter string. my $filterString = ""; # Check for a filter clause. if ($filterClause) { #; Trace("Sorted name list is " . join(", ", @sortedNames) . ".") if T(4); #, $converse) = @{}; # This will hold the mapped relation name to be used in the # filter clause. The default is the mapped name. my $mappedRelationName = $mappedName; # We may have a secondary relation. if ($relationName ne $objectName) { # This adds a bit of complexity, because we need to insure # the secondary relation is pulled in. First, we peel off # the suffix from the mapped name. my $mappingSuffix = substr $mappedName, length($objectName); # Put the mapping suffix onto the relation name to get the # mapped relation name. $mappedRelationName = "$relationName$mappingSuffix"; # Insure the relation is in the FROM clause. if (!exists $fromNames{$mappedRelationName}) { Trace("Working with $mappedRelationName.") if T(4); #; } } # Is this a converse mapping? Form an SQL field reference # from the relation name and the field name. my $sqlReference = "$mappedRelationName." . _FixName($fieldName, $converse); # Put it into the filter string in place of the old value. substr($filterString, $pos, $len) = $sqlReference; # Reposition the search. pos $filterString = $pos + length $sqlReference; } } } } #)(.+)/) { # Here we have an ORDER BY or LIMIT verb. Split it off of the filter string. $orderClause = $2 . 
$3; $filterString = $1; } } # All the things that are supposed to be in the WHERE clause of the # SELECT command need to be put into @joinWhere so we can string them # together. We begin with the match clause. It gets put at the end of # the join section so that the match clause's parameter mark precedes # my $sth = $erdb->_GetStatementHandle($command, $params); This method will prepare and execute an SQL query, returning the statement handle. The main reason for doing this here is so that everybody who does SQL queries gets the benefit of tracing. ); if (T(SQL => 4)) { if (! scalar(@$params)) { Trace("PARMS: none"); } else { Trace("PARMS: " . join(", ", map { "'$_'" } @$params)); } } # Get the database handle. my $dbh = $self->{_dbh}; # Prepare the command. my $retVal = $dbh->prepare_command($command); # Execute it with the parameters bound in. This may require multiple retries. my $rv = $retVal->execute(@$params); # The number of retries will be counted in here. my $retries = 0; while (! $rv) { # Get the error message. my $msg = $dbh->ErrorMessage($retVal); # Is a retry worthwhile? if ($retries >= $ERDBExtras::query_retries) { # No, we've tried too many times. Confess($msg); } elsif ($msg =~ /^DBServer Error/) { # Yes. Wait, then try reconnecting. Trace("SELECT error requires reconnection. $msg") if T(2); sleep($ERDBExtras::sleep_time); $dbh->Reconnect(); # Try executing the statement again. $retVal = $dbh->prepare_command($command); $rv = $retVal->execute(@$params); # Denote we've made another retry. $retries++; } else { # No. This error cannot be recovered by reconnecting. Confess($msg); } } # Return the statement handle. return $retVal; } =head3 _GetLoadStats my $stats = ERDB::_GetLoadStats(); Return a blank statistics object for use by the load methods. 
=cut sub _GetLoadStats{ return Stats->new(); } =head3 _DumpRelation my $count = $erdb->_DumpRelation($outputDirectory, $relationName); Dump the specified relation to the specified output file in tab-delimited format. =over 4 =item outputDirectory Directory to contain the output file. =item relationName Name of the relation to dump. =item RETURN Returns the number of records dumped. =back =cut sub _DumpRelation { # Get the parameters. my ($self, $outputDirectory, $relationName) = @_; # Declare the return variable. my $retVal = 0; #"; $retVal++; } # Close the output file. close DTXOUT; # Return the write count. return $retVal; } =head3 _GetStructure my $objectData = $self->_GetStructure($objectName); Get the data structure for a specified entity or relationship. my $relHash = $erdb->_GetRelationTable($objectName); Get the list of relations for a specified entity or relationship. $erdb->ValidateFieldNames($metadata); Determine whether or not the field names in the specified metadata structure are valid. If there is an error, this method will abort. my $stats = $erdb->_LoadRelation($directoryName, $relationName, $rebuild); Load a relation from the data in a tab-delimited disk file. The load will only take place if a disk file with the same name as the relation exists in the specified directory. ($self, $filename, $external); This method loads the data describing this database from an XML file into a metadata structure. The resulting structure is a set of nested hash tables containing all the information needed to load or use the database. The schema for the XML file is F<ERDatabase.xml>. =over 4 =item self Blessed ERDB object. =item filename Name of the file containing the database definition. =item external (optional) If TRUE, then the internal DBD stored in the database (if any) will be bypassed. This option is usually used by the load-related command-line utilities. =item RETURN Returns a structure describing the database. 
=back =cut sub _LoadMetaData { # Get the parameters. my ($self, $filename, $external) = @_; # Declare the return variable. my $metadata; # Check for an internal DBD. if (! $external && $self->UseInternalDBD()) { # Get the database handle. my $dbh = $self->{_dbh}; Trace("Checking for internal DBD.") if T(3); # Check for a metadata table. if ($dbh->table_exists(METADATA_TABLE)) { # Check for an internal DBD. my $rv = $dbh->SQL("SELECT data FROM " . METADATA_TABLE . " WHERE id = ?", 0, "DBD"); if ($rv && scalar @$rv > 0) { # Here we found something. The return value is a reference to a # list containing a 1-tuple. my $frozen = $rv->[0][0]; Trace(length($frozen) . " characters read from metadata record.") if T(3); ($metadata) = FreezeThaw::thaw($frozen); Trace("DBD loaded from database.") if T(2); } } } # If we didn't get an internal DBD, read the external one. if (! defined $metadata) { Trace("Reading DBD from $filename.") if T(2); # Slurp the XML file into a variable. Extensive use of options is used to # insure we get the exact structure we = (); # We also have a table for mapping alias names to object names. This is # useful when processing object name lists. my %aliasTable = (); # Loop through the entities. my $entityList = $metadata->{Entities}; for my $entityName (keys %{$entityList}) { my $entityStructure = $entityList->{$entityName}; # # The first step is to fill in all the entity's missing. # # Fix up this entity. _FixupFields($entityStructure, $entityName); # Add the ID field. _AddField($entityStructure, 'id', { type => $entityStructure->{keyType}, name => 'id', relation => $entityName, Notes => { content => "Unique identifier for this \[b\]$entityName\[/b\]." }, PrettySort => 0}); # Store the entity in the alias table. $aliasTable{$entityName} = $entityName; # #') { # Insure the field name is valid. my $fieldThing = $fieldList->{$fieldName}; if (! 
defined $fieldThing) { Confess("Invalid index: field $fieldName does not belong to $entityName."); } else { #, name => 'from-link', relation => $relationshipName, Notes => { content => $fromComment }, PrettySort => 0}); #, name => 'to-link', relation => $relationshipName, Notes => { content => $toComment }, PrettySort => 0}); # Create an index-free relation from the fields. my $thisRelation = { Fields => _ReOrderRelationTable($relationshipStructure->{Fields}), Indexes => { } }; $relationshipStructure->{Relations} = { $relationshipName => $thisRelation }; # Put the relationship in the alias table. $aliasTable{$relationshipName} = $relationshipName; if (exists $relationshipStructure->{converse}) { $aliasTable{$relationshipStructure->{converse}} = $relationshipName; } # Add the alternate indexes (if any). This MUST be done before the FROM # and TO indexes, because it erases the relation's index list. if (exists $relationshipStructure->{Indexes}) { _ProcessIndexes($relationshipStructure->{Indexes}, $thisRelation); } # Create the FROM and TO indexes. _CreateRelationshipIndex("From", $relationshipName, $relationshipStructure); _CreateRelationshipIndex("To", $relationshipName, $relationshipStructure); # Add the relation to the master table. $masterRelationTable{$relationshipName} = $thisRelation; } # Now store the master relation table and alias table in the metadata structure. $metadata->{RelationTable} = \%masterRelationTable; $metadata->{AliasTable} = \%aliasTable; } # Return the metadata structure. return $metadata; } =head3 _CreateRelationshipIndex ERDB::_CreateRelationshipIndex($indexKey, $relationshipName, $relationshipStructure); Create an index for a relationship's relation. ERDB::_AddIndex($indexName, $relationStructure); ERDB::_FixupFields($structure, $defaultRelationName); This method fixes the field list for the metadata of an entity or relationship. 
It will add the caller-specified relation name to fields that do not have a name and set the C<PrettySort> values. =over 4 =item structure Entity or relationship structure to be fixed up. =item defaultRelationName Default relation name to be added to the fields. =back =cut sub _FixupFields { # Get the parameters. my ($structure, $defaultRelationName) = @_; #(metadata => 4); my $fieldData = $fieldStructures->{$fieldName}; # Store the field name so we can find it when we're looking at a descriptor # without its key. $fieldData->{name} = $fieldName; # Get the field type. my $type = $fieldData->{type}; # Validate it. if (! exists $TypeTable->{$type}) { Confess("Field $fieldName of $defaultRelationName has unknown type \"$type\"."); } # Plug in a relation name if one} = $TypeTable->{$type}->prettySortValue(); } # If there are searchable fields, remember the fact. if (@textFields) { $structure->{searchFields} = \@textFields; } } } =head3 _FixName my $fixedName = ERDB::_FixName($fieldName, $converse); Fix the incoming field name so that it is a legal SQL column name. =over 4 =item fieldName Field name to fix. =item converse If TRUE, then "from" and "to" will be exchanged. =item RETURN Returns the fixed-up field name. =back =cut sub _FixName { # Get the parameter. my ($fieldName, $converse) = @_; # Replace its minus signs with underscores. $fieldName =~ s/-/_/g; # Check for from/to flipping. if ($converse) { if ($fieldName eq 'from_link') { $fieldName = 'to_link'; } elsif ($fieldName eq 'to_link') { $fieldName = 'from_link'; } } # Return the result. return $fieldName; } =head3 _FixNames my @fixedNames = ERDB::_FixNames(@fields); Fix all the field names in a list. This is essentially a batch call to L</_FixName>. =over 4 =item fields ERDB::_AddField($structure, $fieldName, $fieldData); Add a field to a field list. 
my \@fieldList = ERDB::_ReOrderRelationTable(\%relation = 0; my $flag = $erdb->_IsPrimary($relationName); Return TRUE if a specified relation is a primary relation, else FALSE. A relation is primary if it has the same name as an entity or relationship. _JoinClause my $joinClause = $erdb->_JoinClause($source, $target); Create a join clause that connects the source object to the target object. If we are crossing from an entity to a relationship, we key off the relationship's from-link. If we are crossing from a relationship to an entity, we key off of it's to-link. It is also possible to cross from relationship to relationship if the two have an entity in common. Finally, we must be aware of converse names for relationships, and for nonrecursive relationships we allow crossing via the wrong link. =over 4 =item source Name of the object from which we are starting. =item target Name of the object to which we are proceeding. =item RETURN Returns a string that may be used in an SQL WHERE in order to connect the two objects. If no connection is possible, an undefined value will be returned. =back =cut sub _JoinClause { # Get the parameters. my ($self, $source, $target) = @_; # Declare the return variable. If no join can be constructed, it will # remain undefined. my $retVal; # We need for both objects (1) an indication of whether it is an entity, a # relationship, or a converse relationship, and (2) its descriptor. my (@types, @descriptors); for my $object ($source, $target) { # Compute this object's real name. We trim off any ending number and # check the alias table. my $realName = $self->_Resolve($object); # If no alias table entry was found, it's an error. if (! defined $realName) { push @types, 'Error'; } else { # Is this an entity or a relationship? my $descriptor = $self->FindEntity($realName); if ($descriptor) { # Here it's an entity. push @types, 'Entity'; push @descriptors, $descriptor; } else { # Here it's a relationship. 
If the name doesn't match the # real name, it's a converse. $descriptor = $self->FindRelationship($realName); push @types, ($object =~ /$realName/ ? 'Relationship' : 'Converse'); push @descriptors, $descriptor; } } } # Now we check the types. Note that if one of the object names was in error, # the big IF below will not match anything and we'll return undef. my $type = join("/", @types); Trace("Join type for $source to $target is $type.") if T(Joins => 3); if ($type eq 'Entity/Relationship') { $retVal = $self->_BuildJoin(id => $source, $descriptors[0], from => $target, $descriptors[1]); } elsif ($type eq 'Entity/Converse') { $retVal = $self->_BuildJoin(id => $source, $descriptors[0], to => $target, $descriptors[1]); } elsif ($type eq 'Relationship/Entity') { $retVal = $self->_BuildJoin(id => $target, $descriptors[1], to => $source, $descriptors[0]); } elsif ($type eq 'Converse/Entity') { $retVal = $self->_BuildJoin(id => $target, $descriptors[1], from => $source, $descriptors[0]); } elsif ($type eq 'Relationship/Relationship') { $retVal = $self->_BuildJoin(to => $source, $descriptors[0], from => $target, $descriptors[1]); } elsif ($type eq 'Converse/Relationship') { $retVal = $self->_BuildJoin(from => $source, $descriptors[0], from => $target, $descriptors[1]); } elsif ($type eq 'Relationship/Converse') { $retVal = $self->_BuildJoin(to => $source, $descriptors[0], to => $target, $descriptors[1]); } elsif ($type eq 'Converse/Converse') { $retVal = $self->_BuildJoin(from => $source, $descriptors[0], to => $target, $descriptors[1]); } # Return the result. return $retVal; } =head3 _BuildJoin my $joinString = $erdb->_BuildJoin($fld1 => $source, $sourceData, $fld2 => $target, $targetData); Create a join string between the two objects. The second object must be a relationship; the first can be an entity or a relationship. 
The fields indicators specify the nature of the connection: C<id> for an entity connection, C<from> for the front of a relationship, and C<to> for the back of a relationship. The theory is that if everything is compatible, you just connect the indicated fields in the two objects. This may not be possible if the second relationship does not match the first object in the proper manner. If that is the case, attempts will be made to find a workable connection. =over 4 =item fld1 Join direction for the first object: C<id> if it's an entity, C<from> if it's a relationship and we're coming out the front, or C<to> if it's a relationship and we're coming out the end. =item source Name to use for the first object in constructing the field reference. =item sourceData Entity or relationship descriptor for the first object. =item fld2 Join direction for the second object: C<from> if it's a relationship and we're going in the front, or C<to> if it's a relationship and we're going in the end. =item target Name to use for the second object in constructing the field reference. =item targetData Relationship descriptor for the second object. =item RETURN Returns a string that can be used in an SQL WHERE clause to connect the two objects, or C<undef> if no connection is possible. =back =cut sub _BuildJoin { # Get the parameters. my ($self, $fld1, $source, $sourceData, $fld2, $target, $targetData) = @_; Trace("BuildJoin called for $fld1 => $source against $fld2 => $target,") if T(Joins => 4); # Declare the return variable. If we can do this join, we'll put # the string in here. my $retVal; # Are we starting from an entity? if ($fld1 eq 'id') { # Compute the real entity name. my $realName = $self->_Resolve($source); # Try to find a direction in which the entity connects. for my $dir ($fld2, $FromTo{$fld2}) { last if defined $retVal; # Check this direction. 
Trace("Join check: $dir of $targetData->{$dir} eq $realName.") if T(Joins => 4); if ($targetData->{$dir} eq $realName) { # Yes, we can connect. $retVal = "$source.id = $target.${dir}_link"; } } } else { # Here we have two relationships. We need to try all four # combinations, stopping at the first match. for my $srcDir ($fld1, $FromTo{$fld1}) { last if defined $retVal; for my $tgtDir ($fld2, $FromTo{$fld2}) { last if defined $retVal; # Check this pair of directions. Trace("Join check: $srcDir to $tgtDir of $sourceData->{$srcDir} eq $targetData->{$tgtDir}.") if T(Joins => 4); if ($sourceData->{$srcDir} eq $targetData->{$tgtDir}) { # We can connect. $retVal = "$source.${srcDir}_link = $target.${tgtDir}_link"; } } } } # Return the result. return $retVal; } =head3 _Resolve my $realName = $erdb->_Resolve($objectName); Determine the real object name for a name from an object name list. Trailing numbers are peeled off, and the alias table is checked. If the incoming name is invalid, the return value will be undefined. =over 4 =item objectName Incoming object name to parse. =item RETURN Returns the object's real name, or C<undef> if the name is invalid. =back =cut sub _Resolve { # Get the parameters. my ($self, $objectName) = @_; # Declare the return variable. my $retVal; # Parse off any numbers at the end. The pattern below will always match # a valid name. if ($objectName =~ /^(\D+)(\d*)$/) { # Check the alias table. Real names map to themselves, and converse # names map to the real name. $retVal = $self->{_metaData}->{AliasTable}->{$1}; } # Return the result. return $retVal; } =head3 InternalizeDBD $erdb->InternalizeDBD(); Save the DBD metadata into the database so that it can be retrieved in the future. =over 4 =item fileName Name of the file containing the DBD. =back =cut sub InternalizeDBD { # Get the parameters. my ($self) = @_; # Only proceed if an internal DBD is supported. if ($self->UseInternalDBD()) { # Get the database handle. 
my $dbh = $self->{_dbh}; # Insure we have a metadata table. if (! $dbh->table_exists(METADATA_TABLE)) { Trace("Creating metadata table.") if T(3); $dbh->create_table(tbl => METADATA_TABLE, flds => 'id VARCHAR(20) NOT NULL PRIMARY KEY, data MEDIUMTEXT'); } # Delete the current DBD record. $dbh->SQL("DELETE FROM " . METADATA_TABLE . " WHERE id = ?", 0, 'DBD'); # Freeze the DBD metadata. my $frozen = FreezeThaw::freeze($self->{_metaData}); # Store it in the database. Trace("Storing DBD in metadata table.") if T(3); $dbh->SQL("INSERT INTO " . METADATA_TABLE . " (id, data) VALUES (?, ?)", 0, 'DBD', $frozen); } } =head2 Internal Documentation-Related Methods =head3 _FindObject my $objectData = $erdb->_FindObject($list => $name); Return the structural descriptor of the specified object (entity, relationship, or shape), or an undefined value if the object does not exist. =over 4 =item list Name of the list containing the desired type of object (C<Entities>, C<Relationships>, or C<Shapes>). =item name Name of the desired object. =item RETURN Returns the object descriptor if found, or C<undef> if the object does not exist or is not of the proper type. =back =cut sub _FindObject { # Get the parameters. my ($self, $list, $name) = @_; # Declare the return variable. my $retVal; # If the object exists, return its descriptor. my $thingHash = $self->{_metaData}->{$list}; if (exists $thingHash->{$name}) { $retVal = $thingHash->{$name}; } # Return the result. return $retVal; } =head3 _WikiNote my $wikiText = ERDB::_WikiNote($dataString, $wiki); Convert a note or comment to Wiki text by replacing some bulletin-board codes with HTML. The codes supported are C<[b]> for B<bold>, C<[i]> for I<italics>, C<[link]> for links, C<[list]> for bullet lists. and C<[p]> for a new paragraph. All the codes are closed by slash-codes. So, for example, C<[b]Feature[/b]> displays the string C<Feature> in boldface. =over 4 =item dataString String to convert to Wiki text. 
=item wiki Wiki object used to format the text. =item RETURN An Wiki text string derived from the input string. =back =cut sub _WikiNote { # Get the parameter. my ($dataString, $wiki) = @_; # HTML-escape the text. my $retVal = CGI::escapeHTML($dataString); # Substitute the italic code. $retVal =~ s#\[i\](.+?)\[/i\]#$wiki->Italic($1)#sge; # Substitute the bold code. $retVal =~ s#\[b\](.+?)\[/b\]#$wiki->Bold($1)#sge; # Substitute for the paragraph breaks. $retVal =~ s#\[p\](.+?)\[/p\]#$wiki->Para($1)#sge; # Now we do the links, which are complicated by the need to know two # things: the target URL and the text. $retVal =~ s#\[link\s+([^\]]+)\]([^\[]+)\[/link\]#$wiki->LinkMarkup($1, $2)#sge; # Finally, we have bullet lists. $retVal =~ s#\[list\](.+?)\[/list\]#$wiki->List(split /\[\*\]/, $1)#sge; Trace("Wiki Note is\n$retVal") if T(Wiki => 3); # Return the result. return $retVal; } =head3 _ComputeRelationshipSentence my $text = ERDB::_ComputeRelationshipSentence($wiki, $relationshipName, $relationshipStructure, $dir); The relationship sentence consists of the relationship name between the names of the two related entities and an arity indicator. =over 4 =item wiki L<WikiTools> object for rendering links. If this parameter is undefined, no link will be put in place. =item relationshipName Name of the relationship. =item relationshipStructure Relationship structure containing the relationship's description and properties. =item dir (optional) Starting point of the relationship: C<from> (default) or C<to>. =item RETURN Returns a string containing the entity names on either side of the relationship name and an indicator of the arity. =back =cut sub _ComputeRelationshipSentence { # Get the parameters. my ($wiki, $relationshipName, $relationshipStructure, $dir) = @_; # This will contain the first, second, and third pieces of the sentence. my @relWords; # Process according to the direction. if (! $dir || $dir eq 'from') { # Here we're going forward. 
@relWords = ($relationshipStructure->{from}, $relationshipName, $relationshipStructure->{to}); } else { # Here we're going backward. Compute the relationship name, using # converse if one is available. my $relName; if (exists $relationshipStructure->{converse}) { $relName = $relationshipStructure->{converse}; } else { $relName = "($relationshipName)"; } @relWords = ($relationshipStructure->{to}, $relName, $relationshipStructure->{from}); } # Now we need to set up the link. This is only necessary if the wiki object # is defined. if (defined $wiki) { $relWords[1] = $wiki->LinkMarkup("#$relationshipName", $relWords[1]); } # Compute the arity. my $arityCode = $relationshipStructure->{arity}; push @relWords, "($ArityTable{$arityCode})"; # Form the sentence. my $retVal = join(" ", @relWords) . "."; return $retVal; } =head3 _WikiObjectTable my $tableMarkup = _WikiObjectTable($name, $fieldStructure, $wiki); Generate the field table for the named entity or relationship. =over 4 =item name Name of the object whose field table is being generated. =item fieldStructure Field structure for the object. This is a hash mapping field names to field data. =item wiki L<WikiTools> object (or equivalent) for rendering HTML. =item RETURN Returns the markup for a table of field information. =back =cut sub _WikiObjectTable { # Get the parameters. my ($name, $fieldStructure, $wiki) = @_; # Compute the table header row and data rows. my ($header, $rows) = ComputeFieldTable($wiki, $name, $fieldStructure); # Convert it to a table. my $retVal = $wiki->Table($header, @$rows); # Return the result. return $retVal; } 1; | http://biocvs.mcs.anl.gov/viewcvs.cgi/Sprout/ERDB.pm?hideattic=0&revision=1.133&view=markup&pathrev=mgrast_release_3_0_2 | CC-MAIN-2020-10 | refinedweb | 17,537 | 59.3 |
I have 2 jars lets call them a.jar and b.jar.
b.jar depends on a.jar.
in a.jar, I defined a class, lets call it StaticClass, in the StaticClass, I defined a static block, calling a method named "init" :
public class StaticClass {
static {
init();
}
public void static init () {
// do some initialization here
}
}
Yes, you are right. Static initialization blocks are run when the JVM (class loader - to be specific) loads
StaticClass (which occurs the first time it is referenced in code).
You could force this method to be invoked by explicitly calling
StaticClass.init() which is preferable to relying on the JVM.
You could also try using
Class.forName(String) to force the JVM to load the class and invoke its static blocks. | https://codedump.io/share/lTSd8pN449uB/1/when-is-the-static-block-of-a-class-executed | CC-MAIN-2017-13 | refinedweb | 126 | 75.1 |
C ++ Wrapper for all Real-Time Operating Systems for CortexM4 (Part 2)
Check out this second installment on how to refine tasks in the C++ wrapper for real-time operating systems on the CortexM4. Click here for more!
Join the DZone community and get the full member experience.Join For Free
In our last installment, we looked at everything we needed to get started with real-time operating systems. This post will work on refining that project and building on some of the code we already implemented. Let's get into it!
Continuing to Refine the Task
The task now has almost everything you need. We need to add the method
Sleep (). This method suspends the execution of the task at a specified time. In most cases, this is enough, but if you need a clearly deterministic time, then
Sleep () can bring you problems. For example, you want to do some calculation and blink the LED and do it exactly every 100 ms.
void MyTask::Execute() { while(true) { DoCalculation(); //It takes about 10ms Led1.Toggle() ; Sleep(100ms) ; } }
This code will blink the LED once every 110 ms. But, you want to fold once in 100ms. You can roughly calculate the calculation time and put
Sleep (90ms) . But, if the calculation time depends on the input parameters, then the blinking will not be deterministic at all. For such cases, there are special methods in "all" operating systems, such as
DelayUntil (). It works by this principle. First, you need to remember the current value of the operating system tick counter. Then, to add to this value and the number of ticks that you want to pause the task, as soon as the tick counter reaches this value, the task is unlocked. Thus, the task will be locked exactly to the value that you set, and your LED will blink exactly every 100ms — regardless of the duration of the calculation.
This mechanism is implemented differently in different operating systems, but it has one algorithm. As a result, the mechanism, say, implemented on FreeRTOS, will be simplified to the state shown in the following picture:
As you can see, the readout of the initial state of the operating system counter tickers occurs before entering the infinite loop, and we need to figure something out to implement it. Help comes from the template design,
Template method. It is very easy to implement; we just need to add another non-virtual method, where we first call the method that reads and stores the operating system tick counter and then calls the virtual
Execute () method that will be implemented in the child, i.e. in your implementation of the task. Since we do not need this method to stick out for the user (it's just a helper), then we'll hide it in the private section.
class Thread { public: virtual void Execute() = 0 ; friend class Rtos ; private: void Run() { lastWakeTime = wGetTicks() ; Execute(); } ... tTime lastWakeTime = 0ms ; ... }
Accordingly, in the static
Run method of the RTOS class, you will now need to call the
Execute () method, but the
Run () method of the Thread object. We just made the RTOS class friendly to access the private
Run () method in the Thread class.
static void Run(void *pContext ) { static_cast<Thread*>(pContext)->Run() ; }
The only restriction for the
SleepUntil () method is that it cannot be used in conjunction with other methods that block the task. Alternatively, to solve the problem of working in conjunction with other methods blocking the task, you can dub the method of updating the memorized ticks of the system and call it before
SleepUntil (). But, for now, just keep this nuance in mind. The extreme version of the classes appear in the following picture:
/******************************************************************************* * Filename : thread.hpp * * Details : Base class for any Taskis which contains the pure virtual * method Execute(). Any active classes which will have a method for running as * a task of RTOS should inherit the Thread and override the Execute() method. * For example: * class MyTask : public OsWrapper::Thread * { * public: * virtual void Execute() override { * while(true) { * //do something.. * } * } ; * * Author : Sergey Kolody *******************************************************************************/ #ifndef __THREAD_HPP #define __THREAD_HPP #include "FreeRtos/rtosdefs.hpp" #include "../../Common/susudefs.hpp" namespace OsWrapper { extern void wSleep(const tTime) ; extern void wSleepUntil(tTime &, const tTime) ; extern tTime wGetTicks() ; extern void wSignal(tTaskHandle const &, const tTaskEventMask) ; extern tTaskEventMask wWaitForSignal(const tTaskEventMask, tTime) ; constexpr tTaskEventMask defaultTaskMaskBits = 0b010101010 ; enum class ThreadPriority { clear = 0, lowest = 10, belowNormal = 20, normal = 30, aboveNormal = 80, highest = 90, priorityMax = 255 } ; enum class StackDepth: tU16 { minimal = 128U, medium = 256U, big = 512U, biggest = 1024U }; class Thread { public: virtual void Execute() = 0 ; inline tTaskHandle GetTaskHanlde() const { return handle; } static void Sleep(const tTime timeOut = 1000ms) { wSleep(timeOut) ; }; void SleepUntil(const tTime timeOut = 1000ms) { wSleepUntil(lastWakeTime, timeOut); }; inline void Signal(const tTaskEventMask mask = defaultTaskMaskBits) { wSignal(handle, mask); }; inline tTaskEventMask WaitForSignal(tTime timeOut = 1000ms, const tTaskEventMask mask = defaultTaskMaskBits) { return wWaitForSignal(mask, timeOut) ; } friend void wCreateThread(Thread &, const char *, ThreadPriority, const tU16, tStack *); friend class Rtos ; private: tTaskHandle handle ; tTaskContext context ; tTime lastWakeTime = 0ms ; void Run() { lastWakeTime = 
wGetTicks() ; Execute(); } } ; } ; #endif // __THREAD_HPP
/******************************************************************************* * Filename : Rtos.hpp * * Details : Rtos class is used to create tasks, work with special Rtos * functions and also it contains a special static method Run. In this method * the pointer on Thread should be pass. This method is input point as * the task of Rtos. In the body of the method, the method of concrete Thread * will run. *******************************************************************************/ #ifndef __RTOS_HPP #define __RTOS_HPP #include "thread.hpp" // for Thread #include "../../Common/susudefs.hpp" #include "FreeRtos/rtosdefs.hpp" namespace OsWrapper { extern void wCreateThread(Thread &, const char *, ThreadPriority, const tU16, tStack *) ; extern void wStart() ; extern void wHandleSvcInterrupt() ; extern void wHandleSvInterrupt() ; extern void wHandleSysTickInterrupt() ; extern void wEnterCriticalSection(); extern void wLeaveCriticalSection(); class Rtos { public: static void CreateThread(Thread &thread , tStack * pStack = nullptr, const char * pName = nullptr, ThreadPriority prior = ThreadPriority::normal, const tU16 stackDepth = static_cast<tU16>(StackDepth::minimal)) ; static void Start() ; static void HandleSvcInterrupt() ; static void HandleSvInterrupt() ; static void HandleSysTickInterrupt() ; friend void wCreateThread(Thread &, const char *, ThreadPriority, const tU16, tStack *); friend class Thread ; private: //cstat !MISRAC++2008-7-1-2 To prevent reinterpet_cast in the CreateTask static void Run(void *pContext ) { static_cast<Thread*>(pContext)->Run() ; } } ; } ; #endif // __RTOS_HPP
Developments
So, once the task is created, it can be sent to an event. But, you want to implement an event that cannot be sent to a specific task. But, to any subscriber who decides to wait for this event, roughly speaking, we need to implement a wrapper over the
Event.
In general, the mechanism of events assumes very many options. You can send the event setting bits, and some tasks can wait for the installation of one bit, while others can install others. You can expect all of them at once. However, you cannot clear bits after receiving an event or options, but in my work, it is necessary to send and receive the event and discard all the bits. However, we still need to offer a simple interface to support additional functionality. The structure of the event is similar to the tasks. They also have a certain context that needs to be stored and the identifier. Also, I wanted the event to be able to adjust the waiting time and the mask, so I added two additional private fields.
You can use it like this:
OsWrapper :: Event event {10000ms, 3}; // create an event, wait for the event 10000ms, set bits number 0 and bit number 1. void SomeTask :: Execute () { while (true) { using OsWrapper :: operator "" ms; Sleep (1000ms); event.Signal (); // Send the event with bit 0 and bit 1 set. Sleep (1000ms); event.SetMaskBits (4) // Now set bit 2 only. event.Signal (); // Send the event with bit 2 set. } };}; void AnotherTask :: Execute () { while (true) { using namespace :: OsWrapper; // We check that the event did not work according to the timeout, the timeout if that is 10000ms if ((event.Wait () & defaultTaskMaskBits)! = 0) { GPIOC-> ODR ^ = (1 << 5); } } };};
Mutex, Semaphores, and Queues
And, I have not implemented them yet, or rather, the mutexes have already been done. But, I have not checked. The queues are waiting for their turn. I hope to finish it in the near future.
How Can We All Use This?
The basis is made to understand how all this can be used. I bring a small piece of code that does the following — the
LedTask task blinks once in exactly two seconds with the LED, and every two seconds, it sends a signal to the task
myTask, which waits 10 seconds for the event. As soon as the event has come, she blinks another LED. In general, as a result, two LEDs blink once every two seconds. I did not directly notify the task, but I did it via an event. Unfortunately, it is not a clever solution to blink two LEDs.
using OsWrapper::operator""ms ; OsWrapper::Event event{10000ms, 1}; class MyTask : public OsWrapper::Thread { public: virtual void Execute() override { while(true) { if (event.Wait() != 0) { GPIOC->ODR ^= (1 << 9); } } } using tMyTaskStack = std::array<OsWrapper::tStack, static_cast<tU16>(OsWrapper::StackDepth::minimal)> ; inline static tMyTaskStack Stack; //C++17 фишка в IAR 8.30 } ; class LedTask : public OsWrapper::Thread { public: virtual void Execute() override { while(true) { GPIOC->ODR ^= (1 << 5) ; using OsWrapper::operator""ms ; SleepUntil(2000ms); event.Signal() ; } } using tLedStack = std::array<OsWrapper::tStack, static_cast<tU16>(OsWrapper::StackDepth::minimal)> ; inline static tLedStack Stack; //C++17 фишка в IAR 8.30 } ; MyTask myTask; LedTask ledTask; int main() { using namespace OsWrapper ; Rtos::CreateThread(myTask, MyTask::Stack.data(), "myTask", ThreadPriority::lowest, MyTask::Stack.size()) ; Rtos::CreateThread(ledTask, LedTask::Stack.data()) ; Rtos::Start(); return 0; }
Conclusion
I'll venture to give my subjective view of the future of firmware for microcontrollers. I believe that the time will come for C ++ where there will be more and more operating systems providing the C ++ interface. Manufacturers already need to rewrite or wrap everything in C ++.mFrom this point of view, I would recommend using an RTOS. For example, the above-mentioned MAX RTOS can save you so much time — you can not even imagine— and there are still such unique chips, for example, running on different microcontrollers. If it had a security certificate, it would be better to find a different solution.
But, in the meantime, most of us use traditional Sisnye OSes. You can use the wrapper as an initial start to your transition to a happy future with C ++ :)
I assembled a small test project in Clion. I had to tinker with its settings; it is still not entirely intended for developing software for microcontrollers and is almost not friendly with the IAR toolchain. But still, it turned out to compile, link to elf format, converts to hex format, flash, and start debugging with GDB. And, it was worth it — it is just an excellent environment and corrects mistakes on the fly. If you need to change the signature of the method, then refactoring will occur in two seconds. In general, you don't even need to think — it will say where it should be and is best for making or naming the parameter. I even got the impression that the wrapper was written by Clion herself. In general, it contains all the bugs associated with the IAR toolchain that you can take.
But, in the old-fashioned project for IAR, I still created the version 8.30.1. I also checked out how it all works using the following equipment: XNUCLEO-F411RE, ST-Link debugger. And, yet, once again, look at how debugging looks in Clion — well, it's pretty, but so far it's buggy:
You can take the IAR project here: IAR project 8.30.1. While this is an incomplete version, without queues and semaphores, I will provide a more complete version in GitHub whenever I can. But, I think that this one can already be used for small projects in conjunction with FreeRtos. Happy coding!
Opinions expressed by DZone contributors are their own. | https://dzone.com/articles/c-wrapper-for-all-real-time-operating-systems-for | CC-MAIN-2021-31 | refinedweb | 1,967 | 53.61 |
2014-08-29 07:36 AM
Hi community,
I was trying to use the Python API to query a list of files on a specific volume on a specific vserver, so far i have been successfully able to do that using the powershell command:
Read-NcDirectory /vol/volume0 -VserverContext nfs_test | where {$_.Type -match "directory" -and $_.Name -notmatch "\."} | Read-NcDirectory
However, when I try to accomplish a similar task using the python API:
import sys
sys.path.append("C:\sdk\lib\python\NetApp")
from NaServer import *
s = NaServer("10.0.0.1", 1 , 20)
s.set_server_type("FILER")
s.set_transport_type("HTTPS")
s.set_port(443)
s.set_style("LOGIN")
s.set_admin_user("admin", "password")
api = NaElement("file-list-directory-iter")
api.child_add_string("encoded",True)
api.child_add_string("path","/vol/volume0")
xo = s.invoke_elem(api)
if (xo.results_status() == "failed") :
print ("Error:\n")
print (xo.sprintf())
sys.exit (1)
print "Received:\n"
print xo.sprintf()
I get the following output:
Error:
<results status="failed" errno="13005" reason="Unable to find API: file-list-directory-iter-start"></results>
Has anyone been able to accomplish this? My main task is to verify all .vmdk and .qcow files inside a specific volume and check its QoS policy, if there's none I must be able to add one.
Any tips? Any hints? Anything?
Solved! SEE THE SOLUTION
2014-08-30 12:12 AM
The iter-start APIs calls aren't there any more, just file-list-directory-iter.
See the element next-tag.
I use the ZExplore utility to find clues on these issues.
I hope this response has been helpful to you.
At your service,
Eugene E. Kashpureff, Sr.
Independent NetApp Consultant, K&H Research
Senior NetApp Instructor, IT Learning Solutions
(P.S. I appreciate points for helpful or correct answers.)
2014-09-02 04:41 AM
I also tried the file-list-directory-iter, but I get the same error:
Error:
<results status="failed" errno="13005" reason="Unable to find API: file-list-directory-iter"></results>
I must also add that I'm with a clustered mode infrastructure.
2014-09-02 09:17 AM
The API is indeed called file-list-directory-iter. Note that it is a Vserver-specific API, so you need to either:
-Ben | http://community.netapp.com/t5/Software-Development-Kit-SDK-and-API-Discussions/Python-List-files-on-vserver/td-p/21664 | CC-MAIN-2017-30 | refinedweb | 370 | 50.63 |
// Enter);
I got an error when I tried to declare it at the top of the header file
#ifndef Eth_h#define Eth_h#include <SPI.h>#include <Ethernet.h>byte mac [5];IPAddress ip;class Eth{ public:void init();};#endif
error: 'IPAddress' does not name a type
here is what I attempted:
IMHO you should not declare variables in .h files.
What does your sketch look like? Are you including Ethernet.h in the sketch?
QuoteIMHO you should not declare variables in .h files.Why not? It is standard practice, recommended everywhere else.What about class definitions?
#ifndef EEPROM_h#define EEPROM_h#include <inttypes.h>class EEPROMClass{ public: uint8_t read(int); void write(int, uint8_t);};extern EEPROMClass EEPROM; // <<< reference to a variable declared elsewhere#endif
#include <avr/eeprom.h>#include "Arduino.h"#include "EEPROM.h"uint8_t EEPROMClass::read(int address){ return eeprom_read_byte((unsigned char *) address);}void EEPROMClass::write(int address, uint8_t value){ eeprom_write_byte((unsigned char *) address, value);}EEPROMClass EEPROM; // <<< variable declaration is in .cpp file
Uh ? Are you saying class definitions and variable declarations are the same thing ?
QuoteUh ? Are you saying class definitions and variable declarations are the same thing ?No, I never said that. I'm asking, if you don't like declaring variables in the header file, how you feel about defining classes in header files.
I'm trying to determine whether to take you seriously.
What does your sketch look like? Are you including Ethernet.h in the sketch?No.... And that was the problem! Thanks
It looks like it is there in your code, in your second post on this thread.
I will ignore personal attacks.
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=144498.msg1086709 | CC-MAIN-2016-18 | refinedweb | 304 | 61.22 |
The acronyms they keep a-growin'. I started out with AME (ASDT, MTASC and Eclipse), got the Flashout plugin working and entered the world of FAME before getting my Borg on and assimilating _root. I knew I was well on my way to open-source Flash development but there was still a piece of the puzzle missing: How do I create my initial SWF, its library, resources, etc.
Enter swfmill by Daniel Fischer. Swfmill is a wonderful little command line tool that allows you to go from XML to SWF (using a dialect called swfml) and vice-versa. The path to totally open-source Flash development thus makes our acronym FAMES (which I'm sure Jesse would pronounce "famous"!)
Here are the steps to recreate the FAMES swf you see above:
1. Create a new ActionScript project in ASDT (Eclipse)
2. Create a new XML file called application.xml. This is the swfml file that Swfmill will compile to create the skeleton swf that contains your library.
3. Add the following code to the swfml file:
- <?xml version="1.0" encoding="iso-8859-1"?>
- <movie width="320" height="240" framerate="30">
- <background color="#ffffff"/>
-
- <frame>
- <library>
- <clip id="spheres" import="library/spheres.png"/>
- </library>
- </frame>
- </movie>
4. Create a new AS File to use as the main application class and call it Application.as. Here is the code for Application.as:
- class Application extends MovieClip
- {
- var tfCaption:TextField;
-
- // Clips attached dynamically from Swfmill library
- var mcSpheres:MovieClip;
-
- var sW:Number = null; // Stage width
- var sH:Number = null; // Stage height
-
- private function Application ( target )
- {
- // Link movie clips to classes
- Object.registerClass ( "spheres", Particle );
-
- // Assimilate the target
- target.__proto__ = this.__proto__;
- target.__constructor__ = Application;
- this = target;
-
- Flashout.log ("Application initialized: " + this );
-
- // Store stage dimensions for easy look-up
- sW = Stage.width;
- sH = Stage.height;
-
- // Draw border around the stage
- lineStyle ( 1, 0x000000 );
- moveTo ( 0, 0 );
- lineTo ( sW, 0 );
- lineTo ( sW, sH );
- lineTo ( 0, sH );
- lineTo ( 0, 0 );
-
- //
- // Create a caption
- //
- var captionTextFormat = new TextFormat();
- captionTextFormat.size = 12;
- captionTextFormat.font = "_sans";
-
- var captionText:String = "Made with FAMES (FAME + Swfmill)";
-
- ("spheres", "mcSphere", 1000 + i );
- }
- }
-
- static function main ()
- {
- // Create an Application instance and
- // have is assimilate _root.
- var test:Application = new Application( _root );
- }
- }
The important new bit of functionality here concerns linking the "spheres" clip to the Particle class (achieved with the Object.registerClass statement in the constructor) and attaching instances of it on stage.
5. Here is the code for the Particle class:
- class Particle extends MovieClip
- {
- var vX:Number = null;
- var vY:Number = null;
- var randomness:Number = null;
-
- function Particle ()
- {
- Flashout.log ( "Particle created: " + this );
-
- ;
- }
- }
- }
Before compiling the project, you need to use Swfmill to create the application.swf. To do this, open up a command prompt and enter the following command from your project folder (make sure you add swfmill.exe to your path first):
swfmill simple application.xml application.swf
6. Finally, you need to configure Flashout. To do this, browse to the Application class to set it as the "Root (main) class" and browse to the application.swf file created by Swfmill to set it for the "Path to swf" option. Hit "compile" and you should see the SWF run!
FAMES opens up a whole new world of cool possibilities.
Download the Swfmill particles example (12k)
Dammit, I wish I knew Java. Spike was kind enough to give me a plethora of links to Eclipse plugin development, but it is a bit overwhelming. Someone needs to write a panel around that functionality so we can have a library replacement, more like evolution, as a GUI.
Don’t get me wrong, as a Flash Developer, I’ve done my fair share of hand writing XML, but this missing component needs to be made easier for the developer.
Either way, now all we’re missing is declaritive layout via XML with an ability to compile that down to initialization ActionScript.
Yeah Jesse — I’ve actually checked out ASDT and built ASDT (Eclipse makes plugin development really easy with its own tools) and I’m playing with it a bit.
A panel for Swfmill would be really cool indeed.
The Swfmill XML is very low level and mirrors the SWF structure directly (try going swf2xml on the application.swf that you create with the “simple” tag and you’ll see the actual — non-simple — XML.)
Something like Ted’s FLOW — or a compile-time version like Flex — would be a great addition, no doubt.
Could you please tell me what parameters you use with this example in mtasc? I don’t use Flashout so I don’t know and I want to compile this example on console.
I cannot wait unti I can dump windows all together – can I use AMES to develop swf apps and replace Flash MX?
Hi Anand,
You can use AMES to develop SWF applications or you can use it alongside the Flash IDE.
Hey,
i try your example but i have a problem.
I can`t compile, it says:
C:/MyProjects/second/Application.as:14: characters 36-44 : type error Unknown class Particle
But the class particle is in the right folder.
Any idears?
Thank you
[...] [...]
hey myriam
i had this problem yesterday but i’ve managed to get this example working today. here’s how:
1. go to your MM (adobe…) core classes folder (for me on window xp this is ‘C:\Program Files\Macromedia\Flash MX 2004\en\First Run\Classes’)
2. create new folder ‘com’
3. inside ‘com’ create new folder ‘aralbalkan’
4. inside ‘aralbalkan’ create new folder ‘particle’
5. go back to the folder that contains ‘Particle.as’
6. to avoid any confusion, rename as ‘abParticle.as’ (aral’s initials as a faux-namespace)
7. open ‘abParticle.as’ in a text editor
8. retype the class declaration as ‘class com.aralbalkan.particles.abParticle extends MovieClip’
9. retype the constructor method as ‘function abParticle ()’
10. save and close
11. go back to eclipse and refresh your AS project. now if you expand your ‘core’ classes folder you should see the new package and class file
12. open ‘Application.as’ and add ‘import com.aralbalkan.particles.abParticle;’ as a new first line above the class declaration
13. retype the line ‘Object.registerClass ( “spheres”, Particle );’ as ‘Object.registerClass ( “spheres”, abParticle );’
14. go to your Flashout plugin and hit compile
15. voila!
actually not voila! straightaway. not for me anyway, the compiler starting moaning about the ‘TRACE’ method in Flashout.as. as a quick&Dirty fix i commented out all the TRACE calls in Flashout.as and THEN we had a working ‘Application.swf’
and there was much rejoicing! :)
who put that fcb link there? ;)
just to add: you can fix the compiler error in Flashout.as by making all uppercase TRACE calls lowercase. whoever did put that link there, you’ll note if you go there that doing just this is part of the instructions.
happy trails hans!
FAMES…
Really been getting my teeth into this FAMES tutorial that come in 2 parts: how install FAMES and your first FAMES project from the (dormant?) resident alien blog.
FAMES stands for Flashout, ASDT, Motion Twin ActionScript Compiler(MTASC), Eclipse and…
hi 2 all.
hello world! It is nice site. Keep working!
best regards
i found you here ^^
Hi,
I just began exploring the open source flash development with your tutorial.
What I noticed is that we need Flash installed on the system in order to use the core classes provided by macromedia.
Is there some open source library of actionscript core classes that we can use ?
warm regards
arrow
Hi Arrow,
You can use the intrinsic classes that come with the latest versions of MTASC.
Has anyone been able to use the -out MTASC option with asdt? I’ve tried adding:
-header 320:240:30 -out output.swf
in the Additionals section, but I can’t seem to get a new file to be produced.
[...] FAMES: FAME + Swfmill = Fully open source flash [...]
Thanks for the great tutorial!
I have the same problem as Myriam did about a year ago with “type error Unknown class Particle” on compile.
Im using the core MTASC library because I don’t have flash. I tried some things similar to what richard willis suggested only with where my core MTASC libraries are located. This didn’t work.
I looked up the documentation on Object.registerClass and it seems pretty straight foreword.
Anyone know where I can look to resolve this?
Chuck,
I ran into the same problem.
I wracked my brain until 1:00 AM and then it finally occurred to me that Particle.as should reside int he same location as the Flashout.as file. You do this, and its “All Good”.
If you add -cp to the MTASC additional tab when you hit compile this will tell the compiler where to find Particles.
I would like to find a completely open source tool chain that works under Linux to generate flash files which contain time lapse sequences with music playing in the background.
I no flash programming experience, but this application should not be all that complex, but it is absolutely necessary that it be scriptable in some fashion since the sequences will consist of several hundred frames.
I would greatly appreciate any advice as to which software to start out with, and any simple example code to get started.
Thanks in advance,
-Arlen
Thanks for a great tutorial! Doing this entirely in open source makes me all warm and happy.
I also had trouble getting the example to recognize Particle as a class.
To clarify what Phillc wrote, I added the following to the “Additionals” tab of the Flashout panel:
-cp “D:\Documents and Settings\JHowell\eclipse_workspace\swfmill_demo\src”
I had placed the .as files in a subdir of the project. Don’t forget those double quotes if you have a space in the path!
I tried this, but i can’t find a Flashout download.
Could someone please post a link, or upload Flashout, if it’s no longer available?
Thanks
(PS: Nice tutorial!) | http://aralbalkan.com/373 | crawl-003 | refinedweb | 1,675 | 66.74 |
Digging into the dungeons of some good frameworks is always a good way to learn things, but it coding in java, xpath itself, writing parsers, whatever.
The instance(‘xxx’) function might be causing trouble, but I think other functions are as well. That might also be only with functions in the xforms namespace, will check that out.
|| I can't see any way the tokenizer can return a name in the format {}amount unless there is a string literal in the source immediately followed by a colon.
Well, if parsing an expression like ../child::element('':amount) would lead to this, we might have found a cause. This is what the tokenizer was processing before returning {}amount. The original expression (below) is parsed into an expression where this is part of. For the xpath engine, each xpath is ‘split’ into so references (I heard you talked about this to Joern Toerner at XML Prague) that each will be converterted into parsed expression that will be put into a cache, to be used to decide what nodes should be updated if such a reference changes.
The additional xpath functions that are not available in xpath are added via function libraries. (keep in mind that the code referenced below is for saxon 9.2.1.5 but the basics have not changed)
These are added to the “IndependentContext” and at the same time, the default function namespace is set to the xforms namespace. This all is done in:
One thing that is done is that in the function library for the additional xforms functions also the normal xpath functions are added:
Your confusion about my statement of parsing vs evaluating was valid. I was put of on the wrong foot because an evaluated expression (a saxon Expression object) was already available, so I thought it went wrong when evaluating. What I did not see (stupid me, sorry) was that this expression was going to be split into references… It fails, as mentioned above) when parsing these individual references. So you are right… Here you see the order of which the tokenizer parses the suspicious element (not sure where the loop of 2/0 comes from. Did not see that earlier on (afaik).
../child::element('':amount)
Token: 206[..]
Token: 2[/]
Token: 36[<axis>] child
Token: 69[<node-type>()] element
Token: 201[<name>] {}amount
Token: 2[/]
Token: 0[<eof>]
Token: 2[/]
Token: 0[<eof>]
Token: 2[/]
Token: 0[<eof>]
Token: 2[/]
Token: 0[<eof>]
Token: 2[/]
Token: 0[<eof>]
Token: 2[/]
Token: 0[<eof>]
Token: 2[/]
Token: 0[<eof>]
Hope this helps… Since it might be that I made an accidental error in the reference parser, but still strange that it in the end it works if I comment out a specific if-then in the ExpressionParser.
Cheers,
Ronald van Kuijk
From: Michael Kay [mailto:mike@saxonica.com]
Sent: woensdag 8 augustus 2012 18:33
To: saxon-help@lists.sourceforge.net
Subject: Re: [saxon] 9.4.0.4 exception with empty default namepace ("uri":local ...)
Thanks for your sterling efforts to get to the bottom of this.
First, you're right that 9.4 is still supporting the "uri":local syntax rather than the newer Q{uri}local.
I suspected that the function call instance('xxx') might be causing problems because of the conflict with the "instance of" operator. So I did a modified build in which the instance() function was defined, and it ran without trouble.
I can't see any way the tokenizer can return a name in the format {}amount unless there is a string literal in the source immediately followed by a colon.
I'd be interested to know how you enable Saxon to recognize the instance() and current() functions which are not normally supported in XPath.
We should probably change the code so that the tokenizer doesn't recognize "uri":local unless the parser is going to accept it, but that doesn't seem to be at the root of the problem. In your debugging, did you see what path the tokenizer was taking before it returned {}amount?
I'm confused by this statement: "When the expression is build, nothing fails but when it is actually evaluated, it fails with the exception above.", since the error message is clearly one that can only arise during XPath parsing. I wonder if it's the case that the instance() function invokes some dynamic XPath parsing?
Michael Kay
Saxonica
On 08/08/2012 10:21, Ronald van Kuijk wrote:
string(../amount * instance('convTable')/rate[@currency=current()/../currency])
------------------------------------------------------------------------------ | http://sourceforge.net/p/saxon/mailman/attachment/50237E06.7080304@saxonica.com/1/ | CC-MAIN-2016-07 | refinedweb | 754 | 60.24 |
Determining the I2C Address
The I2C address of your LCD depends on the manufacturer, as mentioned earlier. If your LCD has a PCF8574 chip from Texas Instruments, its default I2C address is 0x27Hex. If your LCD has a PCF8574 chip from NXP semiconductors, its default I2C address is 0x3FHex.
So your LCD probably has an I2C address 0x27Hex or 0x3FHex. Nevertheless it is recommended that you find out the actual I2C of the LCD before using. Luckily there is a simple way to do this, thanks to Nick Gammon‘s great work.
Nick has written a simple I2C scanner sketch that scans your I2C bus and gives you back the address of each I2C device it finds.
#include <Wire.h> void setup() { Serial.begin (9600); //() {}
Load this sketch into your Arduino then open your serial monitor. You’ll see the I2C address of your I2C LCD display.
Please make note of this address. You’ll need it in the subsequent sketches.
Basic Arduino Sketch – Hello World
The following test sketch will print ‘Hello World!’ on the first line of the LCD and ‘LCD tutorial’ on the second line.
But, before you head for uploading the sketch, you need to make some changes to make it work for you. You need to enter the I2C address of your LCD and the dimensions of the display (columns and rows the display). If you are using 16×2 character LCD, pass the parameters 16 & 2; If you are using 20×4 LCD, pass the parameters 20 & 4.
// enter the I2C address and the dimensions of your LCD here LiquidCrystal_I2C lcd(0x3F, 16, 2);
Once you are done, go ahead and try the sketch out.
#include <LiquidCrystal_I2C.h> LiquidCrystal_I2C lcd(0x3F,16,2); // set the LCD address to 0x3F for a 16 chars and 2 line display void setup() { lcd.init(); lcd.clear(); lcd.backlight(); // Make sure backlight is on // Print a message on both lines of the LCD. lcd.setCursor(2,0); //Set cursor to character 2 on line 0 lcd.print("Hello world!"); lcd.setCursor(2,1); //Move cursor to character 2 on line 1 lcd.print("LCD Tutorial"); } void loop() { }
If everything goes right, you should see following output on the display.
Code Explanation:
The sketch starts by including LiquidCrystal_I2C library.
#include <LiquidCrystal_I2C.h>
Next an object of LiquidCrystal_I2C class is created. This object uses 3 parameters
LiquidCrystal_I2C(address,columns,rows). This is where you will need to change the default address to the address you found earlier if it happens to be different, and dimensions of the display.
LiquidCrystal_I2C lcd(0x3F,16,2);
Once the LiquidCrystal_I2C object is declared, you can access special methods that are specific to the LCD.
In the ‘setup’ function: the
init() function is called to initialize the lcd object. Next, the
clear() function is called. This function clears the LCD screen and moves the cursor to the top-left corner. The
backlight() function is used to turn on the LCD backlight.
lcd.init(); lcd.clear(); lcd.backlight();
Next, the cursor position is set to third column and the first row of the LCD, by calling function
lcd.setCursor(2,0). The cursor position specifies the location where you need the new text to be displayed on the LCD. The top left corner is considered col=0, row=0.
lcd.setCursor(2,0);
Next, the string ‘Hello World!’ is printed by calling the
print() function.
lcd.print("Hello world!");
Similarly, the next two lines will set the cursor position at the third column and the second row, and print ‘LCD Tutorial’ on the LCD.
lcd.setCursor(2,1); lcd.print("LCD Tutorial");
Other useful functions of the Library
There are a few useful functions you can use with LiquidCrystal_I2C object. Few of them are listed below:
home()– positions the cursor in the top-left corner of the LCD without clearing the display.
cursor()– displays the LCD cursor, an underscore (line) at the position of the next character to be printed.
noCursor()– hides the LCD cursor.
blink()– creates a blinking block style LCD cursor: a blinking rectangle of 5×8 pixels at the position of the next character to be printed.
noBlink()– disables the blinking block style LCD cursor.
display()– turns on the LCD screen and displays the characters that were previously printed on the display.
noDisplay()– turns off the LCD screen. Simply turning off the LCD screen does not clear data from the LCD memory. This means that it will be shown again when the display() function is called.
scrollDisplayLeft()– scrolls the contents of the display one space to the left. If you want to scroll the text continuously, you need to use this function inside a loop.
scrollDisplayRight()– scrolls the contents of the display one space to the right.
autoscroll()– turns on automatic scrolling of the LCD. If the current text direction is left-to-right (default), the display scrolls to the left, if the current direction is right-to-left, the display scrolls to the right.
noAutoscroll()– turns off automatic scrolling.
Create and Display Custom Characters LCD.
To define a custom character the
createChar() function is used. This function accepts an array of 8 bytes. Each byte (only 5 bits are considered) in the array defines one row of the character in the 5×8 matrix. Whereas, 0s and 1s in the byte indicate which pixels in the row should be off and which should be turned on.
Custom Character Generator
Creating custom character was not easy until now! We have created a small application to help you create your custom characters. You can click on any of the 5×8 pixels below to set/clear a particular pixel. As you click on pixels, the code for the character is generated next to the grid. This code can directly be used in your Arduino sketch.
The following sketch shows how you can create custom characters and print them on the LCD.
#include <LiquidCrystal_I2C.h> LiquidCrystal_I2C lcd(0x3F, 16, 2); // set the LCD address to 0x3F for a 16 chars and 2 line display //() { lcd.init(); // Make sure backlight is on lcd.backlight(); // create a new characters lcd.createChar(0, Heart); lcd.createChar(1, Bell); lcd.createChar(2, Alien); lcd.createChar(3, Check); lcd.createChar(4, Speaker); lcd.createChar(5, Sound); lcd.createChar(6, Skull); lcd.createChar(7, Lock); // Clears:
Code Explanation:
After including the library and creating the LCD object, the custom character arrays are defined. Each array consists of 8 bytes, 1 byte for each row of the 5×8 led matrix. In this sketch, 8 custom characters are created.
Let’s examine
Heart[8] array as an example. You can see how bits are forming a heart shape that are actually 0s and 1s. A 0 sets the pixel off and a 1 sets the pixel on.
byte Heart[8] = { 0b00000, 0b01010, 0b11111, 0b11111, 0b01110, 0b00100, 0b00000, 0b00000 };
In the setup, the custom character is created using the
createChar() function. This function takes two parameters. The first one is a number between 0 and 7 in order to reserve one of the 8 supported custom characters. The second parameter is the name of the array of bytes.
lcd.createChar(0, Heart);
Next in the loop, to display the custom character we use
write() function and as a parameter we use the number of the character that we want to display.
lcd.setCursor(0, 1); lcd.write(0); | https://lastminuteengineers.com/i2c-lcd-arduino-tutorial/ | CC-MAIN-2021-10 | refinedweb | 1,235 | 66.54 |
Note
A custom graph stage should not be the first tool you reach for, defining graphs using flows
and the graph DSL is in general easier and does to a larger extent protect you from mistakes that
might be easy to make with a custom
GraphStage
Custom processing with GraphStage
The
GraphStage abstraction can be used to create arbitrary graph processing stages with any number of input
or output ports. It is a counterpart of the
GraphDSL.create() method which creates new stream processing
stages by composing others. Where
GraphStage differs is that it creates a stage that is itself not divisible into
smaller ones, and allows state to be maintained inside it in a safe way.
As a first motivating example, we will build a new
Source that will simply emit numbers from 1 until it is
cancelled. To start, we need to define the "interface" of our stage, which is called shape in Akka Streams terminology
(this is explained in more detail in the section Modularity, Composition and Hierarchy). This is how this looks like:
import akka.stream.{ Attributes, Outlet, SourceShape }
import akka.stream.stage.{ GraphStage, GraphStageLogic }

class NumbersSource extends GraphStage[SourceShape[Int]] {
  // Define the (sole) output port of this stage
  val out: Outlet[Int] = Outlet("NumbersSource")

  // Define the shape of this stage, which is SourceShape with the port we defined above
  override val shape: SourceShape[Int] = SourceShape(out)

  // This is where the actual (possibly stateful) logic will live
  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic = ???
}
As you see, in itself the
GraphStage only defines the ports of this stage and a shape that contains the ports.
It also has a currently unimplemented method called
createLogic. If you recall, stages are reusable in multiple
materializations, each resulting in a different executing entity. In the case of
GraphStage the actual running
logic is modeled as an instance of a
GraphStageLogic which will be created by the materializer by calling
the
createLogic method. In other words, all we need to do is to create a suitable logic that will emit the
numbers we want.
Note
It is very important to keep the GraphStage object itself immutable and reusable. All mutable state needs to be confined to the GraphStageLogic that is created for every materialization.
In order to emit from a
Source in a backpressured stream one needs first to have demand from downstream.
To receive the necessary events one needs to register a subclass of
OutHandler with the output port
(
Outlet). This handler will receive events related to the lifecycle of the port. In our case we need to
override
onPull() which indicates that we are free to emit a single element. There is another callback,
onDownstreamFinish() which is called if the downstream cancelled. Since the default behavior of that callback is
to stop the stage, we don't need to override it. In the
onPull callback we will simply emit the next number. This
is how it looks like in the end:
import akka.stream.{ Attributes, Outlet, SourceShape }
import akka.stream.stage.{ GraphStage, GraphStageLogic, OutHandler }

class NumbersSource extends GraphStage[SourceShape[Int]] {
  val out: Outlet[Int] = Outlet("NumbersSource")
  override val shape: SourceShape[Int] = SourceShape(out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      // All state MUST be inside the GraphStageLogic,
      // never inside the enclosing GraphStage.
      // This state is safe to access and modify from all the
      // callbacks that are provided by GraphStageLogic and the
      // registered handlers.
      private var counter = 1

      setHandler(out, new OutHandler {
        override def onPull(): Unit = {
          push(out, counter)
          counter += 1
        }
      })
    }
}
Instances of the above
GraphStage are subclasses of
Graph[SourceShape[Int], NotUsed] which means
that they are already usable in many situations, but do not provide the DSL methods we usually have for other
Source s. In order to convert this
Graph to a proper
Source we need to wrap it using
Source.fromGraph (see Modularity, Composition and Hierarchy for more details about graphs and DSLs). Now we can use the
source as any other built-in one:
// A GraphStage is a proper Graph, just like what GraphDSL.create would return
val sourceGraph: Graph[SourceShape[Int], NotUsed] = new NumbersSource

// Create a Source from the Graph to access the DSL
val mySource: Source[Int, NotUsed] = Source.fromGraph(sourceGraph)

// Returns 55
val result1: Future[Int] = mySource.take(10).runFold(0)(_ + _)

// The source is reusable. This returns 5050
val result2: Future[Int] = mySource.take(100).runFold(0)(_ + _)
Similarly, to create a custom
Sink one can register a subclass
InHandler with the stage
Inlet.
The
onPush() callback is used to signal the handler a new element has been pushed to the stage,
and can hence be grabbed and used.
onPush() can be overridden to provide custom behaviour.
Please note, most Sinks would need to request upstream elements as soon as they are created: this can be
done by calling
pull(inlet) in the
preStart() callback.
import akka.stream.{ Attributes, Inlet, SinkShape }
import akka.stream.stage.{ GraphStage, GraphStageLogic, InHandler }

class StdoutSink extends GraphStage[SinkShape[Int]] {
  val in: Inlet[Int] = Inlet("StdoutSink")
  override val shape: SinkShape[Int] = SinkShape(in)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      // This requests one element at the Sink startup.
      override def preStart(): Unit = pull(in)

      setHandler(in, new InHandler {
        override def onPush(): Unit = {
          println(grab(in))
          pull(in)
        }
      })
    }
}
Port states, InHandler and OutHandler
In order to interact with a port (
Inlet or
Outlet) of the stage we need to be able to receive events
and generate new events belonging to the port. From the
GraphStageLogic the following operations are available
on an output port:
push(out, elem): pushes an element to the output port. Only possible after the port has been pulled by downstream.
complete(out): closes the output port normally.
fail(out, exception): closes the port with a failure signal.
The events corresponding to an output port can be received in an
OutHandler instance registered to the
output port using
setHandler(out,handler). This handler has two callbacks:
onPull(): is called when the output port is ready to emit the next element; push(out, elem) is now allowed to be called on this port.
onDownstreamFinish(): is called once the downstream has cancelled and no longer allows messages to be pushed to it. No more onPull() will arrive after this event. If not overridden this will default to stopping the stage.
Also, there are two query methods available for output ports:
isAvailable(out): returns true if the port can be pushed.
isClosed(out): returns true if the port is closed. At this point the port can not be pushed and will not be pulled.
The following operations are available for input ports:
pull(in): requests a new element from an input port. This is only possible after the port has been pushed by upstream.
grab(in): acquires the element that has been received during an onPush(). It cannot be called again until the port is pushed again by the upstream.
cancel(in): closes the input port.
The events corresponding to an input port can be received in an
InHandler instance registered to the
input port using
setHandler(in, handler). This handler has three callbacks:
onPush(): is called when the input port now has a new element. It is now possible to acquire this element using grab(in) and/or call pull(in) on the port to request the next element. It is not mandatory to grab the element, but if the port is pulled while the element has not been grabbed it will drop the buffered element.
onUpstreamFinish(): is called once the upstream has completed and can no longer be pulled for new elements. No more onPush() will arrive after this event. If not overridden this will default to stopping the stage.
onUpstreamFailure(): is called if the upstream failed with an exception and can no longer be pulled for new elements. No more onPush() will arrive after this event. If not overridden this will default to failing the stage.
Also, there are three query methods available for input ports:
isAvailable(in): returns true if the port can be grabbed.
hasBeenPulled(in): returns true if the port has already been pulled. Calling pull(in) in this state is illegal.
isClosed(in): returns true if the port is closed. At this point the port can not be pulled and will not be pushed.
Finally, there are two methods available for convenience to complete the stage and all of its ports:
completeStage(): is equivalent to closing all output ports and cancelling all input ports.
failStage(exception): is equivalent to failing all output ports and cancelling all input ports.
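The legal ordering of these operations can be illustrated with a toy model. The following sketch is not Akka's implementation, and all names in it (ToyPort, PortState and so on) are hypothetical; it only demonstrates the rule implied by the lists above, namely that push is only legal after a pull, and that a closed port accepts neither:

```scala
// Toy model of the pull/push handshake on a single port pair.
// NOT Akka's actual implementation -- just an illustration of the
// port-state rules: push(out) is only legal while there is demand
// (i.e. after a pull), and a closed port accepts neither operation.
sealed trait PortState
case object Idle   extends PortState // nothing in flight
case object Pulled extends PortState // downstream asked for an element
case object Closed extends PortState

final class ToyPort {
  private var state: PortState = Idle
  private var buffer: Option[Int] = None

  def pull(): Unit = state match {
    case Idle   => state = Pulled // demand travels upstream
    case Pulled => throw new IllegalStateException("already pulled")
    case Closed => throw new IllegalStateException("port is closed")
  }

  def push(elem: Int): Unit = state match {
    case Pulled => buffer = Some(elem); state = Idle // element travels downstream
    case _      => throw new IllegalStateException("push without demand")
  }

  // Corresponds to grab(in): take the buffered element exactly once.
  def grab(): Int = {
    val e = buffer.getOrElse(throw new IllegalStateException("nothing pushed"))
    buffer = None
    e
  }

  def isAvailable: Boolean = buffer.isDefined
  def close(): Unit = state = Closed
  def isClosed: Boolean = state == Closed
}
```

Running one round trip shows the single "event token" in action: pull, then push, then grab; pulling twice in a row, or pushing without demand, throws.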
In some cases it is inconvenient and error prone to react on the regular state machine events with the signal based API described above. For those cases there is an API which allows for a more declarative sequencing of actions which will greatly simplify some use cases at the cost of some extra allocations. The difference between the two APIs could be described as that the first one is signal driven from the outside, while this API is more active and drives its surroundings.
The operations of this part of the GraphStage API are:
emit(out, elem) and emitMultiple(out, Iterable(elem1, elem2)): replaces the OutHandler with a handler that emits one or more elements when there is demand, and then reinstalls the current handlers.
read(in)(andThen) and readN(in, n)(andThen): replaces the InHandler with a handler that reads one or more elements as they are pushed and allows the handler to react once the requested number of elements has been read.
abortEmitting() and abortReading(): cancel an ongoing emit or read.
Note that since the above methods are implemented by temporarily replacing the handlers of the stage you should never
call
setHandler while they are running
emit or
read as that interferes with how they are implemented.
The following methods are safe to call after invoking emit and read (and will lead to actually running the operation when those are done): complete(out), completeStage(), emit, emitMultiple, abortEmitting() and abortReading().
An example of how this API simplifies a stage can be found below in the second version of the Duplicator.
Custom linear processing stages using GraphStage
Graph stages allows for custom linear processing stages through letting them
have one input and one output and using
FlowShape as their shape.
Such a stage can be illustrated as a box with two flows as it is seen in the illustration below. Demand flowing upstream leading to elements flowing downstream.
To illustrate these concepts we create a small
GraphStage that implements the
map transformation.
Map calls
push(out) from the
onPush() handler and it also calls
pull() from the
onPull handler resulting in the
conceptual wiring above, and fully expressed in code below:
class Map[A, B](f: A => B) extends GraphStage[FlowShape[A, B]] {

  val in = Inlet[A]("Map.in")
  val out = Outlet[B]("Map.out")

  override val shape = FlowShape.of(in, out)

  override def createLogic(attr: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      setHandler(in, new InHandler {
        override def onPush(): Unit = {
          push(out, f(grab(in)))
        }
      })
      setHandler(out, new OutHandler {
        override def onPull(): Unit = {
          pull(in)
        }
      })
    }
}
Map is a typical example of a one-to-one transformation of a stream, where demand is passed along upstream and elements are passed on downstream.
To demonstrate a many-to-one stage we will implement
filter. The conceptual wiring of
Filter looks like this:
If the given predicate matches the current element we propagate it downwards, otherwise we return the "ball" to our upstream so that we get a new element. This is achieved by modifying the Map example by adding a conditional in the onPush handler and deciding between a pull(in) or push(out) call (and of course not having a mapping f function):

class Filter[A](p: A => Boolean) extends GraphStage[FlowShape[A, A]] {

  val in = Inlet[A]("Filter.in")
  val out = Outlet[A]("Filter.out")

  val shape = FlowShape.of(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      setHandler(in, new InHandler {
        override def onPush(): Unit = {
          val elem = grab(in)
          if (p(elem)) push(out, elem)
          else pull(in)
        }
      })
      setHandler(out, new OutHandler {
        override def onPull(): Unit = {
          pull(in)
        }
      })
    }
}
To complete the picture we define a one-to-many transformation as the next step. We chose a straightforward example stage that emits every upstream element twice downstream. The conceptual wiring of this stage looks like this:
This is a stage that has state: an option with the last element it has seen, indicating whether it has already duplicated this last element or not. We must also make sure to emit the extra element if the upstream completes.

class Duplicator[A] extends GraphStage[FlowShape[A, A]] {

  val in = Inlet[A]("Duplicator.in")
  val out = Outlet[A]("Duplicator.out")

  val shape = FlowShape.of(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      // Again: note that all mutable state
      // MUST be inside the GraphStageLogic
      var lastElem: Option[A] = None

      setHandler(in, new InHandler {
        override def onPush(): Unit = {
          val elem = grab(in)
          lastElem = Some(elem)
          push(out, elem)
        }

        override def onUpstreamFinish(): Unit = {
          if (lastElem.isDefined) emit(out, lastElem.get)
          complete(out)
        }
      })

      setHandler(out, new OutHandler {
        override def onPull(): Unit = {
          if (lastElem.isDefined) {
            push(out, lastElem.get)
            lastElem = None
          } else {
            pull(in)
          }
        }
      })
    }
}
In this case a pull from downstream might be consumed by the stage itself rather than passed along upstream as the stage might contain an element it wants to push. Note that we also need to handle the case where the upstream closes while the stage still has elements it wants to push downstream. This is done by overriding onUpstreamFinish in the InHandler and provide custom logic that should happen when the upstream has been finished.
This example can be simplified by replacing the usage of a mutable state with calls to
emitMultiple which will replace the handlers, emit each of multiple elements and then
reinstate the original handlers:

class Duplicator[A] extends GraphStage[FlowShape[A, A]] {

  val in = Inlet[A]("Duplicator.in")
  val out = Outlet[A]("Duplicator.out")

  val shape = FlowShape.of(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      setHandler(in, new InHandler {
        override def onPush(): Unit = {
          val elem = grab(in)
          // this will temporarily suspend this handler until the two elems
          // are emitted and then reinstates it
          emitMultiple(out, Iterable(elem, elem))
        }
      })
      setHandler(out, new OutHandler {
        override def onPull(): Unit = {
          pull(in)
        }
      })
    }
}
Finally, to demonstrate all of the stages above, we put them together into a processing chain, which conceptually would correspond to the following structure:
In code this is only a few lines, using the via method to use our custom stages in a stream:
val resultFuture = Source(1 to 5)
  .via(new Filter(_ % 2 == 0))
  .via(new Duplicator())
  .via(new Map(_ / 2))
  .runWith(sink)
If we attempt to draw the sequence of events, it shows that there is one "event token" in circulation in a potential chain of stages, just like our conceptual "railroad tracks" representation predicts.
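Because each stage here is a one-to-one, many-to-one or one-to-many transformation of the element stream, the sequence of elements the chain emits can be sketched with plain Scala collections. This is an illustration only (it does not use Akka at all), but it makes the expected output easy to check:

```scala
// Plain-collections sketch of what the stream above emits:
// Source(1 to 5) -> Filter(_ % 2 == 0) -> Duplicator -> Map(_ / 2)
val emitted = (1 to 5)
  .filter(_ % 2 == 0)      // Filter stage keeps: 2, 4
  .flatMap(x => Seq(x, x)) // Duplicator stage yields: 2, 2, 4, 4
  .map(_ / 2)              // Map stage yields: 1, 1, 2, 2
```

So the sink in the snippet above would receive the elements 1, 1, 2, 2, in that order.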
Completion
Completion handling usually (but not exclusively) comes into the picture when processing stages need to emit
a few more elements after their upstream source has been completed. We have seen an example of this in our
first
Duplicator implementation where the last element needs to be doubled even after the upstream neighbor
stage has been completed. This can be done by overriding the
onUpstreamFinish method in
InHandler.
Stages by default automatically stop once all of their ports (input and output) have been closed externally or internally.
It is possible to opt out from this behavior by invoking
setKeepGoing(true) (which is not supported from the stage’s
constructor and usually done in
preStart). In this case the stage must be explicitly closed by calling
completeStage()
or
failStage(exception). This feature carries the risk of leaking streams and actors, therefore it should be used
with care.
Logging inside GraphStages
Logging debug or other important information in your stages is often a very good idea, especially when developing more advances stages which may need to be debugged at some point.
The helper trait
akka.stream.stage.StageLogging is provided to enable you to easily obtain a
LoggingAdapter
inside of a
GraphStage as long as the
Materializer you're using is able to provide you with a logger.
In that sense, it serves a very similar purpose as
ActorLogging does for Actors.
Note
Please note that you can always simply use a logging library directly inside a Stage. Make sure to use an asynchronous appender however, to not accidentally block the stage when writing to files etc. See Using the SLF4J API directly for more details on setting up async appenders in SLF4J.
The stage then gets access to the
log field which it can safely use from any
GraphStage callbacks:
final class RandomLettersSource extends GraphStage[SourceShape[String]] {
  val out = Outlet[String]("RandomLettersSource.out")
  override val shape: SourceShape[String] = SourceShape(out)

  override def createLogic(inheritedAttributes: Attributes) =
    new GraphStageLogic(shape) with StageLogging {
      setHandler(out, new OutHandler {
        override def onPull(): Unit = {
          val c = nextChar() // ASCII lower case letters
          // `log` is obtained from materializer automatically (via StageLogging)
          log.debug("Randomly generated: [{}]", c)
          push(out, c.toString)
        }
      })
    }

  def nextChar(): Char =
    ThreadLocalRandom.current().nextInt('a', 'z'.toInt + 1).toChar
}
Note
SPI Note: If you're implementing a Materializer, you can add this ability to your materializer by implementing
MaterializerLoggingProvider in your
Materializer.
Using timers
It is possible to use timers in
GraphStages by using
TimerGraphStageLogic as the base class for
the returned logic. Timers can be scheduled by calling one of
scheduleOnce(key,delay),
schedulePeriodically(key,period) or
schedulePeriodicallyWithInitialDelay(key,delay,period) and passing an object as a key for that timer (can be any object, for example
a
String). The
onTimer(key) method needs to be overridden and it will be called once the timer of
key
fires. It is possible to cancel a timer using
cancelTimer(key) and check the status of a timer with
isTimerActive(key). Timers will be automatically cleaned up when the stage completes.
Timers can not be scheduled from the constructor of the logic, but it is possible to schedule them from the
preStart() lifecycle hook.
In this sample the stage toggles between open and closed, where open means no elements are passed through. The stage starts out as closed but as soon as an element is pushed downstream the gate becomes open for a duration of time during which it will consume and drop upstream messages:
// each time an event is pushed through it will trigger a period of silence
class TimedGate[A](silencePeriod: FiniteDuration) extends GraphStage[FlowShape[A, A]] {

  val in = Inlet[A]("TimedGate.in")
  val out = Outlet[A]("TimedGate.out")

  val shape = FlowShape.of(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new TimerGraphStageLogic(shape) {
      var open = false

      setHandler(in, new InHandler {
        override def onPush(): Unit = {
          val elem = grab(in)
          if (open) pull(in)
          else {
            push(out, elem)
            open = true
            scheduleOnce(None, silencePeriod)
          }
        }
      })
      setHandler(out, new OutHandler {
        override def onPull(): Unit = { pull(in) }
      })

      override protected def onTimer(timerKey: Any): Unit = {
        open = false
      }
    }
}
Using asynchronous side-channels
In order to receive asynchronous events that are not arriving as stream elements (for example a completion of a future
or a callback from a 3rd party API) one must acquire a
AsyncCallback by calling
getAsyncCallback() from the
stage logic. The method
getAsyncCallback takes as a parameter a callback that will be called once the asynchronous
event fires. It is important to not call the callback directly, instead, the external API must call the
invoke(event) method on the returned
AsyncCallback. The execution engine will take care of calling the
provided callback in a thread-safe way. The callback can safely access the state of the
GraphStageLogic
implementation.
Sharing the AsyncCallback from the constructor risks race conditions, therefore it is recommended to use the
preStart() lifecycle hook instead.
This example shows an asynchronous side channel graph stage that starts dropping elements when a future completes:
// will close upstream in all materializations of the graph stage instance
// when the future completes
class KillSwitch[A](switch: Future[Unit]) extends GraphStage[FlowShape[A, A]] {

  val in = Inlet[A]("KillSwitch.in")
  val out = Outlet[A]("KillSwitch.out")

  val shape = FlowShape.of(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      override def preStart(): Unit = {
        val callback = getAsyncCallback[Unit] { (_) =>
          completeStage()
        }
        switch.foreach(callback.invoke)
      }

      setHandler(in, new InHandler {
        override def onPush(): Unit = { push(out, grab(in)) }
      })
      setHandler(out, new OutHandler {
        override def onPull(): Unit = { pull(in) }
      })
    }
}
Integration with actors
Note: this section is a stub and will be extended in the next release. This is an experimental feature.
It is possible to acquire an ActorRef that can be addressed from the outside of the stage, similarly to how AsyncCallback allows injecting asynchronous events into a stage logic. This reference can be obtained by calling getStageActorRef(receive), passing in a function that takes a Pair of the sender ActorRef and the received message. This reference can be used to watch other actors by calling its watch(ref) or unwatch(ref) methods. The reference can also be watched by external actors. The current limitations of this ActorRef are:
- they are not location transparent, i.e. they cannot be accessed via remoting.
- they cannot be returned as materialized values.
- they cannot be accessed from the constructor of the GraphStageLogic, but they can be accessed from the preStart() method.
Custom materialized values
Custom stages can return materialized values instead of Unit by inheriting from GraphStageWithMaterializedValue instead of the simpler GraphStage. The difference is that in this case the method createLogicAndMaterializedValue(inheritedAttributes) needs to be overridden, and in addition to the stage logic the materialized value must be provided.
Warning
There is no built-in synchronization of accessing this value from both of the thread where the logic runs and the thread that got hold of the materialized value. It is the responsibility of the programmer to add the necessary (non-blocking) synchronization and visibility guarantees to this shared object.
In this sample the materialized value is a future containing the first element to go through the stream:
class FirstValue[A] extends GraphStageWithMaterializedValue[FlowShape[A, A], Future[A]] {

  val in = Inlet[A]("FirstValue.in")
  val out = Outlet[A]("FirstValue.out")

  val shape = FlowShape.of(in, out)

  override def createLogicAndMaterializedValue(inheritedAttributes: Attributes): (GraphStageLogic, Future[A]) = {
    val promise = Promise[A]()
    val logic = new GraphStageLogic(shape) {

      setHandler(in, new InHandler {
        override def onPush(): Unit = {
          val elem = grab(in)
          promise.success(elem)
          push(out, elem)

          // replace handler with one just forwarding
          setHandler(in, new InHandler {
            override def onPush(): Unit = { push(out, grab(in)) }
          })
        }
      })

      setHandler(out, new OutHandler {
        override def onPull(): Unit = { pull(in) }
      })
    }

    (logic, promise.future)
  }
}
Using attributes to affect the behavior of a stage
This section is a stub and will be extended in the next release
Stages can access the Attributes object created by the materializer. This contains all the applied (inherited) attributes applying to the stage, ordered from least specific (outermost) towards the most specific (innermost) attribute. It is the responsibility of the stage to decide how to reconcile this inheritance chain to a final effective decision.
See Modularity, Composition and Hierarchy for an explanation on how attributes work.
Rate decoupled graph stages
Sometimes it is desirable to decouple the rate of the upstream and downstream of a stage, synchronizing only when needed. This is achieved in the model by representing a GraphStage as a boundary between two regions where the demand sent upstream is decoupled from the demand that arrives from downstream. One immediate consequence of this difference is that an onPush call does not always lead to calling push, and an onPull call does not always lead to calling pull.
One of the important use-case for this is to build buffer-like entities, that allow independent progress of upstream and downstream stages when the buffer is not full or empty, and slowing down the appropriate side if the buffer becomes empty or full.
The next diagram illustrates the event sequence for a buffer with capacity of two elements in a setting where the downstream demand is slow to start and the buffer will fill up with upstream elements before any demand is seen from downstream.
Another scenario would be where the demand from downstream starts coming in before any element is pushed into the buffer stage.
The first difference we can notice is that our Buffer stage is automatically pulling its upstream on initialization. The buffer has demand for up to two elements without any downstream demand.
The following code example demonstrates a buffer class corresponding to the message sequence chart above.
class TwoBuffer[A] extends GraphStage[FlowShape[A, A]] {

  val in = Inlet[A]("TwoBuffer.in")
  val out = Outlet[A]("TwoBuffer.out")

  val shape = FlowShape.of(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {

      val buffer = mutable.Queue[A]()
      def bufferFull = buffer.size == 2
      var downstreamWaiting = false

      override def preStart(): Unit = {
        // a detached stage needs to start upstream demand
        // itself as it is not triggered by downstream demand
        pull(in)
      }

      setHandler(in, new InHandler {
        override def onPush(): Unit = {
          val elem = grab(in)
          buffer.enqueue(elem)
          if (downstreamWaiting) {
            downstreamWaiting = false
            val bufferedElem = buffer.dequeue()
            push(out, bufferedElem)
          }
          if (!bufferFull) {
            pull(in)
          }
        }

        override def onUpstreamFinish(): Unit = {
          if (buffer.nonEmpty) {
            // emit the rest if possible
            emitMultiple(out, buffer.toIterator)
          }
          completeStage()
        }
      })

      setHandler(out, new OutHandler {
        override def onPull(): Unit = {
          if (buffer.isEmpty) {
            downstreamWaiting = true
          } else {
            val elem = buffer.dequeue
            push(out, elem)
          }
          if (!bufferFull && !hasBeenPulled(in)) {
            pull(in)
          }
        }
      })
    }
}
Thread safety of custom processing stages
All of the above custom stages (linear or graph) provide a few simple guarantees that implementors can rely on.
- The callbacks exposed by all of these classes are never called concurrently.
- The state encapsulated by these classes can be safely modified from the provided callbacks, without any further synchronization.
In essence, the above guarantees are similar to what Actors provide, if one thinks of the state of a custom stage as the state of an actor, and the callbacks as the receive block of the actor.
Warning
It is not safe to access the state of any custom stage outside of the callbacks that it provides, just like it is unsafe to access the state of an actor from the outside. This means that Future callbacks should not close over internal state of custom stages because such access can be concurrent with the provided callbacks, leading to undefined behavior.
Extending Flow Combinators with Custom Operators
The most general way of extending any Source, Flow or SubFlow (e.g. from groupBy) is demonstrated above: create a graph of flow-shape like the Duplicator example given above and use the .via(...) combinator to integrate it into your stream topology. This works with all FlowOps sub-types, including the ports that you connect with the graph DSL.
Advanced Scala users may wonder whether it is possible to write extension methods that enrich FlowOps to allow nicer syntax. The short answer is that Scala 2 does not support this in a fully generic fashion; the problem is that it is impossible to abstract over the kind of stream that is being extended, because Source, Flow and SubFlow differ in the number and kind of their type parameters. While it would be possible to write an implicit class that enriches them generically, this class would require explicit instantiation with all type parameters due to SI-2712. For a partial workaround that unifies extensions to Source and Flow see this sketch by R. Kuhn.
A lot simpler is the task of just adding an extension method to Source as shown below:
implicit class SourceDuplicator[Out, Mat](s: Source[Out, Mat]) {
  def duplicateElements: Source[Out, Mat] = s.via(new Duplicator)
}

val s = Source(1 to 3).duplicateElements

s.runWith(Sink.seq).futureValue should ===(Seq(1, 1, 2, 2, 3, 3))
The analog works for Flow as well:
implicit class FlowDuplicator[In, Out, Mat](s: Flow[In, Out, Mat]) {
  def duplicateElements: Flow[In, Out, Mat] = s.via(new Duplicator)
}

val f = Flow[Int].duplicateElements

Source(1 to 3).via(f).runWith(Sink.seq).futureValue should ===(Seq(1, 1, 2, 2, 3, 3))
If you try to write this for SubFlow, though, you will run into the same issue as when trying to unify
the two solutions above, only on a higher level (the type constructors needed for that unification would have rank
two, meaning that some of their type arguments are type constructors themselves—when trying to extend the solution
shown in the linked sketch the author encountered such a density of compiler StackOverflowErrors and IDE failures
that he gave up).
It is interesting to note that a simplified form of this problem has found its way into the dotty test suite. Dotty is the development version of Scala on its way to Scala 3.
Why Code Coverage is not Enough
One of the holy grails for unit testing is to get 100% code coverage from your tests. However, you can’t sit back and smoke a cigar when you reach that point and assume your code is invulnerable. Code coverage just is not enough.
One obvious reason is that Code Coverage cannot help you find errors of omission. That is, even if you had 100% code coverage from your tests, if you forget to implement a feature (and a test for that feature), then you’re shit out of luck.
However, apart from errors of omission, there’s the case presented here. Imagine you have the following simple class (I’m sure your real world class is much more complicated and interesting, but bear with me).
using System;
using System.Collections;

public class MyClass {(8, mine.SumIt(keys)); } }
Voila! 100% code coverage. But does this satisfy the little QA tester inside? I would hope not, and suggest that it shouldn't. Code coverage is a worthy goal, but often unattainable in large systems (hence the need for prioritization) and doesn't provide all the benefits it would seem.
To handle situations like this, unit tests need to go beyond concentrating on code coverage and also consider data coverage. Of course, that’s not always practical. In the above example, if I only have 10 keys, testing the possible permutations of SumIt becomes a huge burden. Often the best you can do is to test a small sample and the boundary cases.
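To make the boundary-case idea concrete, here is a sketch in Python rather than the article's C#; sum_values is a hypothetical stand-in for the SumIt method above, since the full class body is elided in this excerpt:

```python
def sum_values(table, keys):
    # Hypothetical stand-in for SumIt: sum the values stored under
    # the given keys, treating missing keys as zero.
    return sum(table.get(k, 0) for k in keys)

table = {"a": 3, "b": 5}

# One happy-path call is enough for 100% *code* coverage...
assert sum_values(table, ["a", "b"]) == 8

# ...but *data* coverage means also probing a sample of boundary cases:
assert sum_values(table, []) == 0             # empty input
assert sum_values(table, ["a"]) == 3          # single key
assert sum_values(table, ["missing"]) == 0    # key not present
assert sum_values(table, ["a", "a"]) == 6     # duplicate keys
```

Each extra assertion exercises the same lines of code, which is exactly why line coverage alone cannot distinguish them.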
For a terse textbook example this seems relevant, of course. For a 450000 line project with 900+ classes, I've just searched for main() methods and found only 7 instances, 6 of which are for various quite familiar cmdline batch utilities and 1 of which is for a rich client entry point. Which isn't very convincing to me that typing this in is exactly killing the project...
If the likely counterargument is that, ok, but surely typing in 'public class' goo is bothersome...I don't actually think I've actually typed that phrase in 2+ years.
And I'm not being facetious, it's merely because in production-level projects it rapidly becomes best practice to clone proper header block with copyright disclaimers and cvs dollar-tag blocks, and javadoc tags and a class defintion already set up, etc. In my particular case I've even gone the minor laziness farther of binding that static text chunk to a function key in my editor to eliminate the cut-n-paste bother. Tap-a-key simple. But even without the key binding, not exactly overly bothersome.
I do admit, being adept with Lisp and Python and yet coding Java for the corporate master, that there are cases where Java's language limits lead to inconveniences and pattern insertions that I grrr against, such as, say, lacking a covariant return type. However the mere goo of typing a 'main' or 'class' definition, or accessors, isn't compelling in itself, as we have trivial key bindings and Eclipse and IntelliJ one-button clicks and so forth for that stuff.
If you want to convince me that Ruby can rock steady with increased productivity, I'd contend you ought to avoid omissions of complexity and persuade versus examples of SlickEdit tricked out with macros and key bindings, or an experienced dual-use Eclipse / IntelliJ developer.
Closures and code blocks and covariants and so forth are where Ruby and Lisp and dynamically typed languages start to slay. Goo is not precisely fluffy, but isn't, I would assert, a compelling debate point, particularly toward the middle and higher range projects where you assert Ruby needs to go, where all manner of assorted copyright and cvs tag and etcetera goo are already being neatly aggregated, dealt, and dispensed with.
Django uses URLconfs (URL configurations): sets of patterns that Django will try to match against the requested URL to find the correct view.
How do URLs work in Django?
Let's open up the mysite/urls.py file in your code editor of choice and see what it looks like:
"""mysite URL Configuration[...]"""from django.urls import path, includefrom django.contrib import adminurlpatterns = [path( the previous chapter, is already here:
mysite/urls.py
path('admin/', admin.site.urls),

It means that for every URL that starts with admin/, Django will find a corresponding view. We also want mysite/urls.py to read URLs from our blog application, which requires the include import.
Your mysite/urls.py file should now look like this:
from django.urls import path, include
from django.contrib import admin

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('blog.urls')),
]
Django will now redirect everything that comes into http://127.0.0.1:8000/ to blog.urls and look for further instructions there.
blog.urls
Create a new empty file named urls.py in the blog directory. All right! Add these first two lines:
from django.urls import path
from . import views
Here we're importing Django's function path and all of our views from the blog application. (We don't have any yet, but we will get to that in a minute!)
After that, we can add our first URL pattern:
urlpatterns = [
    path('', views.post_list, name='post_list'),
]
As you can see, we're now assigning a view called post_list to the root URL. This URL pattern will match an empty string, and the Django URL resolver will ignore the domain name (e.g., http://127.0.0.1:8000/) that prefixes the full URL path.
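As a hedged aside beyond this step of the tutorial: the same blog/urls.py can later route parameterized URLs as well; views.post_detail below is a hypothetical view name used only for illustration:

```python
# blog/urls.py -- illustrative sketch; views.post_detail is hypothetical here
from django.urls import path
from . import views

urlpatterns = [
    path('', views.post_list, name='post_list'),
    # '<int:pk>' captures an integer from the URL (e.g. /post/5/)
    # and passes it to the view as the keyword argument 'pk'
    path('post/<int:pk>/', views.post_detail, name='post_detail'),
]
```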
Distilled • LeetCode • Graphs
- Pattern: Graphs
- [797/Medium] All Paths From Source to Target
- [997/Easy] Find the Town Judge
- [1791/Easy] Find Center of Star Graph
- [1971/Easy] Find if Path Exists in Graph
Pattern: Graphs
[797/Medium] All Paths From Source to Target
Problem
Given a directed acyclic graph (DAG) of n nodes labeled from 0 to n - 1, find all possible paths from node 0 to node n - 1 and return them in any order.
The graph is given as follows: graph[i] is a list of all nodes you can visit from node i (i.e., there is a directed edge from node i to node graph[i][j]).
- Example 1:
Input: graph = [[1,2],[3],[3],[]] Output: [[0,1,3],[0,2,3]] Explanation: There are two paths: 0 → 1 → 3 and 0 → 2 → 3.
- Constraints:
n == graph.length
2 <= n <= 15
0 <= graph[i][j] < n
graph[i][j] != i (i.e., there will be no self-loops).
All the elements of graph[i] are unique.
The input graph is guaranteed to be a DAG.
- See problem on LeetCode.
Solution: DFS
- If it asks for just the number of paths, we can generally solve it in two ways:
- Count from start to target in topological order.
- Count using DFS with memo.
- Note that both of them have time \(O(Edges)\) and space \(O(Nodes)\).
- This problem asks for all paths. Memo might not save much time.
- Imagine the worst case that we have \(node-1\) to \(node-N\), and \(node-i\) linked to \(node-j\) if \(i < j\).
- There are \(2^(N-2)\) paths and \((N+2)*2^(N-3)\) nodes in all paths. We can roughly say \(O(2^N)\).
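To make the first bullet concrete, here is a sketch (not from the original notes) of counting paths with DFS plus memoization; it counts rather than enumerates, which is why it stays within O(Edges):

```python
from functools import lru_cache

def count_paths(graph):
    """Count source-to-target paths in a DAG given as adjacency lists."""
    target = len(graph) - 1

    @lru_cache(maxsize=None)  # memo: each node's count is computed once
    def dfs(node):
        if node == target:
            return 1
        return sum(dfs(nxt) for nxt in graph[node])

    return dfs(0)

# The DAG 0->1->3, 0->2->3 has two source-to-target paths
print(count_paths([[1, 2], [3], [3], []]))  # → 2
```

Enumerating all paths (as the problem actually asks) cannot benefit from this trick, because the output itself can be exponential.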
class Solution:
    def allPathsSourceTarget(self, graph: List[List[int]]) -> List[List[int]]:
        def dfs(cur, path):
            if cur == len(graph) - 1:
                res.append(path)
            else:
                for i in graph[cur]:
                    dfs(i, path + [i])

        res = []
        dfs(0, [0])
        return res
Solution: Recursive One-liner
class Solution:
    def allPathsSourceTarget(self, g, cur=0):
        if cur == len(g) - 1:
            return [[len(g) - 1]]
        return [([cur] + path) for i in g[cur] for path in self.allPathsSourceTarget(g, i)]
Complexity
- Time: \(O(2^n \cdot n)\) — there can be up to \(2^{n-1}\) paths, each of length up to \(n\)
- Space: \(O(n)\) for the recursion stack, excluding the output
[997/Easy] Find the Town Judge
Problem
In a town, there are n people labeled from 1 to n. There is a rumor that one of these people is secretly the town judge. If the town judge exists, then: the town judge trusts nobody; everybody (except for the town judge) trusts the town judge; and there is exactly one person satisfying both properties.
You are given an array trust where trust[i] = [a_i, b_i], representing that the person labeled a_i trusts the person labeled b_i.
Return the label of the town judge if the town judge exists and can be identified, or return -1 otherwise.
Example 1:
Input: n = 2, trust = [[1,2]] Output: 2
- Example 2:
Input: n = 3, trust = [[1,3],[2,3]] Output: 3
- Example 3:
Input: n = 3, trust = [[1,3],[2,3],[3,1]] Output: -1
- Constraints:
1 <= n <= 1000
0 <= trust.length <= 10^4
trust[i].length == 2
All the pairs of trust are unique.
a_i != b_i
1 <= a_i, b_i <= n
- See problem on LeetCode.
Solution: Maintain a score for each person to be a town judge candidate; add/subtract one if the person is trusted/trusts; check for count
from collections import Counter

class Solution:
    def findJudge(self, n: int, trust: List[List[int]]) -> int:
        # base case: early termination
        if n == 1 and trust == []:
            return 1

        # score for each person to be a town judge candidate
        score = Counter()

        # if a person trusts another person, decrease their score by one
        # (since the town judge trusts nobody);
        # if a person is trusted, increase their score by one
        # (since everybody trusts the town judge)
        for a, b in trust:
            score[a] -= 1
            score[b] += 1

        # count number of people which trust the candidates:
        # if n-1 people trust one candidate it is the town judge
        for i in range(1, n + 1):
            if score[i] == n - 1:
                return i
        return -1
Complexity
- Time: \(O(E + n)\), where \(E\) is the number of trust pairs
- Space: \(O(n)\) for the score counter
[1791/Easy] Find Center of Star Graph
Problem
There is an undirected star graph consisting of n nodes labeled from 1 to n. A star graph is a graph where there is one center node and exactly n - 1 edges that connect the center node with every other node.
You are given a 2D integer array edges where each edges[i] = [u_i, v_i] indicates that there is an edge between the nodes u_i and v_i. Return the center of the given star graph.
Example 1:
Input: edges = [[1,2],[2,3],[4,2]] Output: 2 Explanation: As shown in the figure above, node 2 is connected to every other node, so 2 is the center.
- Example 2:
Input: edges = [[1,2],[5,1],[1,3],[1,4]] Output: 1
- Constraints:
3 <= n <= 10^5
edges.length == n - 1
edges[i].length == 2
1 <= u_i, v_i <= n
u_i != v_i
The given edges represent a valid star graph.
- See problem on LeetCode.
Solution: Check the first two edges and return the overlapping node
- The solution is based on the following points:
- The center is the only node that has more than one edge.
- The center is also connected to all other nodes.
- Any two edges must have a common node, which is the center.
- We can only check the first two edges and return the common node:
class Solution:
    def findCenter(self, edges: List[List[int]]) -> int:
        for i in edges[0]:
            if i in edges[1]:
                return i
class Solution:
    def findCenter(self, edges: List[List[int]]) -> int:
        return edges[0][0] if edges[0][0] == edges[1][0] or edges[0][0] == edges[1][1] else edges[0][1]
Complexity
- Time: \(O(1)\)
- Space: \(O(1)\)
Solution: Generalized version: find multiple centers in a multi-star graph
class Solution(object):
    def findCenter(self, edges):
        """
        :type edges: List[List[int]]
        :rtype: int
        """
        n = max(max(my_list) for my_list in edges)
        adj_list = [[] for _ in range(n)]
        for edge in edges:
            adj_list[edge[0] - 1].append(edge[1] - 1)
            adj_list[edge[1] - 1].append(edge[0] - 1)
        for i in range(len(adj_list)):
            if len(adj_list[i]) == n - 1:
                return i + 1
Complexity
- Time: \(O(n + E)\), where \(E\) is the number of edges
- Space: \(O(n + E)\) for the adjacency list
[1971/Easy] Find if Path Exists in Graph
Problem
There is a bi-directional graph with n vertices, where each vertex is labeled from 0 to n - 1 (inclusive). The edges in the graph are represented as a 2D integer array edges, where each edges[i] = [u_i, v_i] denotes a bi-directional edge between vertex u_i and vertex v_i. Every vertex pair is connected by at most one edge, and no vertex has an edge to itself.
You want to determine if there is a valid path that exists from vertex source to vertex destination.
Given edges and the integers n, source, and destination, return true if there is a valid path from source to destination, or false otherwise.
Example 1:
Input: n = 3, edges = [[0,1],[1,2],[2,0]], source = 0, destination = 2 Output: true Explanation: There are two paths from vertex 0 to vertex 2: - 0 → 1 → 2 - 0 → 2
- Example 2:
Input: n = 6, edges = [[0,1],[0,2],[3,5],[5,4],[4,3]], source = 0, destination = 5 Output: false Explanation: There is no path from vertex 0 to vertex 5.
- Constraints:
1 <= n <= 2 * 10^5
0 <= edges.length <= 2 * 10^5
edges[i].length == 2
0 <= u_i, v_i <= n - 1
u_i != v_i
0 <= source, destination <= n - 1
There are no duplicate edges.
There are no self edges.
- See problem on LeetCode.
Solution: DFS
class Solution:
    def validPath(self, n: int, edges: List[List[int]], source: int, destination: int) -> bool:
        graph = self.makeGraph(edges)
        return self.depthFirstSearch(graph, source, destination, set())

    # format: {x: [y], y: [x]}
    def makeGraph(self, edges):
        graph = {}
        for edge in edges:
            x, y = edge
            if x not in graph:
                graph[x] = []
            if y not in graph:
                graph[y] = []
            graph[x].append(y)
            graph[y].append(x)
        return graph

    def depthFirstSearch(self, graph, node, target, visited):
        # base case: reached target
        if node == target:
            return True
        # mark visited nodes
        visited.add(node)
        # .get() guards against an isolated vertex that has no edges at all
        for neighbor in graph.get(node, []):
            # don't want to visit a visited node
            if neighbor not in visited:
                if self.depthFirstSearch(graph, neighbor, target, visited):
                    return True
        return False
Complexity
- Time: \(O(V + E)\)
- Space: \(O(V + E)\) for the adjacency list, the visited set, and the recursion stack
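An alternative not covered in the notes above: for a connectivity-only query like this, a union-find (disjoint set) sketch avoids recursion-depth issues entirely (the function and variable names below are my own):

```python
def valid_path(n, edges, source, destination):
    """Union-find connectivity check: are source and destination connected?"""
    parent = list(range(n))

    def find(x):
        # path-halving find: flattens the tree as it walks up
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # union the endpoints of every edge
    for u, v in edges:
        parent[find(u)] = find(v)

    return find(source) == find(destination)

print(valid_path(3, [[0, 1], [1, 2], [2, 0]], 0, 2))                    # → True
print(valid_path(6, [[0, 1], [0, 2], [3, 5], [5, 4], [4, 3]], 0, 5))    # → False
```

With union by rank added, each operation is effectively amortized constant time, so the whole check is near-linear in V + E.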
Sorting in a grouped advanced datagrid? — Handycam, Oct 12, 2009 12:06 PM
How does one sort:
a) the grouping categories
mine are in alpha order but I need an order based on another data attribute. For example, I have data such as:
<instruction time="45 Minutes Before" order="1" txt="Heat oven to 450°F for the Bacon-Wrapped Scallops." />
<instruction time="15 Minutes Before" order="2" txt="Heat oven to 450°F for the Bacon-Wrapped Scallops." />
The grouping field is @time, but then they then sort with 15 minutes first, so I put in the "order" attribute. How do I sort by that instead, while still grouping by @time?
b) the "leaves"?
If I have a grouping collection like:
Produce
Apples
Oranges
Bananas
How do I sort the fruit names?
1. Re: Sorting in a grouped advanced datagrid? — Sameer Bhatt, Oct 13, 2009 8:31 AM (in response to Handycam)
If you set Grouping.label to the same value as the dataField of the column in which the particular Grouped data is shown, then clicking on the header will sort the whole data automatically.
Also, you can create your own compareFunctions to provide a custom sort.
-Sameer
2. Re: Sorting in a grouped advanced datagrid? — Handycam, Oct 13, 2009 8:47 AM (in response to Sameer Bhatt)
Thanks. I want the items to be sorted automatically. I am hiding the headers; I do not want to give the user sorting ability.
My main problem is that the group headers sort alphabetically; I need a way for them to appear in a specific order -- which is either unsorted and hard-coded by me or based on a field not displayed to the user.
3. Re: Sorting in a grouped advanced datagrid? — Sameer Bhatt, Oct 13, 2009 9:11 AM (in response to Handycam)
Try this -
// create a new sort with a custom compare function
var s:Sort = new Sort();
s.fields = [new SortField("name")];
s.compareFunction = compareFunc;
// assign the sort to the grid's dataProvider and call refresh
adg.dataProvider.sort = s;
adg.dataProvider.refresh();
Now, you can implement the compareFunction and provide a custom compare.
Call this code when the grouping is done and the GroupingCollection is assigned to the grid.
-Sameer
4. Re: Sorting in a grouped advanced datagrid? — Handycam, Oct 14, 2009 8:00 AM (in response to Sameer Bhatt)
This does not work...
scheduleListCollection = new XMLListCollection(scheduleList..instruction);

scheduleGrouped = new GroupingCollection2();
scheduleGrouped.source = scheduleList..instruction;
var groupingInst:Grouping = new Grouping();
groupingInst.fields = [new GroupingField("@time")];
scheduleGrouped.grouping = groupingInst;

scheduleGrid.dataProvider = scheduleGrouped;
scheduleGrid.dataProvider.refresh();

var s:Sort = new Sort();
s.fields = [new SortField("@sortOrder")];
// assign the sort to the grid's dataProvider and call refresh
scheduleGrid.dataProvider.sort = s;
scheduleGrid.dataProvider.refresh();
I get a runtime error:
TypeError: Error #1009: Cannot access a property or method of a null object reference.
at mm1/makeSchedule()[/Users/stevelombardi/Documents/WORK/FINE COOKING/MenuMaker/mm1/src/mm1.mxml:343]
at mm1/__makeScheduleBtn_click()[/Users/stevelombardi/Documents/WORK/FINE COOKING/MenuMaker/mm1/src/mm1.mxml:602]
The line that throws the error: scheduleGrid.dataProvider.sort = s;
5. Re: Sorting in a grouped advanced datagrid? — Handycam, Oct 14, 2009 8:20 AM (in response to Handycam)
I did further testing; it seems sorting the list before grouping does nothing, and once it's grouped it's being sorted by the grouping field.
What I need to do is sort by a DIFFERENT field other than the one it's grouped by.
Is that possible? Here is the data. Note the attribute "sortOrder":
<instruction time="The Day Before" txt="Make the Juniper-Ginger Butter for the roast turkey. Brine the Juniper-Ginger Butter Turkey for 4 to 6 hours, then remove from the brine and rub the juniper ginger butter under the skin. " sortOrder="04" jump="" title="Roasted Turkey with Juniper-Ginger Butter and Pan Gravy"/>
<instruction time="4 Hours Before" txt="Heat the oven to 350°F for the Juniper-Ginger Butter Turkey. " sortOrder="08" jump="" title="Roasted Turkey with Juniper-Ginger Butter and Pan Gravy"/>
<instruction time="3 Hours Before" txt="Put the Juniper-Ginger Butter Turkey in the oven to roast. " sortOrder="09" jump="" title="Roasted Turkey with Juniper-Ginger Butter and Pan Gravy"/>
<instruction time="30 Minutes Before" txt="When the Juniper-Ginger Butter Turkey is done, tent it with foil and let it rest on a carving board while you make the gravy. " sortOrder="15" jump="" title="Roasted Turkey with Juniper-Ginger Butter and Pan Gravy"/>
<instruction time="The Day Before" txt="Make the Maple-Pecan-Shallot Butter and refrigerate, covered. " sortOrder="04" jump="" title="Baked Sweet Potatoes with Maple-Pecan-Shallot Butter"/>
<instruction time="2 Hours Before" txt="Remove the Maple-Pecan-Shallot Butter from the refrigerator and let come to room temperature. " sortOrder="11" jump="" title="Baked Sweet Potatoes with Maple-Pecan-Shallot Butter"/>
<instruction time="1 Hour Before" txt="Bake the Sweet Potatoes at 425°F. " sortOrder="13" jump="" title="Baked Sweet Potatoes with Maple-Pecan-Shallot Butter"/>
When I sort this like so:
scheduleListCollection = new XMLListCollection(scheduleList..instruction);

var srt:Sort = new Sort();
srt.fields = [new SortField("@sortOrder")];
scheduleListCollection.sort = srt;
scheduleListCollection.refresh(); // this is assigned to the grid in the capture

scheduleSortedList = scheduleListCollection.source;

scheduleGrouped = new GroupingCollection2();
scheduleGrouped.source = scheduleSortedList;
var groupingInst:Grouping = new Grouping();
groupingInst.fields = [new GroupingField("@time")];
scheduleGrouped.grouping = groupingInst;
scheduleGrouped.refresh(false);
I can display it in a whole new, ungrouped data grid. This is the order I want the group labels to be in.
But I need to group it now.
6. Re: Sorting in a grouped advanced datagrid? — Sameer Bhatt, Oct 14, 2009 8:55 AM (in response to Handycam)
Here, the grouped row (data) does not contain the sortOrder field and hence while sorting, the sort will not find the sort field (sortOrder) in the data.
So, it can't sort the data.
Try this -
<fx:Script>
<![CDATA[
import mx.collections.Sort;
import mx.collections.SortField;
private var xml:XML =
protected function adg2_creationCompleteHandler(event:FlexEvent):void
{
gc2.source = xml..instruction;
gc2.refresh();
adg2.validateNow();
var s:Sort = new Sort();
s.fields = [new SortField("sortOrder")];
adg2.dataProvider.sort = s;
adg2.dataProvider.refresh();
}
private function grpObjectFunc(label:String):Object
{
switch (label)
{
case "The Day Before": return {sortOrder:04};
case "4 Hours Before": return {sortOrder:08};
case "3 Hours Before": return {sortOrder:09};
case "30 Minutes Before": return {sortOrder:15};
case "2 Hours Before": return {sortOrder:11};
case "1 Hour Before": return {sortOrder:13};
}
return {};
}
]]>
</fx:Script>
<mx:AdvancedDataGrid
<mx:dataProvider>
<mx:GroupingCollection
<mx:Grouping
<mx:GroupingField
</mx:Grouping>
</mx:GroupingCollection>
</mx:dataProvider>
<mx:columns>
<mx:AdvancedDataGridColumn
<mx:AdvancedDataGridColumn
</mx:columns>
</mx:AdvancedDataGrid>
Note that this is only one way of solving this, there can be other ways possible.
-Sameer
7. Re: Sorting in a grouped advanced datagrid? — Handycam, Oct 15, 2009 11:22 AM (in response to Sameer Bhatt)
Thanks so much, Sameer, this saved my project.
This was pretty obscure, I doubt I ever would have discovered it on my own. Thanks again for taking the time to share it. | https://forums.adobe.com/thread/505449 | CC-MAIN-2018-13 | refinedweb | 1,166 | 50.63 |
Send a request to Screen to add new buffers to a stream
#include <screen/screen.h>
int screen_create_stream_buffers(screen_stream_t stream, int count)
Function Type: Flushing Execution
This function adds buffers to a stream. Streams must have at least one buffer to be usable. After the producer creates buffers for a stream, or after it attaches buffers to a stream, it must call screen_destroy_stream_buffers() before calling screen_create_stream_buffers() again. Buffers are created with the size of SCREEN_PROPERTY_BUFFER_SIZE as set on the stream.
Before calling this function, ensure that you set the SCREEN_PROPERTY_USAGE property on the stream to indicate your intended usage of the stream's buffer(s). For example, to retrieve SCREEN_PROPERTY_POINTER from the buffer(s), you must have set the SCREEN_PROPERTY_USAGE property to include the SCREEN_USAGE_READ and/or SCREEN_USAGE_WRITE flags on the stream prior to calling screen_create_stream_buffers().
0 if successful, or -1 if an error occurred (errno is set; refer to errno.h for more details). | http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.screen/topic/screen_create_stream_buffers.html | CC-MAIN-2018-22 | refinedweb | 155 | 63.39 |
I suppose you mean "^[0-9]{5}$"
you should remove the post then. If you leave it , you are telling people you don't mind , even though you advocate it.
Why StringTokenizer? A simple split will do. Also, StringTokenizer is outdated; the Scanner has useDelimiter() for these things.
you can make a count of how many odd integers there are first. After that you can initialize the ans array with that count size. Then go through the array again and store the odd number. Or you can...
of course, but note that its only one space, (not white space, which includes tabs etc). If all the "blanks" between the words are tabs, it will not work.
you can just use splitting. eg, splitting by whitespace
String str = "hello world my name is mike";
String[] s = str.split("\\s+");
note, each individual word is stored in a String array....
remove the for loop but leave the print statement. also, why are you using parseInt() ? remove it.
I have already told you in another forum, the s array contains all the fields you want. if you want to get the month field, then use s[0]. Similarly the rest. Didn't you learn arrays yet? the for...
the String class has methods to check for strings and substrings. Strings.contains(), String.matches(), etc....
"?" is special to regular expression. try escaping it. "\\?"
try
pdphrase = pdphrase.replaceAll("a", "00");
well, once you can work with getting the text from 1 url, you can parse the text, search for further links, and then do a url connection to get contents from those links found. you have to do some...
you can see an example here
how did you define dir variable? try also to remove the "static" keyword.
the requirement should have digits and letters ( as in OP's post). what if the result of randomly choosing from all letters(upper+lower) and numbers ended up in all letters? (or all digits? ).
i would rather use Scanner class for your input. Also, you could probably do a shuffle() method (or use the shuffle() method from Collections class) so that your numbers and letters are jumbled up...
I have already replied to you in that thread, so am not going to repeat here.
it takes 2 hands to clap remember?
don't take it seriously
Its 50 - 50. Giving examples for them to have a start is also all right. So there is no reason for you to stop. Even if you do...
the requirement also said nothing about anything else. Its left for us to interpret. You are a super genius to only stop at one level. However, I went a bit far ahead to suggest he keeps a score of...
Sometimes regex is not the right tool for the job. Where does you strings come from ? a text file?
here's an example
public class ReadWriteFile {
public static void main(String[] args) throws FileNotFoundException, IOException{
Formatter output = new Formatter(args[1]);
...
you declared quiz to be 9 elements
SimpleQuestion[] quiz = new SimpleQuestion[9];
but you only have 3 quizzes that have values. That's why you have the error. either initialize your quiz...
you can also try WMI using Java.
you can see an example of how its done using vbscript. Then adapt the WMI code portion with Java.
what exactly did you don't get? I provided examples of where you can put your handlers in your code.
here's an example for you
if ( myVariable >= 20 && myVariable <=30 ){
System.out.println("you failed" );
}else if ( .......... ){
...........
} | http://www.javaprogrammingforums.com/search.php?s=c068cf0ccc32441fc023fbad8e208ce9&searchid=783646 | CC-MAIN-2014-10 | refinedweb | 600 | 78.25 |
Save external data in rails db, data structures
Hello,
I need some help/advise on how to save third data in my database, also I'm interested in any tutorial or info about these topic.
I'm struggling to save data from third party api.
I get the result and I can display the result on my app. When I want to save data in my DB it saves only the id and the zone attribute.
Here is the response body
{ "from": 1, "to": 100, "total": 4490, "auditData": { "processTime": "157", "timestamp": "2016-05-08 09:58:05.163", "requestHost": "", "serverId": "", "environment": "[int]", "release": "" }, "destinations": [{ "code": "A1N", "name": { "content": "Ansan" }, "countryCode": "KR", "isoCode": "KR", "zones": [{ "zoneCode": 1, "name": "Ansan downtown" }], "groupZones": [] }, }
In my Destination model where I have the destinations_table
attr_accessor :destination_id, :code, :country_code, :name, :zones
def self.set_destinations url ="" response = HTTParty.get(url, :query => { "fields" => 'code,name,countryCode,zones', "language" => "ENG", "from" => "1", "to" => "5", "useSecondaryLanguage" => "false" }, :headers => add_signature_for_hotels ) result = JSON.parse(response.body) result["destinations"].each do |value| Destination.create!("destination_id" => value["id"], "code" => value["code"], "name" => value["name"], "country_code" => value["countryCode"], "zone" => value["zones"][0]["name"] ) end end
If I try in my console result[0]["code"] it gives my the result "A1N" but if I put in my method self.set_destinations it gives me an error. undefined method `[]' for nil:NilClass
At this moment only save the destination.id = 1 and zones["name"] = Ansan downtown, how can I save the data for destination_id, code, name, and country_code.
These is how looks my table after a request:
1 | null | null | null | null | Ansan downtown | 2016-05-07 20:19:34.546309 | 2016-05-07 20:19:34.546309
How can I save all the data in my DB?
Thanks,
Cata | https://gorails.com/forum/save-external-data-in-rails-db-data-structures | CC-MAIN-2021-04 | refinedweb | 292 | 55.74 |
Hello all,
I recently switched to IDEA (from Eclipse) and am quite happy with the IDE and Scala plugin. There is one annoying thing however:
I prefer putting opening braces on the next line (I know it is non-idiomatic). After typing the opening-brace and pressing enter the editor autmatically inserts the closing brace. So far so good.
But often the editor will also indent the pair of braces instead of lining them up with the previous line. In fact, I think that every time the previous line does not end in a closing bracket the braces are indented. This is NEVER what I want but I can not find any setting that prevents this behavior.
Please tell me something can be done about this...
Hello all,
Please, provide a code snippet with the explanation of how to reproduce the misbehavior (I've tried hard, but everything seems to be OK).
Check "Settings / Code Style / Wrapping and Braces / Braces placement" section.
Hello Pavel,
Try entering this:
val n = Seq(1,2,3)
n.foreach
{
x =>
{
println(x)
}
}
I know it is a completely bloated example but even only entering "n.foreach<CR>{" produces the unexpected indent of the open brace.
I have set all braces to "next line" and have even tried setting the continuation indent to 0 but this setting does not seem to have any influence on my autoindenting at all.
It appears other people have run into this issue as well. Disturbingly seems to suggest Alexander thinks this issue has been fixed, which is not true at all.
I tried all possible values for those settings but the behavior stays the same. The resulting layout (sometimes one indent level too many, usually two levels) makes no sense at all, regardless what settings you use.
I am using tabs for indenting.
Silvio
I added fix for your case. Please try next nightly build. If something more is wrong please report it. ("Brace Placement -> Other -> Next Line" should work now) Thenk you for your example.
Best regards,
Alexander Podkhalyuzin.
Hello Alexander,
I just installed the latest build and can confirm that the problem has been fixed.
Thank you very much.
Cheers,
Silvio
Thanks for fixing that :) I really like it this way …
def allFine() =
{
"foo"
}
However it fails for def when followed by ensuring:
def notFine() =
{
"bar"
} ensuring (_ != "foo")
Any idea how to get the rid of the indentation in this case?
addition: reported … | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206640565-Braces-placement | CC-MAIN-2020-34 | refinedweb | 407 | 73.98 |
as an API, I decided to write an article to help others to simply deploy their model. I hope it helps:)
In this article, we are going to use simple linear regression algorithm with scikit-learn for simplicity, we will use Flask as it is a very light web framework. We will create three files,
- model.py
- server.py
- request.py
In a model.py file, we will develop and train our model, in a server.py, we will code to handle POST requests and return the results and finally in the request.py, we will send requests with the features to the server and receive the results.
Let’s begin the coding part
- model.py
As I mentioned above, in this file we will develop our ML model and train it. We will predict the salary of an employee based on his/her experience in the field. You can find the dataset here.
import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression import pickle import requests import json
Importing the libraries that we are going to use to develop our model. numpyand pandas to manipulate the matrices and data respectively, sklearn.model_selection for splitting data into train and test set and sklearn.linear_model to train our model using LinearRegression. pickle to save our trained model to the disk, requests to send requests to the server and jsonto print the result in our terminal.
dataset = pd.read_csv('Salary_Data.csv') X = dataset.iloc[:, :-1].values y = dataset.iloc[:, 1].values
We have imported the dataset using pandas and separated the features and label from the dataset.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.33, random_state = 0)
In this section, we have split our data into train and test size of 0.67 and 0.33 respectively using train_test_split from sklearn.
regressor = LinearRegression() regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)
The object is instantiated as a regressor of the class LinearRegression() and trained using X_train and y_train. Latter the predicted results are stored in the y_pred.
pickle.dump(regressor, open('model.pkl','wb'))
We will save our trained model to the disk using the pickle library. Pickle is used to serializing and de-serializing a Python object structure. In which python object is converted into the byte stream. dump() method dumps the object into the file specified in the arguments.
In our case, we want to save our model so that it can be used by the server. So we will save our object regressor to the file named model.pkl.
We can again load the model by the following method,
model = pickle.load(open('model.pkl','rb')) print(model.predict([[1.8]]))
pickle.load() method loads the method and saves the deserialized bytes to model. Predictions can be done using model.predict().
For example, we can predict the salary of the employee who has experience of 1.8 years.
Here, our model.py is ready to train and save the model. The whole code of model.py is as follows.
# Importing the libraries import numpy as np import pandas as pd from sklearn.model_selection import train_test_split from sklearn.linear_model import LinearRegression import pickle import requests import json
# Importing the dataset dataset = pd.read_csv('Salary_Data.csv') X = dataset.iloc[:, :-1].values y = dataset.iloc[:, 1].values
# Splitting the dataset into the Training set and Test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1/3, random_state = 0)
# Fitting Simple Linear Regression to the Training set regressor = LinearRegression() regressor.fit(X_train, y_train)
# Predicting the Test set results y_pred = regressor.predict(X_test)
# Saving model to disk pickle.dump(regressor, open('model.pkl','wb'))
# Loading model to compare the results model = pickle.load(open('model.pkl','rb')) print(model.predict([[1.8]]))
2. server.py
In this file, we will use the flask web framework to handle the POST requests that we will get from the request.py.
Importing the methods and libraries that we are going to use in the code.
import numpy as np from flask import Flask, request, jsonify import pickle
Here we have imported numpy to create the array of requested data, pickle to load our trained model to predict.
In the following section of the code, we have created the instance of the Flask() and loaded the model into the model.
app = Flask(__name__)
model = pickle.load(open('model.pkl','rb'))
Here, we have bounded /api with the method predict(). In which predict method gets the data from the json passed by the requestor. model.predict() method takes input from the json and converts it into 2D numpy array the results are stored into the variable named output and we return this variable after converting it into the json object using flasks jsonify() method.
@app.route('/api',methods=['POST']) def predict(): data = request.get_json(force=True) prediction = model.predict([[np.array(data['exp'])]]) output = prediction[0] return jsonify(output)
Finally, we will run our server by following code section. Here I have used port 5000 and have set debug=True since if we get any error we can debug it and solve it.
if __name__ == '__main__': app.run(port=5000, debug=True)
Here, our server is ready to serve the requests. Here is the whole code of the server.py.
# Import libraries import numpy as np from flask import Flask, request, jsonify import pickle
app = Flask(__name__)
# Load the model model = pickle.load(open('model.pkl','rb'))
@app.route('/api',methods=['POST']) def predict(): # Get the data from the POST request. data = request.get_json(force=True)
# Make prediction using model loaded from disk as per the data. prediction = model.predict([[np.array(data['exp'])]])
# Take the first value of prediction output = prediction[0]
return jsonify(output)
if __name__ == '__main__': app.run(port=5000, debug=True)
3. request.py
As I mentioned earlier that request.py is going to request the server for the predictions.
Here is the whole code to make a request to the server.
import requests
url = ''
r = requests.post(url,json={'exp':1.8,}) print(r.json())
We have used requests library to make post requests. requests.post() takes URL and the data to be passed in the POST request and the returned results from the servers are stored into the variable r and printed by r.json().
Conclusion
We have created three files model.py, server.py and request.py to train and save a model, to handle the request, to make a request to the server respectively.
After coding all of these files, the sequence to execute the files should be model.py, server.py(in separate terminal) and at the end request.py.
You can compare the results of prediction with a model.py as we printing the result at the end of the file.
You can find all the coding in my Github repository, flask-salary-predictor.
Source: hackernoon | https://learningactors.com/deploy-a-machine-learning-model-using-flask/ | CC-MAIN-2021-31 | refinedweb | 1,149 | 59.7 |
SAN FRANCISCO—"We pretty much emptied the bank account into refunds," HashFast CEO Eduardo deCastro admitted.
But as far as many customers are concerned, those refunds are not happening fast enough. Delayed orders led to escalating frustrations, which in some instances turned into lawsuits and arbitration cases, with likely more on the way.
Ars’ recent story chronicled the five arbitration cases and two lawsuits that HashFast has pending against it. Many customers have accused the firm of outright fraud, and some are upset that when the company failed to fulfill its orders, it refused to refund the amount in bitcoins as it had previously promised.
Last Thursday, HashFast hired a new CFO, fired half of its staff, and decided that it’s now only going to sell ASIC Bitcoin-mining chips rather than make or sell complete boards.
Up until that decision, the startup had been manufacturing and selling specialized boards equipped with chips designed specifically to compute hashes in the Bitcoin blockchain as a way to generate, or “mine,” new bitcoins. HashFast's original “Baby Jet” machine (a fully assembled box) was designed to perform at 400 gigahashes per second (GH/s).
At the Friday meeting, deCastro talked up the company’s sole product at the moment: the “Golden Nonce” chips, which Simon Barber, the company’s CTO, claims can reach as high as 800 GH/s given “specialist cooling” conditions. The CEO now wants the company to become the “Intel of the ASIC world.”
"The only thing that is holding us back is that we are as poor as church mice,” deCastro continued. “We are cash poor and inventory rich. We have lots of inventory and no cash."
Joe Russell, the senior accounts manager, later told Ars that the company has a current stock of “tens of thousands” of chips on hand.
The company’s new CFO, Monica Hushen—who mostly stayed quiet during the hour-long interview—noted that this narrow focus may not be permanent.
“That's not to say that our strategy might change in the future, but it’s the only one that makes sense right now,” she said. “We're being very pragmatic with respect to the customer.”
HashFast’s struggles add to the ever-increasing list of legal cases involving apparent Bitcoin-fueled fraud: The Bitcoin Savings and Trust hedge fund collapse; the high-profile Silk Road takedown; and most recently, the implosion of Mt. Gox, once the currency's largest exchange.
Later on Friday, HashFast announced a partnership with Pepper Mining, a company that is selling a $1,100 “Habanero” mining board designed to work with the Golden Nonce chip.
Strangely, Pepper Mining’s website lacks any identifiable contact information, as well as any clear details as to who is behind the company. Its three staffers are only named as “Mr. Teal,” “Chipgeek,” and “Gateway”—their pictures are South Park-style avatars.
Russell also told Ars that Pepper Mining is based in Novato, California, about 30 miles north of San Francisco. But the company website lists no address or phone number, its whois data is obscured, and a search of California business records turns up no company by that name. Mr. Teal’s bio says that he is an “Electrical Engineer based in Canada.”
HashFast’s new relationship with a company that seemingly wants to obscure its origins does not exactly engender confidence.
Hindsight is 20/20
So what were HashFast's key mistakes? The first, both the CEO and CTO agreed, was hiring a board designer who was less than satisfactory.
“We just hired a contractor we shouldn't have hired,” deCastro said. “The [silicon] wafers came in the tail end of October 2013. Should everything else have been ready, we could have [shipped on time]. We could have hit the beginning of October, if we had a board to land on, but we didn't. We aced the difficult part of the test.”
In an attempt to stop the bleeding, HashFast temporarily halted all sales at the end of December 2013. As customer exasperation mounted and the lawsuits and arbitration cases began, deCastro said that the firm’s lawyers advised them not to say anything publicly.
“Both were strategic mistakes—stopping selling and not talking,” he said.
“We should have been communicating this a heck of a lot more—this kind of info should have gone out. We erred on the side of listening to our counsel. We do [still listen] but we are listening less to them.”
In California, HashFast is represented by Jeremy James Frederick Gray of Zuber Lawler & Del Duca LLP and in Texas it's represented by John D Penn of Perkins Coie LLP. Neither attorney has responded to Ars’ repeated requests for comment.
On top of the company's own self-inflicted wounds, there were two other events that threw HashFast for a loop.
In Fall 2013, HashFast had a customer who had placed a “huge order”—they declined to name the client nor the total purchase price—but that customer backed out. Why? HashFast apparently couldn’t order a large enough quantity of the component parts to fulfill the order, and the big-spending customer withdrew.
“Our whole financial plan was [now] based on how the money was going to come in,” Barber said.
On top of it all, there was an act of God, too. deCastro experienced a massive electrical fire in his house due to a faulty electrical device charger.
“I had burns, smoke inhalation, and everything I owned burned,” he said, lamenting the fact that one of his dogs died in the blaze. One of his housemates was apparently in the hospital for three months as a result. “I barely managed to get out.”
With deCastro out of commission for a little while, Barber said he was put in charge of “business stuff,” while his normal area of expertise is engineering.
“This distracted me, I should have been supervising these [board] contractors more closely,” he admitted.
Defending lawsuits costs money
Company executives declined to provide any clear indication as to how many individual refunds have been issued, nor the total value of those refunds. Some customers have accepted refunds in US dollars, and others have accepted a “conversion” of their order into pure chips totaling more hashing power than what they had originally ordered.
One of the primary sticking points among angry customers is the fact that in August 2013, CTO Simon Barber stated publicly on the BitcoinTalk.org forum: “Orders are taken in BTC, in the unlikely event we get to refunds they will be given in BTC.”
The company further said it would ship its Baby Jet ASIC Bitcoin miner by October 2013—which was then delayed until January 2014. By late April 2014, Barber, along with the company’s CEO Eduardo deCastro, issued an “apology to our customers.”
Now, though, the company says that what it really meant was that while it accepted payment in bitcoins, it immediately converted those amounts to US dollars. After all, its products were priced in US dollars, not bitcoins. Therefore, company execs argue, refunds should only be issued based on the dollar amount paid at the time, or its present-day bitcoin equivalent—and not the number of bitcoins originally paid.
As more time goes by, bitcoins become harder to mine, which means that miners are worth less and less. With bitcoins trading currently at around $450 (down from a high of nearly $1,200 in December 2013), HashFast’s customers are clearly losing money. Were customers to be refunded in the amount of bitcoins they originally paid, they would have easily quadrupled their initial investment.
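The denomination dispute is ultimately simple arithmetic. The sketch below uses assumed round numbers, not figures confirmed by HashFast or its customers (the article confirms only the roughly $450 current price and the near-$1,200 December high):

```python
# Illustrating the refund-denomination dispute with assumed round numbers.
usd_per_btc_at_order = 110.0   # assumed USD/BTC around August 2013 (hypothetical)
usd_per_btc_now = 450.0        # approximate price cited in the article
order_price_usd = 6_490.0      # hypothetical USD sticker price of a miner

# The customer paid the USD sticker price, denominated in bitcoins:
btc_paid = order_price_usd / usd_per_btc_at_order       # 59.0 BTC

# HashFast's position: refund the USD amount, or its present-day BTC equivalent.
refund_usd_basis = order_price_usd                      # $6,490
refund_btc_equivalent = order_price_usd / usd_per_btc_now   # ~14.4 BTC

# Customers' position: refund the original bitcoin count, now worth far more.
refund_btc_basis_usd = btc_paid * usd_per_btc_now       # $26,550, roughly 4x the order

print(f"Paid: {btc_paid:.1f} BTC (${order_price_usd:,.0f})")
print(f"USD-basis refund: ${refund_usd_basis:,.0f} (~{refund_btc_equivalent:.1f} BTC today)")
print(f"BTC-basis refund: ${refund_btc_basis_usd:,.0f}")
```

With these assumptions, a refund in the original bitcoins would be worth about four times the dollars paid, which is why the dispute only flared up once the price rose.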
In fact, Bitcoin’s ever-increasing mining difficulty is exactly why the company says that it needed to take pre-orders in the first place—to garner enough cash to pay its suppliers, and itself.
“If we had spent many months rescheduling to get venture capital [the business] wouldn’t have worked,” Barber said later, during an office tour.
The chief executive also said that early on, HashFast refunded customers some amount of US dollars once it found that it could reduce shipping costs.
“We weren’t profiting in any way,” he said. “Every last one was in US dollars, nobody said boo. All those folks had nothing to say about this when we refunded them in US dollars. All of this only started when Bitcoin went up. It’s going to cost those people money, it’s costing us money and attention and it’s distracting us from our business.”
Books stay closed
Still, the company says it wants to do right by its customers.
“We’ve been going through the list of people who are undelivered,” Simon Barber, the company’s CTO, said. “We have lots of chips on hand. If you want us to deliver something, we can deliver for you significantly more hashing power than you originally signed up to purchase, in the form of chips. It’s a deal almost everyone takes.”
“We’re working through [refunds] as soon as we get them in,” deCastro said. “We get profits, we process them as refunds.”
Barber chimed in: “Some money goes into producing stuff, and some of the cash flow goes into issuing refunds.”
“And legal issues,” deCastro noted dryly.
As far as refunds are concerned, the company says that it’s well on its way to finally resolving all of the lingering order and refund issues.
“We sold in multiple batches,” deCastro explained. “Batch 1 is complete except for a few refunds pending and the ones who are suing us.”
Amy Abascal, the company’s director of marketing, who also attended the Friday meeting, said that Batch 2 shipments were complete except for an “upgrade kit for the Baby Jet which there are still some outstanding.”
Of course, such statements remain impossible to verify. Similarly, it’s impossible to know precisely what financial shape the company is in, nor how the company has been structured, nor who its shareholders or investors are.
"We're not going to deal with financials—it's not appropriate,” CFO Monica Hushen said during the meeting. “We can't comment on corporate structure.”
She did say, though, that the company’s intellectual property was held by a Delaware corporation, HashFast LLC, which in turn owns HashFast Technologies, a California corporation—lawsuits have been filed against both entities.
So much for “conversions”
These “conversions” of customers being compensated in chips (as a way to mitigate the order delays) seem very similar to the company’s previous “Miner Protection Program” (MPP), which was initially given as a gift to early customers, and then later was offered for purchase.
Abascal told Ars later by e-mail that these two setups—the original MPP, and the new conversion program—“are different things.”
HashFast describes the MPP program this way:
At HashFast, we understand that healthy, prosperous customers make for a healthy and prosperous company. We know that our customers are concerned about the rapid growth of the network hashrate – and we stand by our customers. We designed our silicon so efficiently per square mm, that we are able offer you this protection. If the Bitcoin network hashrate increases so that your Baby Jet doesn’t generate more Bitcoins in ninety days than you paid for it, HashFast will give you additional ASICs. In fact, we will give you up to 400% more hashing capacity than the Baby Jet you purchased. Yes, that does mean that if you don’t make your money back in 90 days, we will increase your mining capacity to up to 2 Terahashes!
One customer, Edward Hammond, wrote to Ars to say that he was part of the Batch 1 purchase group and was in on the MPP promise.
“In other words, for every Batch 1 Baby Jet shipped (assuming the customer didn't cancel or sign a release form), Hashfast is overdue providing an additional 1.6 terahashes (the MPP calculation period ended at the end of January),” he told Ars. “This also applies to some Batch 2 customers, those that paid for the add-on. Personally, my order was for one Batch 1 BabyJet (which arrived way late). I am still waiting for the MPP on that.”
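Hammond's 1.6-terahash figure is just the MPP's "400% more" promise applied to the Baby Jet's advertised rate. A quick sketch (the 400 GH/s and 400 percent figures come from the article; the arithmetic is illustrative):

```python
# Reconciling the MPP terms with Hammond's figure.
baby_jet_ghs = 400.0        # advertised Baby Jet rate, in GH/s
mpp_bonus_multiple = 4.0    # "up to 400% more hashing capacity"

bonus_ghs = baby_jet_ghs * mpp_bonus_multiple   # 1,600 GH/s: the 1.6 TH Hammond cites
total_ghs = baby_jet_ghs + bonus_ghs            # 2,000 GH/s: the "2 Terahashes" in the MPP copy

print(f"MPP bonus: {bonus_ghs / 1000:.1f} TH/s; total capacity: {total_ghs / 1000:.1f} TH/s")
```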
Hammond also told Ars that he ordered a Baby Jet upgrade card, which has still not arrived yet.
“I tried to refund the upgrade card, which at this point is a big money loser for customers (it cost $1500 and will never remotely pay for itself), but Hashfast insisted that I sign a release that would have let them off the hook for ‘any and all claims’ against them, i.e. including the obligation to provide the MPP. So I refused their release. They didn't refund.”
Another customer, Dan McArdle, relayed a similar story by e-mail.
He said that he was HashFast’s eighth customer ever, placing an order for a Baby Jet on August 8, 2013 for the price of around 59 bitcoins. Again, as more time goes by, bitcoins become harder to mine, which means that miners are worth less and less.
“In January 2014, with still no delivery, and having gone past their absolute drop-dead delivery timeframe of December 31, 2013, I started thinking about a refund,” McArdle told Ars. “Obviously I wanted my bitcoin refunded (worth about $50,000 at the time). HF stated they would only issue dollar refunds and that their prior statements about ‘full refunds in bitcoin’ just meant the dollar equivalent purchase price converted to bitcoin and refunded as bitcoin. Whatever the original intent, they had a huge communication failure, plus their horrible customer service, lack of any helpful information, excessive delays, and continual touting of their ‘amazing’ products, were simply disrespectful and infuriating.”
McArdle said he repeatedly e-mailed HashFast executives, including CTO Simon Barber, also about his MPP delivery.
“They did, however, leak a bunch of info to me during these e-mail exchanges (misdirected emails and support ticket responses). This included Peter Morici's FedEx shipping info (ie, name/phone/home-address) as well as engineering info about how HashFast was having difficulty getting their first units working, as well as assembly line issues.”
Following similar frustrations, Morici filed his lawsuit against HashFast in federal court in San Francisco in January 2014.
“So obviously that spooked me and I requested a refund, formally, with a certified letter, and an e-mail,” McArdle continued. “They confirmed receipt of my refund request via e-mail. A few days later, now early February (while I was traveling), I got a FedEx notification for a package from HashFast. Obviously this was unexpected, given my confirmed refund request, but the geek in me couldn't resist, so I accepted the shipment instead of refusing the package to keep my refund claim.”
“I was the 8th order, so this was a very early Baby Jet. It came pretty beat-up, broken chassis, screws completely loose and rattling around in the case. But it worked. It barely hit advertised specs, but it's still running to this day.”
“I'm still waiting for delivery of the ‘protection plan’ (MPP) chips. They should've shipped in early February, and they devalue by 50 percent every month, given the hashrate increase of the network as a whole. Bottom line: I'm now looking at a [return on investment] loss even in dollar terms, and I was one of the lucky ones who actually received a working unit relatively early. Every Hashfast customer is essentially a victim of their over-confident promises, lack of communication, and ultimate disrespect.”
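McArdle's "devalue by 50 percent every month" describes an exponential decay: each month of delay halves the expected mining return of the same hardware. A minimal sketch (the halving rate is his estimate, not a verified figure):

```python
# Expected mining return of delayed hardware, if it halves with each month of delay.
def relative_return(months_late: float, monthly_retention: float = 0.5) -> float:
    """Fraction of the on-time mining return left after a delivery delay."""
    return monthly_retention ** months_late

for months in range(5):
    print(f"{months} month(s) late: {relative_return(months):.1%} of the on-time return")
# By this estimate, a three-month slip leaves only 12.5% of the original return.
```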
When Ars relayed some of these comments to HashFast late Friday afternoon, Amy Abascal, the chief marketing officer, did not have an immediate response.
48 Reader Comments
These mining companies should really find another way to finance themselves: get investors, build up stock or at least confirm that you have manufacturing capacity on working units, and THEN start accepting money. I have no doubt that these guys are going to be out of business in another couple months, and it sounds like Butterfly Labs is quickly spiraling into oblivion for trying to run their business the same way.
Some of these companies have handled the situation well - KnC had good communication with their customers and the turnaround to send defective boards back for replacement was a little over a week (from North America to Sweden and back) while Black Arrow has offered significant compensation for missed deadlines. Cointerra's been problematic though, offering essentially no compensation to customers and being extremely difficult to contact (Ars should look into them next...).
It's amazing that many of these companies have been able to ship products at all - most of them didn't even exist a year ago, and now they're shipping high-performance custom-designed mining hardware all over the world - that's astounding. But the customer service end of the equation has definitely been lacking in many cases.
Butterfly Labs are scamming scumbags, but I think most other ASIC companies are legitimately trying, and I feel bad for both the customers and the engineers and other staff involved.
This sort of leads to my core concern with these altcoin miner-producing startups: If these ASICs really do ultimately make you more money than they cost, why are they selling them rather than just using them themselves?
With an extra layer because they're supplying the tools to the miners, versus mining themselves. During the Gold Rush the suppliers and outfitters were the ones making money, not so much the miners.
So much for history repeating itself here.
Especially the ones who think they need to waste most of their funds on office space in SF.
For the record, I called them scamming bastards weeks ago, when there was another article (I think on ButterflyLabs)... still later than a whole bunch of Hashfast's customers calling them scamming bastards on facebook, twitter, youtube and everywhere else though.
Sorry bud, but that's on you. Hashfast owes me 12k, but I'm not horribly upset, because I knew it was a huge gamble from the outset. If you take your life savings to Vegas and lose it, you can be angry at the casino but ultimately you're to blame.
That's not to say that I don't want my money back. I just don't think investing in miners from an unknown company is a great thing to do with money you need.
Because you can't pay rent or taxes in bitcoins
I do wonder why they don't just mine with their chips. The gold rush analogy is poor because in the gold rush, you had to find the gold, then mine it. With bitcoins, you just do the mining.
BTW, can I have my ARS tin foil hat back?
This woman is the death of hardware personified.
Her career is amazing. She was with Apple during the time when Apple was for all intents and purposes dead, then she went to Iomega right when Zip drives fell out of favor. Afterwards she went to some B2C solution provider not even Wikipedia remembers that was promptly bought and killed by eBay, then she went to work for the smoldering almost-corpse of Palm Inc, which HP finally axe murdered. Noticing a trend she had a brief stint at ECS Refining, which is a recycling company for dead hardware.
And now she's with Hashfast, a hardware "company" floundering dead in the water before it sold its first product.
This sort of leads to my core concern with these altcoin miner-producing startups: If these ASICs really do ultimately make you more money than they cost, why are they selling them rather than just using them themselves?
Ars addressed this in their original stories about Butterfly Labs - Bitcoin/altcoin prices are volatile, and it costs a lot of money to make the hardware in the first place. You might make more money mining yourself, but you might not - making the hardware and selling it is lower risk than mining.
Lots of mining companies do both - KnC for example keeps 5% of the hardware they manufacture for their own mining operation. One of the things Butterfly Labs allegedly did that got a lot of people in the community so angry was hold on to customer hardware (that was six months or a year late) to mine with it for a couple of months, before shipping to the customer.
Isn't that how Tesla financed itself for years? It works if your product is successful. It is moving some of the risk of a startup from investors to buyers.
I don't think it is necessarily a bad thing. Most Kickstarter campaigns work that way. Motivated customers are sometimes in a better position to decide which technology should be funded than investors. If they are willing to take the risks without the potential rewards that investors could get, then it must be really good.
But, as with all startups, there is an enormous risk of failure. The initial build can go bad, the technology can become out of favor. funding can be insufficient. In fact it sounds like all these things happened here.
So in other words, it hits advertised specs?
This sort of leads to my core concern with these altcoin miner-producing startups: If these ASICs really do ultimately make you more money than they cost, why are they selling them rather than just using them themselves?
I've replied to this at length a couple times on previous stories, but it's perfectly rational to sell the miner rather than "just turn it on." Even ignoring the idea of acceptable risk for return (mining, even if profitable, may be riskier), there are several other factors that make selling equipment a better idea than running it yourself.
First and foremost is that there is potentially a lot of capital involved in running it. For you or I to turn a miner on in our basement, it's basically just a free power outlet and some internet bandwidth. That's because, for a single miner or a couple miners, we already have the infrastructure in place to run them. But to run, say, 10,000 of them? That requires real space, warehouse-sized space. It probably requires more than a residential internet connection. It requires power, significant amounts, enough that you probably won't be paying residential prices. It requires staff, because while these things may run mostly autonomously if you have 10,000 of them even occasional issues require full-time staff to keep resolved.
Basically, for an individual the only capital outlay involved in mining is the cost of the miner, because the rest of it is already taken care of...you already have a house, power, internet, and time. For a farm running thousands of them, there are huge capital outlays required. That just adds to the aforementioned risk.
That isn't to say these things aren't a scam, or that mining bitcoins is a good idea. Just that the very idea of selling the miner rather than using it isn't ludicrous. You have to make thousands of them to bring the cost of one miner down to a reasonable level. But if you decide to run thousands yourself, the risk is huge.
So in other words, it hits advertised specs?
Kinda like what do you call a guy who barely passes med school, eh?
EDIT: In case it's unclear, I'm agreeing with you.
Every person who ever wants to work with bitcoin should first stop and study economic/monetary policy history and why virtually every country/central bank in the world adopts an expansionary (i.e. inflationary) monetary policy under normal circumstances. When you have an effective finite hard cap on the amount of your currency, you are going to create *immense* hoarding pressure. If the dollar was capped at X amount in the entire world, all other things being equal (economic growth, growing population, etc) a dollar today is going to be worth less than a dollar tomorrow (this is the opposite of what is normal - a dollar today is worth ever so slightly more than a dollar tomorrow; see also: all old people complaints about how expensive things are these days). Under a finite-money-supply system, you start to incentivize people to hold onto money instead of using it, which further contracts the money supply (since this is effectively the same as increasing the reserve rate, except people are doing it out of prisoner's-dilemma-rational-calculation instead of by a central bank's fiat). This creates a feedback loop that basically undermines and self-destroys a currency's ability to be a currency: the lack of liquidity.
In other words, knowing what HashFast does now (assuming bitcoin doesn't completely collapse in value) - do you think HashFast (or other companies) will ever implicitly offer a bitcoin-for-bitcoin refund policy? No way - they (and other similar companies) will want to adopt the USD-equivalent for bitcoin-then and bitcoin-now policy. And that right there should be a major negative signal to anyone: for these kinds of transactions bitcoin is just a worse version of the dollar than a currency in and of itself.
Last edited by thelee on Sun May 11, 2014 10:01 pm
I've been saying this since BTC became popular. It's a libertarian's dream currency, where FIAT is to blame for all the world's ills & oh how they wish to go back to the gold standard!
Just because the phony bitcoin "currency" went up in unit value doesn't mean your $1500 purchase would be refunded based on the increased unit price
Everyone demanding that is as much a stupid greedy bastard as the HashFast schemers likely are
Just because the phony bitcoin "currency" went up in unit value doesn't mean your $1500 purchase would be refunded based on the increased unit price
Everyone demanding that is as much a stupid greedy bastard as the HashFast schemers likely are
The fact that you yourself are converting bitcoin to USD demonstrates bitcoin's weakness as a currency.
Put another way, the dollar is constantly changing its value against other currencies like the Euro, the Yen, etc. If you bought N product for $100, is there any company in the *world* that issue you a refund in terms of what those dollars are worth in another currency? No way - you'll get $100 back.
If bitcoin were an actual currency and not a speculative (virtual) asset, then there'd be no problem - you spent 59 bitcoins, you get refunded 59 bitcoins. But because bitcoin is designed poorly, with immense hoarding/deflationary pressure in the long run, no one will ever want to do that - because those 59 future bitcoins are worth more than the 59 current bitcoins.
BTC worth peaked in December at around $1,100, but it sounds like all (or nearly all) of the orders were placed a year or more ago. Last year's pattern before December, based on this graph:
Jan - March 2013 — gradual rise to $100
April - October 2013 — $100-200
November 2013 — $400-500
Since we're in November '13 levels of worth right now, it should be a while before a BTC-for-BTC refund wouldn't substantially benefit anyone that made the purchase before November.
Either the sellers are selling things that would make them more money if they kept them and used them, or the buyers are buying things that will never make them the money back they paid for them.
It's stark-plain stupid.
Exchange rate risk.
If I had a $1,000 machine that printed 100 Euros a month, I wouldn't sell it -- I'd simply buy EURUSD futures and retire.
People are too accustomed to the established world of precious metals and currencies, where there are large, liquid, and reliable markets for derivatives and options. In a way, that's what these ASIC companies are functioning as: nascent markets for a very specific+narrow (and slightly perverse) form of cryptocurrency futures contract. A miner is a future that isn't subject to credit risk after delivery.
Logical conclusion: They're highly incompetent crooks.
This sort of leads to my core concern with these altcoin miner-producing startups: If these ASICs really do ultimately make you more money than they cost, why are they selling them rather than just using them themselves?
I actually wouldn't be surprised to see this become the next big thing in hashing: hosted mining, instead of buying hardware that ships to your house. The advantages in being able to design boards that host tens to hundreds of chips per unit of rack space, and the power efficiencies involved in moving cooling and power out of the compute chassis are compelling. So instead of buying a miner, maybe you buy shares of the company's output, or the output of 1U, or more, of rack minus power & cooling fees.
The fun part is that this is essentially indistinguishable from a Ponzi scheme.
Because the machines are owed to the people that paid for them.
Also because running enough machines (assuming they can be run profitably) to pay back all pre-orders would require capital outlays they likely can't afford. Can't run 1,000 miners in your bedroom, man..
Hell, as a creative type (development really, but close enough), I want someone like a Jobs around, because I don't want to do the business side of things. I want to do the stuff I like, which is development.
Don't get me wrong, I want to be involved with business stuff; I don't want to just lock myself in a cave and develop. But I don't want my time monopolized by business stuff, which is what tends to happen.!
All most companies will care about is profit, true. They'll care about customers only insofar as it's necessary to maintain a revenue stream, to maintain profit. For a company going down in flames, that's no longer a concern.
This is why buying items on pre-order that aren't actually completely developed is risky business. I would generally void doing so, though admittedly I have done so once. Though back then I used a credit card, because I knew that worst case I could just work with my card issue to get a chargeback done if necessary if the company failed to deliver. That obviously doesn't work if you pay in bitcoins. Maybe some lessons in there, perhaps?
This sort of leads to my core concern with these altcoin miner-producing startups: If these ASICs really do ultimately make you more money than they cost, why are they selling them rather than just using them themselves?
This has been answered in every single comment section of every one of these stories.
If they just use the ASICs themselves, for one, they have to pony up all of the cash to fabricate them. Second, all of their fortunes are then tied to the fortunes of Bitcoin. If the value of Bitcoin tanks, then while they still might be making 10 BTC a week, it's not worth anything. And they still have to pay their fabricators in USD. Selling the ASICs means that they are exposed to a lot less of that risk.
You must login or create an account to comment. | http://arstechnica.com/tech-policy/2014/05/embattled-ceo-of-bitcoin-miner-firm-we-are-as-poor-as-church-mice/?comments=1 | CC-MAIN-2015-48 | refinedweb | 5,297 | 59.94 |
2009.
Servlet 3.0 – About to go in Public Review (next week, until presumably February 2009)
Sometime next week, the public review should start for the Servlet 3.0 specification. At the time of writing, the only spec draft available is the early draft. Martin gave the audience some additional insights from within the JSR committee, clear suggestions of what will be in the public review draft – and what may be added beyond that.
Pluggability
A long running issue with the Servlet specification: all resources – servlets, filters and listeners – have to be configured in a single central web.xml. If you use libraries in your application that come with their own servlet resources, you still need to configure them in that one web.xml file. No pre-configured Servlets can be plugged into your web application.
With Servlet 3.0 that will change. In two ways actually. One is that the Servlet container will hunt down the Classpath and in JAR-files for occurrences of META-LINK/web-fragment.xml. A web-fragment.xml is a file that contains the same kind of configuration details for servlets, listeners, context-parameters and filters that are in web.xml. The servlet container merges together all web-fragment.xml files that it locates and creates one big combined in-memory web.xml from them. So now libraries can come with pre-configured servlet artefacts, if they include a web-fragment.xml in the META-INF directory.
Note: a set of fairly complex though straightforward rules determine the order in which the files are interpreted and what settings take precedence of which other settings.
The second way is described below under Ease of Development.
There is actually a third way: run-time configuration of Servlets and Filters. A ServletContextListener that is invoked during context initialization can add Servlets and Filters to the context. The methods used for this on the ServletContext are addServlet and addFilter.
It seems that it would also be nice in this ServletContextListener to remove (block) the Filters and ServletMappings that are pushed into the web.xml from the various jar files my web application is using. I may not want all that preconfigured stuff. However, I certainly do not want to go round manipulating the jar-files and redesigning the web-fragment.xml files in those jar-files. Nor do I want to add fake filter and servlet mappings to the web.xml to override these jar based configuration.
I have seen no indication that such a programmatic ‘remove’ option is part of the specification. But perhaps it is or will be.
Ease of Development
There are annotations being defined to declare servlets (@WebServlet), filters (@ServletFilter) and Listeners (@WebServletContextListener). These annotations have all the attributes defined to replace configuration in the web.xml or web-fragment.xml. The attributes contain information like url-mapping, init-params and other information that would typically be defined in the deployment descriptor. For example the @WebServlet annotation has attributes Value (a single URL pattern), UrlPatterns (to use instead of Value), Name and to support the new asynchronous behavior: supportsAsynch and asynchTimeout.
This way Servlets, filters etc can be defined entirely using annotations and would be picked up from the WEB-INF/classes or WEB-INF/lib directory.
Note: the classes that are annotated with @WebServlet will still have to extend from HttpServlet. This is a change from the initial draft which also specified method level annotations to allow plain classes to act as Servlet by specifying through annotations which methods acted as doGet and doPost.
One nice thing to know: in Servlet 3.0 the ServletContext will be available on the Request object (getServletContext()).
Asynchronous Request/Response processing
The asynchronous processing of requests serves several purposes. The underlying issue that is addressed is thread-starvation: asynchronous processing minimizes the number of threads necessary for performing the processing required for the Web Application. This is important in at least two major use cases:
- the back end services accessed from the Servlet are slow (perhaps asynchronous themselves) – hanging on to the thread while the back end sits on its response maybe quickly become to costly. The support for asynchronous processing in Servlet 3.0 allows the Servlet to give up the thread and only to resume processing when the results are in
- the client (browser) is not satisfied with just the initial response returned upon reception of the request; it is interested in additional results from the server, that should be sent (pushed) to the client as they originate on the server. This would yield an active web client that without end user intervention keeps getting refreshed (a bit like a Chat client, RSS reader or Email client that all get notified and show alerts when new messages arrive). This latter functionality is known as Server Push or Comet-style web applications. They can be achieved today, but most implementations use long running requests that keep server threads occupied and therefore do not scale well.
Servlet 3.0 supports both use cases.
Simply put it works like this:
- a request is received
- an asynchronuous context is started for it and it is stored somewhere
- asynchronous processing could be started – such as calling a back end service using an Executor; when done, the processing would use the AsynchContext to write the response to the client or forward the request to another party such as a Servlet or JSP (that do not need to be asynchronous) for generating the response
- instead of some asynchrously started processing, the application can wait for events to occur that can lead to retrieving the Asynch Context and write some data to the Response.
At any rate, the initial servlet thread can be relinquished when the AsynchContext is created for the request.
It is up to the client to properly deal with the asynchronous response. Common issues to deal with are the time out of the client request and the irregularly receiving of response bits and pieces; the response may not be complete for a very long time, the server sending pieces of data to the response whenever events occur. Smart JavaScript AJAX processing needs to deal with these responses.
Martin discussed a nice use case – of Pizza Delivery Service:
Through Web Application, the customer orders a pizza. The request is received and dealt with asynchronously. That means an AsynchContext is initialized for the request, this context is stored in a (application scope) Map under the orderId key and an initial response could be written. New orders are dealt with in the same way.
The Cook also uses a Web Application, that shows the orders that should be processed. When the initial cook’s request was received, that too started an AsynchContext that was stored in the application scope. When a new order is received, the Cook’s AsynchContext is retrieved and the new order details are written to the Response. This will lead to an immediate update of the Cook’s Web UI – provided that his client side logic can handle the bits of information sent asynchronously.
With every step of the pizza preparation process – add tomato sauce, add cheese, add salami,… – the Cook could update the order details in the web application. These are sent to the server. The server can retrieve from the Map with customer AsynchContext objects the context for the Order’s Customer and write some details about the cooking process to the Response. This would update the Web Interface for the customer with the latest state of his pizza.
The Delivery department could likewise have started an Asynch Context when first logging in. When the pizza is ready for delivery, the context’s response object’s writer is retrieved and order and delivery details are written to it – asynchronously and near-instanteanously updating the Delivery Web Page.
Note: The customers, cook and delivery department to not hold threads locked. They only consume threads when information is exchanged.
The code required includes:
@WebServlet("/pizza" asyncSupported=true)public class MyServlet extends HttpServlet { public void doGet(HttpServletRequest req, HttpServletResponse res) { ... // if some parameter is NewOrder (as opposed to CookingUpdate, ReadyForDelivery) AsyncContext aCtx = req.startAsync(req, res); // store aCtx in the application scope Map with Order context objects // perhaps write a first response (order successfullt received, now being processed) // get the Cook's Context object cookCtx.getResponse().getWriter().print(.... new order details....); }}
To finish an asynchronous request, we can call the complete() method on the AsynchContext. Another method on the context that we can use instead is the forward() to have the processing completed by another Servlet or JSP (that do not necessarily support asynch).
We can configure Listeners on the AsynchContext() that are called onComplete or onTimeout event.
Beyond the current spec
Martin Marinschek told that the final spec may also describe an API for file upload. Currently there is no standard for Servlet Based File Uploading. Everyone rolls his own or relies on the de facto standard Commons Fileupload.
Not yet part of the spec, but also under discussion is.
JSF 2.0 – Currently in Public Review (until 26th January 2009)
See later blog article.
Resources
An early access implementation of some of the features are available in the GlassFish nightly build.
An introduction to Servlet 3.0 by Deepa Sobhana
Servlet 3.0 From the Source (Rajiv Mordani’s blog)
Servlet 3.0 Pluggability (Rajiv Mordani’s blog)
Asynchronous Support in Servlet 3.0 (Rajiv Mordani’s Blog) | http://technology.amis.nl/2008/12/12/jsf-20-and-servlet-30-specifications-almost-ready-for-take-off/ | CC-MAIN-2014-15 | refinedweb | 1,570 | 54.22 |
Basic Autoloader Usage - Autoloading in Zend Framework
Basic Autoloader Usage
Now that we have an understanding of what autoloading is and the goals and design of Zend Framework's autoloading solution, let's look at how to use Zend_Loader_Autoloader.
In the simplest case, you would simply require the class, and then instantiate it. Since Zend_Loader_Autoloader is a singleton (due to the fact that the SPL autoloader is a single resource), we use getInstance() to retrieve an instance.
- require_once 'Zend/Loader/Autoloader.php';
- Zend_Loader_Autoloader::getInstance();
By default, this will allow loading any classes with the class namespace prefixes of "Zend_" or "ZendX_", as long as they are on your include_path.
What happens if you have other namespace prefixes you wish to use? The best, and simplest, way is to call the registerNamespace() method on the instance. You can pass a single namespace prefix, or an array of them:
Alternately, you can tell Zend_Loader_Autoloader to act as a "fallback" autoloader. This means that it will try to resolve any class regardless of namespace prefix.
- $loader->setFallbackAutoloader(true);
Do not use as a fallback autoloader
While it's tempting to use Zend_Loader_Autoloader as a fallback autoloader, we do not recommend the practice.
Internally, Zend_Loader_Autoloader uses Zend_Loader::loadClass() to load classes. That method uses include() to attempt to load the given class file. include() will return a boolean FALSE if not successful -- but also issues a PHP warning. This latter fact can lead to some issues:
If display_errors is enabled, the warning will be included in output.
Depending on the error_reporting level you have chosen, it could also clutter your logs.
You can suppress the error messages (the Zend_Loader_Autoloader documentation details this), but note that the suppression is only relevant when display_errors is enabled; the error log will always display the messages. For these reasons, we recommend always configuring the namespace prefixes the autoloader should be aware of
Note: Namespace Prefixes vs PHP Namespaces
At the time this is written, PHP 5.3 has been released. With that version, PHP now has official namespace support.
However, Zend Framework predates PHP 5.3, and thus namespaces. Within Zend Framework, when we refer to "namespaces", we are referring to a practice whereby classes are prefixed with a vender "namespace". As an example, all Zend Framework class names are prefixed with "Zend_" -- that is our vendor "namespace".
Zend Framework plans to offer native PHP namespace support to the autoloader in future revisions, and its own library will utilize namespaces starting with version 2.0.0. to executing Zend Framework's internal autoloading mechanism. This approach offers the following benefits:
Each method takes an optional second argument, a class namespace prefix. This can be used to indicate that the given autoloader should only be used when looking up classes with that given class prefix. If the class being resolved does not have that prefix, the autoloader will be skipped -- which can lead to performance improvements.
If you need to manipulate spl_autoload()'s registry, any autoloaders that are callbacks pointing to instance methods can pose issues, as spl_autoload_functions() does not return the exact same callbacks. Zend_Loader_Autoloader has no such limitation.
Autoloaders managed this way may be any valid PHP callback.
- // Append function 'my_autoloader' to the stack,
- // to manage classes with the prefix 'My_':
- $loader->pushAutoloader('my_autoloader', 'My_');
- // Prepend static method Foo_Loader::autoload() to the stack,
- // to manage classes with the prefix 'Foo_': | https://framework.zend.com/manual/1.10/en/learning.autoloading.usage.html | CC-MAIN-2016-50 | refinedweb | 563 | 53.71 |
Here we will write a simple program to print Fibonacci Series. In the Fibonacci Series, the next number is the sum of the previous two numbers.
C# Program to Print Fibonacci Series
In the Fibonacci Series, the next number is the sum of the previous two numbers.
0, 1, 1, 2, 3, 5, 8, 13, 21 etc
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; class Program { static void Main(string[] args) { int iFirst = 0, iSecond = 1, iThird, iCount, iInput; Console.Write("Enter the number of elements: "); iInput = int.Parse(Console.ReadLine()); Console.Write(iFirst + " " + iSecond + " "); //printing 0 and 1 for (iCount = 2; iCount < iInput; ++iCount) //loop starts from 2 because 0 and 1 are already printed { iThird = iFirst + iSecond; Console.Write(iThird + " "); iFirst = iSecond; iSecond = iThird; } Console.ReadKey(); } }
Output:
Enter the number of elements: 6 >0 1 1 2 3 5
View More:
- C# Program to check whether a number is prime or not using Recursion.
- C# Program to find the sum of all item of an Integer Array.
- C# Program to find Largest Element in a Matrix.
- C# Program to Swap two numbers without using third variable.
Conclusion:
I hope you would love this post. Please don’t hesitate to comment for any technical help. Your feedback and suggestions are welcome. | https://debugonweb.com/2018/11/02/c-program-print-fibonacci-series/ | CC-MAIN-2019-09 | refinedweb | 221 | 68.47 |
On Fri, 2004-01-30 at 17:54, Gary V. Vaughan wrote: > This patch kind of fell out of me wanting libtool to do automake-like version > mismatch checking at runtime, and autoconf-like AC_PREREQ version-minima. > > If you guys like this, I'll rewrite the docs, update the test directories and > resubmit. > After applying this patch aclocal fails with: NONE:0: /usr/bin/m4: ERROR: EOF in string Which makes it a bit hard to play with :( > If you don't like it, I will throw my toys out of the pram :-b Alternatively, > you might want to convince me to split out just the version checks with eg. > AC_PROG_LIBTOOL(1.5). > Personally I think that the Automakeish single _INIT_ call is probably the nicest "canonical" way of doing it, especially as it already matches an existing add-on tool: LT_INIT_LIBTOOL([1.6 C C++ disable-shared no-pic]) Though if we want to support LT_PREREQ, and maybe a two-args version of LT_INIT_LIBTOOL with the tags listed as the second, we could do that. I'd personally learn towards "only one way to do it" though. The single command has the advantage of being fairly clear, and at least matches one of the other tools already. If you can figure out what's broken aclocal (I couldn't after a brief stare) then I'll see if I can get the language/tag handling working. I'll probably do it as an _LT_LANG function that takes either a language name or tag, and builds up a list of tags to pass to _LT_TAGS so Auto* can trace that. > I've also added an m4_pattern_forbid which means we don't need to keep using > the lame LT_AC_ prefix to pick up unexpanded macros in configure -- we can > migrate to a proper LT_ namespace! :-) > Hurrah, should we make ridding ourselves of *AC* a goal for 1.6? Scott -- Have you ever, ever felt like this? Had strange things happen? Are you going round the twist?
signature.asc
Description: This is a digitally signed message part | http://lists.gnu.org/archive/html/libtool-patches/2004-02/msg00017.html | CC-MAIN-2014-15 | refinedweb | 345 | 68.3 |
Re: [openstack-dev] [Neutron] Simple proposal for stabilizing new features in-tree
i like your idea, as an operator, it gives me new features while keep my core running fine. only one think i didn't like it why all url,api, etc has to include the word 'preview'? i imagine that i would be consuming the new feature using heat, puppet, local scripts, custom horizon, whatever. Why
Re: [openstack-dev] [all] The future of the integrated release
On 08/22/2014 02:13 PM, Michael Chapman wrote: We try to target puppet module master at upstream OpenStack master, but without CI/CD we fall behind. The missing piece is building packages and creating a local repo before doing the puppet run, which I'm working on slowly as I want a single
Re: [openstack-dev] [nova] Question about USB passthrough
On 02/24/2014 01:10 AM, Liuji (Jeremy) wrote: Hi, Boris and all other guys: I have found a BP about USB device passthrough in. I have also read the latest nova code and make sure it doesn't support USB passthrough by now.
Re: [openstack-dev] [Off Topic] Sunday/Monday in Hong Kong ___
Re: [openstack-dev] [Neutron] Stop logging non-exceptional conditions as ERROR
On 11/27/2013 01:50 PM, Maru Newby wrote: Just a heads up, the console output for neutron gate jobs is about to get a lot noisier. Any log output that contains 'ERROR' is going to be dumped into the console output so that we can identify and eliminate unnecessary error logging. Once we've
Re: [openstack-dev] [all] The future of the integrated release
On 08/06/2014 06:10 PM, Michael Still wrote: We also talked about tweaking the ratio of tech debt runways vs 'feature runways. So, perhaps every second release is focussed on burning down tech debt and stability, whilst the others are focussed on adding features. I would suggest if we do
Re: [openstack-dev] [Tripleo][Neutron] Tripleo Neutron
On 04/07/2014 10:49 AM, Roman Podoliaka wrote: Hi all, Perhaps, we should file a design session for Neutron-specific questions? 1. Define a neutron node (tripleo-image-elements/disk-image-builder) and make sure it deploys and scales ok (tripleo-heat-templates/tuskar). This comes under
Re: [openstack-dev] [neutron] explanations on the current state of config file handling
On 05/02/2014 11:09 AM, Mark McClain wrote: To throw something out, what if moved to using config-dir for optional configs since it would still support plugin scoped configuration files. Neutron Servers/Network Nodes /etc/neutron.d neutron.conf (Common Options) server.d
Re: [openstack-dev] [neutron] explanations on the current state of config file handling
On 05/04/2014 01:22 PM, Mark McClain wrote: On May 4, 2014, at 8:08, Sean Dague s...@dague.net wrote: Question (because I honestly don't know), when would you want more than 1 l3 agent running on the same box? For the legacy case where there are multiple external networks connected to a
Re: [openstack-dev] [Openstack-operators] [openstack-operators]flush expired tokens and moves deleted instance
On 01/28/2015 01:13 AM, Fischer, Matt wrote: Our keystone database is clustered across regions, so we have this job running on node1 in each site on alternating hours. I don’t think you’d want a bunch of cron jobs firing off all at once to cleanup tokens on multiple clustered nodes. That’s
Re: [openstack-dev] [Openstack-operators] [Neutron] Deprecating the use_namespaces option - Now's the time to speak up!
On 2015-03-21 02:57, Assaf Muller wrote: Hello everyone, The use_namespaces option in the L3 and DHCP Neutron agents controls if you can create multiple routers and DHCP networks managed by a single L3/DHCP agent, or if the agent manages only a single resource. Are the setups out there *not*
Re: [openstack-dev] [javascript] Linters
On 2015-06-06 03:26, Michael Krotscheck wrote: Right
[openstack-dev] PGP keysigning party for Mitaka summit in Tokyo?
Hello I see that is empty and no email thread about the topic, will be any more or less formal keysigning party in Tokyo? is it too late? -- 1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333 keybase: | https://www.mail-archive.com/search?l=openstack-dev@lists.openstack.org&q=from:%22gustavo+panizzo+%5C%28gfa%5C%29%22 | CC-MAIN-2019-43 | refinedweb | 716 | 54.15 |
The React Gantt Chart is a project planning and management tool used to display and manage hierarchical tasks with timeline details. It helps assess how long a project should take, determine the resources needed, manage the dependencies between tasks, and plan the order in which the tasks should be completed.
Bind data seamlessly with various local and remote data sources such as JSON, RESTful services, OData services, and WCF services. The React Gantt Chart uses the data manager for binding the data source.
The React Gantt Chart supports different configurable timeline views such as hour, day, week, month, and year.
Create relationships between different tasks in project management. When a project is complex and contains many tasks that depend on the completion of others, task dependencies enable you to decide when a task can be started or finished using finish-to-start, start-to-finish, start-to-start, and finish-to-finish task link types.
The Gantt Chart.
The React Gantt Chart provides built-in support for unscheduled tasks, tasks that are not scheduled with proper dates or duration at the start of the project, but can be scheduled properly at any time during the project implementation based on factors such as resource availability, dependent tasks, and more.
Filtering helps view specific or related records that meet a given filtering criteria. It supports various filter types that include powerful Excel-like filter. The React Gantt Chart filter allows users to define their own custom filtering logic and customize the filtering UI based on their application needs. It also has an option to filter diacritic characters.
Sort tasks and resources by specified criteria. Sort a column in the ascending or descending order by simply clicking the header. A Ctrl + header click performs multi-sorting.
React Gantt Chart allows users to select rows or cells by simply clicking on them. One or more rows or cells can be selected by holding the Ctrl or Shift key, or programmatically.
Columns define the schema of a data source in Gantt Charts. They support formatting, column definitions, column chooser, column menu, column reorder, and other important features.
Visualize the list of tasks assigned to each resource in a hierarchical manner. Multiple tasks assigned to each resource can be visualized in a row when the records are in a collapsed state.
Easily export the Gantt Chart control in various file formats such as Excel, PDF, or CSV. Users can also programmatically customize the exported document.
Enable and disable the Gantt editing operations with the read-only option.
Users can easily plan and schedule tasks in both auto and manual mode to track their projects. Scheduling helps establish a realistic time frame for the completion of a project.
Customize the appearance and style of the taskbars, records, cell and row elements, etc. using templates.
Focus on the tasks not visible but scheduled later along the timeline by selecting their rows in the tree grid section. Auto focus on specific tasks to make the Gantt timeline more informative and comprehensible.
A tooltip is a great way of showing important data only if needed without overflowing the screen with text..
The Gantt Chart provides toolbar support to handle editing, searching, expanding, collapsing, and deleting selected tasks along with an option for adding new tasks. It accepts the collection of built-in toolbar items and custom toolbar items.
Baseline is the project’s original plan. Setting a baseline helps measure the effectiveness and scope of a project. Compare the current task’s progress with the planned dates using baselines.
Display nonworking days using the holidays feature. Highlight the workdays in one color and weekends and holidays in another.
Allocate multiple resources like staff, equipment, and materials to each task.
Row height is a major factor when displaying the number of records in the view port, and it can be customized effortlessly based on the application’s UI requirement. The height of child and parent taskbars can be customized by using the taskbarHeight property.
The context menu improves user action with Gantt Chart using a pop-up menu. It appears when the cell or header is right clicked. In addition to built-in default menu items, it allows you to add custom context menu items.
Change the Gantt Chart size by setting the width and height properties. Horizontal and vertical scrollbars appear when the content overflows the Gantt Chart element. For the Gantt Chart to fill its parent container, simply set the height and width to 100%.
User-friendly touch gestures and an interactive UI design on Gantt Chart help produce the best user experience. All Gantt Chart features work on touch devices with zero configuration.
Responsive feature allows the Gantt Chart layout to be viewed on various devices. It is also possible to hide certain columns for specific screen sizes using column-based media query support.
The React Gantt Chart works well with all modern web browsers such as Chrome, Firefox, Microsoft Edge, Safari, and IE11. It supports IE11 with the help of poly ills.
The React Gantt Chart ensures that every cell is keyboard accessible e. Major features like sorting, selection, and editing can be performed using keyboard commands alone with no mouse interaction required. This helps in creating highly accessible applications using this component.
The React Gantt Chart has complete WAI-ARIA accessibility support. The Gantt Chart UI includes high-contrast visual elements, helping visually impaired people to have the best viewing experience. Also, the valid UI descriptions are easily accessible through assistive technologies such as screen readers.
Gantt Chart enables users from different locales to use it by formatting the date, currency, and numbering to suit their language preferences. It uses the internalization (i18n) library to handle value formatting.
Users can localize all the strings used in the user interface of Angular Gantt Chart control. It uses the localization (l10n) library to localize UI strings.
The Gantt Chart component is also available in JavaScript, Angular, Vue and Blazor frameworks. Check out the different Gantt Chart platforms from the links below,
Four built-in, SASS-based themes are available: Material, Bootstrap, Fabric, and high contrast.
Simplify theme customization either by overriding the existing SASS styling or creating custom themes by using the Theme Studio application.
The React Chart functionalities with ease.
Easily get started with the React Gantt Chart using a few simple lines of TSX code example as demonstrated below. Also explore our React Gantt Chart Example that shows you how to render and configure a Gantt Chart in React.
import * as ReactDOM from 'react-dom'; import * as React from 'react'; import { GanttComponent, Inject, Selection } from '@syncfusion/ej2-react-gantt'; import { projectNewData } from './data'; import { SampleBase } from '../common/sample-base'; export class Default extends SampleBase<{}, {}> { public taskFields: any = { id: 'TaskID', name: 'TaskName', startDate: 'StartDate', endDate: 'EndDate', duration: 'Duration', progress: 'Progress', dependency: 'Predecessor', child: 'subtasks' }; public labelSettings: any = { leftLabel: 'TaskName' }; public projectStartDate: Date = new Date('03/24/2019'); public projectEndDate: Date = new Date('07/06/2019'); render() { return ( <div className='control-pane'> <div className='control-section'> <GanttComponent id='Default' dataSource={projectNewData} taskFields={this.taskFields} labelSettings={this.labelSettings} height='410px' projectStartDate={this.projectStartDate} projectEndDate={this.projectEndDate}> <Inject services={[Selection]} /> </GanttComponent> </div> </div> ) } }
We do not sell the React Gantt Chart separately. It is only available for purchase as part of the Syncfusion React suite, which contains over 70 React components, including the Gantt Chart. Gantt Chart, are not sold individually, only as a single package. However, we have competitively priced the product so it only costs a little bit more than what some other vendors charge for their Gantt Chart. | https://www.syncfusion.com/react-ui-components/react-gantt-chart | CC-MAIN-2021-49 | refinedweb | 1,261 | 55.54 |
i love this
i love this
Directory & File deletions
This program can also delete a file. Very helpful. Many thanks
how to delete a file with given prefix
your erroes
you have done many erros in it..
1..>if the file will writeprotected then it will not delete by this programme..
2..>and ofcurese it will delete only single directory or file if directory content subdirectoris then it will not delete it ..as u k
u need this
import java.io.*;
import java.util.*;
public class DeleteAny
{
public static void main(String args[])throws IOException
{
File f[]=new File[args.length];
for(int i=0; i<f.length; i++)
JAVA
its goods to learn for new people.
how to delete a directory/ folder using java
please any body reply me how to delete a folder using java
java
Hai!
This is Balaji, i am pursuing my MCA and i am presently doing project on java, so i am stuck in a problem, when i select some components like radiobutton or type some text in textfield when i click search button it has to search those selected
How to delete a folder using java
I want to delete a folder which contain PDF, pS and like files also at a same time i will have to run a procedure also
Deleting a folder with sub folders and files
public static boolean deleteDir(File dir) {
if (dir.isDirectory()) {
String[] children = dir.list();
for (int i=0; i<children.length; i++) {
boolean success = deleteDir(new File(dir, children[i]));
delete file
more example of deleting files code,,
how to delete the directory using java
can u help me to know can we delete the directory on machine using java
Showing ResultSet is closed, using .next 2 times
Hello,
I am not able to use ResultSet(ref. rs) two times within the same function. After using while(rs.next()), shows resultset is closed. I am not able to use that rs ref. again and again in the same func. please help me out....
re:how to delete a folder
recursive delte:
in java,only a empty folder canbe deleted,so you must first delte files in folder,then delete the empty folder.
delete file
the method delete() will delete only a directory but not a file for instanse
package ioDemo;
import java.io.File;
public class FileDemo
{
public static void main(String[] arg)
{
File f1 = new File("C:/Documents and Se
I wonder - Java Beginners
I wonder Write two separate Java?s class definition where the first one is a class Health Convertor which has at least four main members:
i. Attribute weight
ii. Attribute height
iii. A method to determine number
Java I/O - Java Beginners
Creating Directory Java I/O Hi, I wanted to know how to create a directory in Java I/O? Hi, Creating directory with the help of Java Program is not that difficult, go through the given link for Java Example Codehttp
java i/o - Java Beginners
java i/o thnx alot sir that this code helped me much in my program... so that i could write it line by line such as-
Hello Java in roseindia
Hello... Java in roseindia Hello Java in roseindia Hello Java in roseindia Hello Java
java i/o - Java Beginners
java i/o Dear sir,
i wrote a program where program asks "Enter your... gets closed.
when i open the program again and enter text in this case previous texts get replaced with new text.
i tried my best but got failed plz tell me
file i/o - Java Beginners
file i/o hiii,
i have to read some integers from a text file and store them in link list..
so please can i have source code for it.??
thanks
File I/O - Java Beginners
File I/O Suppose a text file has 10 lines.How do I retrieve the first word of each line in java?
abc fd ds 10
fg yy6 uf ui
.
.
.
.
.
.
yt oi oiu 25
ewr ytro 9+ po
I want to retrieve 'abc' 'fg' .... 'yt' 'ewr
File I/O - Java Beginners
File I/O How to search for a specific word in a text file in java? Hi Arnab,
Please check the following code.
=====================
import java.io.File;
import java.io.BufferedReader;
import
This is what i need - Java Beginners
This is what i need Implement a standalone procedure to read in a file containing words and white space and produce a compressed version of the file....
for this question i need just :
one function can read string like (I like
Parameter month I Report - Java Beginners
Parameter month I Report hy,
I want to ask about how to make parameter in I Report, parameter is month from date. How to view data from i report... like Java/JSP/Servlet/JSF/Struts etc ...
Thanks | http://roseindia.net/tutorialhelp/allcomments/214 | CC-MAIN-2014-15 | refinedweb | 807 | 71.14 |
Second level cache (ehcache) working?Tom Barry May 16, 2007 1:24 PM
Has anyone been able to get ehcache working in a Seam application? I've enabled ehcache in my persistence unit:
<property name="hibernate.cache.use_second_level_cache" value="true"/> <property name="hibernate.cache.provider_class" value="org.hibernate.cache.EhCacheProvider"/>
and enabled caching for one of my entities:
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE) public class Organization extends PersistentObject { ... }
I created a basic ehcache.xml file as follows:
" /> </ehcache>
I put ehcache.xml into a jar file called ehcache.jar in server/default/lib so ehcache is able to find it. Where is this file really supposed to go? Does it really have to live there because that's where ehcache-1.2.3.jar is?
After loading an entity in my Seam application I update it in underlying database using an external application. I would expect the entity to remain unchanged when viewing in it my Seam application because the object should be cached, but it's obviously being reloaded from the database.
Any ideas?
This content has been marked as final. Show 3 replies
1. Re: Second level cache (ehcache) working?Joseph Nusairat May 16, 2007 2:25 PM (in response to Tom Barry)
How are you loading it btw??
Just doing a get all or is it by a query?
2. Re: Second level cache (ehcache) working?Thomas Barry May 16, 2007 3:04 PM (in response to Tom Barry)
Via a query...
3. Re: Second level cache (ehcache) working?Joseph Nusairat May 16, 2007 3:21 PM (in response to Tom Barry)
If it's via a query where you are not retrieving from the PK then it will not get it from the cache.
It will always attempt to go off of live data.
Cache only helps when you are pulling like a .get(Object ) | https://developer.jboss.org/message/473760 | CC-MAIN-2019-09 | refinedweb | 308 | 68.87 |
Hi, I'm having this error, please could you help me?
It is running in a Windows 10 with a Proxy (do I need to open any por diferent from 80 and 443?).
For example if I try to run:
import eikon as ek #Eikon DATA API
ek.set_app_key("My key")
symbology = ek.get_timeseries(['US4642872422'])
I'm getting:
2020-11-30 17:13:11,275 P[13012] [MainThread 10028] Error code 503 | Server Error: <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "">
<div id="content"> <p>The following error was encountered while trying to retrieve the URL: <a href=""></a></p> <blockquote id="error"> <p><b>Connection to 127.0.0.1 failed.</b></p> </blockquote> <p id="sysmsg">The system returned: <i>(111) Connection refused</i></p> <p>The remote host or network may be down. Please try the request again.</p>
Thanks
Hi,
If your proxy url is set with HTTP_PROXY or HTTPS_PROXY environment variable, the lastest 1.1.8 eikon version should fix your issue.
Could you test and confim ? | https://community.developers.refinitiv.com/questions/69925/notebook-api-error-503.html | CC-MAIN-2021-17 | refinedweb | 174 | 76.72 |
December 8, 2020
Bartek Iwańczuk, Luca Casonato, Ryan Dahl
Today we are releasing Deno 1.6.0. This release contains some major features, and many bug fixes. Here are some highlights:
deno compilecan build your Deno projects into completely standalone executables
If you already have Deno installed you can upgrade to 1# Build from source using cargocargo install deno
deno compile: self-contained, standalone binaries
We aim to provide a useful toolchain of utilities in the Deno CLI. Examples of
this are
deno fmt, and
deno lint. Today we are pleased to add another
developer tool to the Deno toolchain:
deno compile.
deno compile does for Deno what
nexe or
pkg do for Node: create a
standalone, self-contained binary from your JavaScript or TypeScript source
code. This has been the single most upvoted issue on the Deno issue tracker.
It works like this:
$ deno compile --unstable file_server$ ./file_serverHTTP server listening on
As with all new features in Deno,
deno compile requires the
--unstable flag
to communicate that there may be breaking changes to the interface in the short
term. If you have feedback, please comment in the
Deno discord, or create an issue with feature
requests on the Deno issue tracker.
For implementation details, see #8539.
For now there are several limitations you may encounter when using
deno compile. If you have a use case for one of these, please respond in the
corresponding tracking issues.
You might have noticed that unlike other tools that create standalone,
self-contained binaries for JS (like
pkg),
deno compile does not have a
virtual file system that can be used to bundle assets. We are hoping that with
future TC39 proposals like
import assertions, and
asset references, the
need for a virtual file system will disappear, because assets can then be
expressed right in the JS module graph.
Currently the
deno compile subcommand does not support cross platform
compilation. Compilation for a specific platform has to happen on that platform.
If there is demand, we would like to add the ability to cross compile for a
different architecture using a
--target flag when compiling. The tracking
issue for this is #8567.
Due to how the packaging of the binary works currently, a lot of unnecessary code is included the binary. From preliminary tests we have determined that we could reduce the final binary size by around 60% (to around 20MB) when stripping out this unnecessary code. Work on this front is happening at the moment (e.g. in #8640).
Deno 1.6 ships with a new
deno lsp subcommand that provides a language server
implementing
Language Server Protocol.
LSP allows editors to communicate with Deno to provide all sorts of advanced
features like code completion, linting, and on-hover documentation.
The new
deno lsp subcommand is not yet feature-complete, but it implements
many of the main LSP functionalities:
deno fmtintegration
deno lintintegration
The
Deno VSCode extension
does not yet support
deno lsp. It is still more feature rich than the nascent
deno lsp can provide. However, we expect this to change in the coming weeks as
the LSP becomes more mature. For now, if you want to try
deno lsp with VSCode,
you must install
VSCode Deno Canary.
Make sure that you have installed Deno 1.6 before trying this new extension. And
make sure to disable the old version of the extension, otherwise diagnostics
might be duplicated.
To track the progress of the development follow
issue #8643. We will release a
new version of vscode-deno that uses
deno lsp when #8643 is complete.
In Deno 1.4 we introduced some stricter TypeScript type checks in
--unstable
that enabled us to move a bunch of code from JS into Rust (enabling huge
performance increases in TypeScript transpilation, and bundling). In Deno 1.5
these stricter type checks were enabled for everyone by default, with a opt-out
in the form of the
"isolatedModules": false TypeScript compiler option.
In this release this override has been removed. All TypeScript code is now run
with
"isolatedModules": true.
For more details on this, see the Deno 1.5 blog post.
Deno 1.6 ships with the latest stable version of TypeScript.
For more information on new features in Typescript 4.1 see Announcing TypeScript 4.1
For advanced users that would like to test out bug fixes and features before
they land in the next stable Deno release, we now provide a
canary update
channel. Canary releases are made multiple times a day, once per commit on the
master branch of the Deno
repository.
You can identify these releases by the 7 character commit hash at the end of the
version, and the
canary string in the
deno --version output.
Starting with Deno 1.6, you can switch to the canary channel, and download the
latest canary by running
deno upgrade --canary. You can jump to a specific
commit hash using
deno upgrade --canary --version 5eedcb6b8d471e487179ac66d7da9038279884df.
Warning: jumping between canary versions, or downgrading to stable, may
corrupt your
DENO_DIR.
The zip files of the canary releases can be downloaded from.
aarch64-apple-darwin builds are not supported in canary yet.
Users of the new Apple computers with M1 processors will be able to run Deno
natively. We refer to this target by the LLVM target triple
aarch64-apple-darwin in our release zip files.
This target is still considered experimental because it has been built using Rust nightly (we normally use Rust stable), and because we do not yet have automated CI processes to build and test this target. That said, Deno on M1 fully passes the test suite, so we're relatively confident it will be a smooth experience.
Binaries of
rusty_v8 v0.14.0 targeting M1 are also provided
with the same caveats.
std/bytes
As a part of the efforts of the
Standard Library Working Group;
std/bytes module has seen major overhaul. This is a first step towards
stabilizing the Deno standard library.
Most of the APIs were renamed to better align with the APIs available on
Array:
copyBytes->
copy
equal->
equals
findIndex->
indexOf
findLastIndex->
lastIndexOf
hasPrefix->
startsWith
hasSuffix->
endsWith
The full release notes, including bug fixes, can be found at. | https://deno.land/posts/v1.6 | CC-MAIN-2021-04 | refinedweb | 1,031 | 63.49 |
I am new to Java, I am woring on learning inheritance.
I am creating an abstract class called Thing.
I have to put this requirement in, Please could somebody help me with Syntax
Thing will have the following private class field:
• Things, an ArrayList defined with a Thing generic. This ArrayList must be maintained as Thing objects are constructed. This ArrayList must contain all unique objects for which (object instanceof Thing) is true. Each time an object for which (object instanceof Thing) is true is constructed, a reference to that object must be put in the ArrayList, if that object does not already exist in the ArrayList. The Thing equals method will be used to test if the object is not unique, i.e., already exists in the ArrayList. Your code must not useany object casts and must not use any instance of operator.
Java Code:
Code :
import java.util.*; public abstract class Thing { public String iName = new String(); private ArrayList<Thing> iThings = new ArrayList<Thing>(); public static void main(String[] args) { Thing vehicle1 = new Vehicle(); } public Thing(String name) { iName = name; } public String getName() { return iName; } public abstract void show(); public abstract boolean equals(Thing b); public void showThings() { System.out.println("Things"); show(); } }
Thank you,
urpalshu | http://www.javaprogrammingforums.com/%20collections-generics/11478-arraylist-inheritance-printingthethread.html | CC-MAIN-2014-10 | refinedweb | 208 | 63.19 |
Download the archive pp8.zip. It contains all the files for this project.
Consider the following function select(a, k):
def select(a, k): b = sorted(a) return b[k]This function returns the \(k\)th smallest value in the list a. For instance, if k = 0, it returns the smallest element, when k = len(a) - 1, it returns the largest element.
Note that even if the input list a is not sorted, the function still selects the \(k\)th element correctly.
Your task is to write a function quick_select(a, k) that returns the same result as select(a, k), but which does not sort the list. Your code will be faster than sorting on average.
Follow the strategy of Quicksort (the quick_sort function is available in a separate file for your reference). Pick a pivot, and partition the list a into pieces based on the pivot. Look at the length of each piece, and determine where to look for the \(k\)th element recursively. Different from Quicksort, you go into recursion in one piece only.
Use the script timing.py to measure the run time of both select and quick_select.
You can use the unit tests test_quickselect.py to check your implementation.
Submission: Upload your file selection.py to the submission server.
In this project, we implement merge sort for a doubly-linked list.
Your task is to add several methods to the DoublyLinkedList class provided in listsort.py:
>>> a = DoublyLinkedList(1, 2, 3) >>> a.median() <2> >>> a.append(4) >>> a [1, 2, 3, 4] >>> a.median() <2> >>> a.append(5) >>> a [1, 2, 3, 4, 5] >>> a.median() <3>Note is allowed for n to be the front sentinel of this list: in that case this list becomes empty, all list elements become part of the returned object. It is not allowed for n to be the rear sentinel.
Here are some examples:
>>> a = DoublyLinkedList("CS206","is","fun","and","one","learns","a","lot") >>> n = a.first().next.next >>> n <'fun'> >>> b = a.split(n) >>> a [CS206, is, fun] >>> b [and, one, learns, a, lot] >>> c = b.split(b.first()) >>> b [and] >>> c [one, learns, a, lot] >>> d = a.split(a.first().prev) # split on front sentinel >>> a [] >>> d [CS206, is, fun]
This method must run in constant time, and does not create any new nodes (it "steals" the node from the other list and uses it for this list).
For example:
>>> a = DoublyLinkedList(1,3,13,17,27) >>> b = DoublyLinkedList(2,15,16,25) >>> a [1, 3, 13, 17, 27] >>> b [2, 15, 16, 25] >>> a.steal(b) >>> a [1, 3, 13, 17, 27, 2] >>> b [15, 16, 25] >>> b.steal(a) >>> a [3, 13, 17, 27, 2] >>> b [15, 16, 25, 1]
This method must run in \(O(n + m)\) time, where \(n\) is the size of this list and \(m\) is the size of the other list. It must not create new nodes, and instead use the nodes of the other list.
For example:
>>> a = DoublyLinkedList(1,3,13,17,27) >>> b = DoublyLinkedList(2,15,16,25) >>> a [1, 3, 13, 17, 27] >>> b [2, 15, 16, 25] >>> a.merge(b) >>> a [1, 2, 3, 13, 15, 16, 17, 25, 27] >>> b []
When you are done with these three methods, the sort method that is already implemented will work correctly:
def sort(self): # is length <= 1 ? if self.is_empty() or self._front.next.next == self._rear: return other = self.split(self.median()) self.sort() other.sort() self.merge(other)Note how it determines in constant time if the length of the list is at most one—using len(self) instead would have taken linear time.
You can use the unit tests test_listsort.py to test the four methods.
Submission: Upload your file listsort.py to the submission server. | http://otfried.org/courses/cs206/pp8.html | CC-MAIN-2018-05 | refinedweb | 633 | 75.81 |
| Join
Last post 11-22-2006 5:53 PM by bleroy. 4 replies.
Sort Posts:
Oldest to newest
Newest to oldest
Hi,
I haven't seen Method Overloading when I watch the MicrosoftAjax.js it is possible but I don't have an "official method" to do this. What do you think abut my method ? I think it will be interesting to have this feature in Visual Studio "Orcas".
// Création d'un namespace
Type.registerNamespace('Sample');
// Constructeur de la class Personne
Sample.Person = function(firstName, lastName){
this._firstName = firstName;
this._lastName = lastName;
}
// Rajout des différentes méthodes à notre type
Sample.Person.prototype = {
get_firstName : function(){
return this._firstName;
},
set_firstName : function(value){
this._firstName = value;
get_lastName : function(){
return this._lastName;
set_lastName : function(value){
this._lastName = value;
toString : function(){
if (arguments.length == 1 && Object.getType(arguments[0]) == String){
return this.toString$String.apply(this, arguments);
} else if (arguments.length == 2 && Object.getType(arguments[0]) == String && Object.getType(arguments[1]) == Sample.Person){
return this.toString$String$Person.apply(this, arguments);
} else {
return this.toString$.apply(this, arguments);
}
toString$ : function(){
return String.format('je suis {0} {1}', this.get_firstName(), this.get_lastName());
toString$String : function(format){
return String.format(format, this.get_firstName(), this.get_lastName());
toString$String$Person : function(format, person){
return String.format('{0} , {1}', this.toString(format), person.toString(format));
}
// Enregistrement de notre type dans le framework Atlas
Sample.Person.registerClass('Sample.Person');
and we use it like this :
window.pageLoad = function(){
var p = new Sample.Person('Cyril', 'Durand');
alert(p.toString());
alert(p.toString('{0} {1}'));
var p2 = new Sample.Person('Toto', 'Bidule');
alert(p.toString('{0} {1}', p2));
hello.
i'd say that this isn't possible. when you define the 2nd method, it'll just "replace" thes 1st without complaining. so, i think this is just one of those things that we won't have.
My sample works ! I have 4 methods : toString , toString$, toString$String and toString$String$Person no method will be replaced. Wen we call toString('{0}, {1}'); the toString method will be execute it analyze the type of the arguments and call toString$String methods.
My question is what do you think about this method ? In my project I need it and I want to follow the Atlas naming conventions but there is nothing about method overloading :-)
For information I blog about this here : Surcharge de méthode (in French)
hello again.
ah, ok. i didn't read the whole post.
regarding the code, i do like the idea. regarding the name convention, well, since it's your framework i think you're free to choose the names you want.
I do like the idea *if* there is a preprocessor that generates the main method's parameter counting goo from the raw overloads that I declare without any other consideration about JavaScript not supporting it out of the box.
On the other hand, it may confuse users into thinking they doing something they're not, and the debugging experience may look weird. Furthermore, looking at the arguments collection is a *very* expensive operation.
So in the end, I'm not convinced this is worth the additional complexity. From our experience, optional parameters and additional methods are OK as workarounds. It's just JavaScript, we don't have to map every single concept that exists in the managed world.
Advertise |
Ads by BanManPro |
Running IIS7
Trademarks |
Privacy Statement
© 2009 Microsoft Corporation. | http://forums.asp.net/p/1046724/1469982.aspx | crawl-002 | refinedweb | 561 | 53.98 |
On 2014-04-27 21:02:10, Αριστοτέλης Πανάρας wrote: > Well, actually there was a slight problem. > More specifically, the manpages were not installed. It was not a packaging > error, but a Makefile mishap. > It seems that *Sphinx* (ran through the Makefile in ./doc) is used to build > the complete manpages, but the thing is that the following lines in the > root Makefile do not work as they are supposed to: > > install-manpages: build-manpages > $(call colorecho,INSTALL,"man pages") > $(QUIET)mkdir -m 755 -p ${DESTDIR}${MANPREFIX}/man1 > ${DESTDIR}${MANPREFIX}/man5 > ifneq "$(wildcard ./doc/_build/man/${PROJECT}.1)" "" > $(QUIET)install -m 644 ./doc/_build/man/${PROJECT}.1 > ${DESTDIR}${MANPREFIX}/man1 > endif > ifneq "$(wildcard ./doc/_build/man/${PROJECT}rc.5)" "" > $(QUIET)install -m 644 ./doc/_build/man/${PROJECT}rc.5 > ${DESTDIR}${MANPREFIX}/man5 > endif > > That is because *Sphinx*, after building the manpages for *zathura* and > *zathurarc*, it does not leave them under *./doc/_build/man/*, but rather > under *./doc/_build/*. > So removing the */man/* part of the path in these four lines fixes the > problem.
Thanks for noticing. This should be fixed now. Cheers -- Sebastian Ramacher _______________________________________________ zathura mailing list zathura@lists.pwmt.org | https://www.mail-archive.com/zathura@lists.pwmt.org/msg00397.html | CC-MAIN-2018-39 | refinedweb | 193 | 52.15 |
:
from pylab import *
import numarray
image = numarray.ones((30,30))
matshow(image)
show()
just to let you know.
N.
Dear all,
Perhaps this idea appears strange to some, but in my field (atmospheric
turbulence) it is a common problem: I want to plot data with a log-axis (say
the x-axis) with both positive and negative numbers for x. This implies that I
want to zoom in on small values of |x|. The way to do this, is to define a
'gap' around zero in which no data exist, or are ignored. So if my x-data would
range
from -10 to -0.01 and from 0.01 to 10, the x-axis would look like:
|-------|-------|-------|-------|-------|------|
-10 -1 -0.1 +/-0.01 0.1 1 10
There are few (if any) plotting programs that can do this, but it would make
life a lot easier for me. By now I have hacked my own pylab script to do this,
but it has many limitations. To do it properly, it should be done on a somewhat
lower level in the code, I suppose. The idea is to split the data into either 2
(semilogx and semilogy) or 4 quadrants (loglog) and to plot the data in each
quadrant seperately. If the lower limit of the x-axis (or y-axis) is taken
positive, a normal semilogx (or semilogy) plot is recovered.
More people that need/like this? Any volunteers who know what they are doing (in
terms of low-level pylab coding)?
Regards,
Arnold
--
------------------------------------------------------------------------
Arnold F. Moene NEW tel: +31 (0)317 482604
Meteorology and Air Quality Group fax: +31 (0)317 482811
Wageningen University e-mail: Arnold.Moene at wur.nl
Duivendaal 2 url:
6701 AP Wageningen
The Netherlands
------------------------------------------------------------------------
Openoffice.org - Freedom at work
Firefox - The browser you can trust ()
------------------------------------------------------------------------
Hi, I'm new to using matplotlib and am having some problems installing
the software package. Basically my setup is:
* Redhat 9.0
* gcc 3.3
* Python 2.4 (installed in a non-standard location)
* freetype 2.1.9 and numarray 1.2.2 also installed in a non-standard
location.
Following the instructions on the matplotlib homepage, I:
1. Substitute:
'linux2' : [
with:
'linux2' : [os.environ['ACSROOT'], os.environ['PYTHON_ROOT']
within matplotlib-0.72.1/setupext.py. The ACSROOT environment variable
is where numarray is installed and the numarray headers can be found in
$ACSROOT/include/numarray/.
2. Set BUILD_AGG=1
3. Run the command 'python setup.py build' which gives output that looks
correct until gcc is executed:
running build_ext
building 'matplotlib._na_transforms' extension
creating build/temp.linux-i686-2.4
creating build/temp.linux-i686-2.4/src
creating build/temp.linux-i686-2.4/CXX
gcc -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes
-pipe -D_POSIX_THREADS -D_POSIX_THREAD_SAFE_FUNCTIONS -D_REENTRANT
-DACE_HAS_AIO_CALLS -fcheck-new -Wall -fPIC -g -DDEBUG -O -DCCS_LIGHT
-fPIC -Isrc -I. -I/alma/ACS-4.0/Python/include/python2.4 -c
src/_na_transforms.cpp -o build/temp.linux-i686-2.4/src/_na_transforms.o
-DNUMARRAY=1
In file included from /alma/ACS-4.0/Python/include/python2.4/Python.h:8,
from CXX/Objects.hxx:9,
from CXX/Extensions.hxx:18,
from src/_transforms.h:12,
from src/_na_transforms.cpp:2:
/alma/ACS-4.0/Python/include/python2.4/pyconfig.h:835:1: warning:
"_POSIX_C_SOURCE" redefined
In file included from
/alma/ACS-4.0/gnu/include/c++/3.3/i386-redhat-linux/bits/os_defines.h:39,
from
/alma/ACS-4.0/gnu/include/c++/3.3/i386-redhat-linux/bits/c++config.h:35,
from /alma/ACS-4.0/gnu/include/c++/3.3/functional:53,
from src/_na_transforms.cpp:1:
/usr/include/features.h:131:1: warning: this is the location of the
previous definition
src/_na_transforms.cpp:6:35: numarray/arrayobject.h: No such file or
directory
...
Just looking at the output it appears as if the changes made to basedir
(i.e., 'linux2') in setupext.py are not having any sort of effect (hence
the error message about numarray/arrayobject.h not existing). Is there
something blatantly wrong I'm doing? Any help would be greatly appreciated.
Thanks in advance,
David Fugate
>>>>> "David" == David Fugate <dfugate@...> writes:
David> Just looking at the output it appears as if the changes
David> made to basedir (i.e., 'linux2') in setupext.py are not
David> having any sort of effect (hence the error message about
David> numarray/arrayobject.h not existing). Is there something
David> blatantly wrong I'm doing? Any help would be greatly
David> appreciated.
John Hunter wrote:
>
Yes, this fixed it! Thank you.
David
BTW
A new identical problem appeared which was easily fixed by doing the
same thing to the build_contour function.
--
There are 10 types of people in the world. Those that understand binary
and those that don't.
>>>>> "David" == David Fugate <dfugate@...> writes:
John> add_base_flags(module) before the call to
John> ext_modules.append(module) in both the Numeric and numarray
John> sections. Ditto for the build_contour function in
John> setupext.py.
David> BTW A new identical problem appeared which was easily fixed
David> by doing the same thing to the build_contour function.
Which is why I said "Ditto for the build_contour function in
setupext.py" <wink>
Glad it helped -- thanks for the report and persevering!
JDH
Hi everyone
Does anyone know how to asign different patches in
legend when plotting 2 or more histograms in the same
figure.
I've tried:
legend((patches1,patches2),(hist1,hist2)) ,
but this gives the same patches in the legend inset.
Kristen
__________________________________
Do you Yahoo!?
Take Yahoo! Mail with you! Get it on your mobile phone.
>>>>> "kristen" == kristen kaasbjerg <cosbys@...> writes:
kristen> Hi everyone Does anyone know how to asign different
kristen> patches in legend when plotting 2 or more histograms in
kristen> the same figure. I've tried:
kristen> legend((patches1,patches2),(hist1,hist2)) ,
kristen> but this gives the same patches in the legend inset.
I'm assuming patches1 and patches2 are the return values from hist, in
which case they are each a *list* of patches. What you want to do is
pass a *single* item from each of those lists as representative
patches.
legend( (patches1[0],patches2[0]), ('label1', 'label2') )
Next time if you post a more complete code snippet, I won't have to
guess what patches1 and patches2 are!
Hope this helps,
JDH
Hi again
Working with legend I've encountered another problem.
Changing the fontsize in a legend seems to be a little
harder than first assumed. Is there an easy way to do
this??
Kristen
__________________________________
Celebrate Yahoo!'s 10th Birthday!
Yahoo! Netrospective: 100 Moments of the Web
kristen kaasbjerg wrote:
> Hi again
> Working with legend I've encountered another problem.
> Changing the fontsize in a legend seems to be a little
> harder than first assumed. Is there an easy way to do
> this??
From the mailing list a couple of days ago...
You need to pass in a FontProperties instance that specifies the size you want:
prop = FontProperties(size="x-small')
size - Either an absolute value of xx-small, x-small, small,
medium, large, x-large, xx-large; or a relative value
of smaller or larger; or an absolute font size, e.g. 12;
or scalable
i.e. lgnd = ax.legend((lines, labels, prop = FontProperties(size="x-small'),
..other_params_as_required)
Robert
PS This looks like something to add to my 'Getting Started' document....
>>>>> "kristen" == kristen kaasbjerg <cosbys@...> writes:
kristen> Hi again Working with legend I've encountered another
kristen> problem. Changing the fontsize in a legend seems to be a
kristen> little harder than first assumed. Is there an easy way to
kristen> do this?? shows you how to
customize the legend text font size. The examples directory is really
an indispensable tool in learning matplotlib. If you are using the
source distribution, the examples directory is included. If you are
using a binary distribution, a zip file is found here
The relevant code fragment from legend_demo.py is
ltext = leg.get_texts() # all the text.Text instance in the legend
set(ltext, fontsize='small') # the legend text fontsize
JDH
John Hunter wrote:
>
> Repeatable is good. Standalone much better. So you are running the
> pure Agg backend (no GUI?). It would help to post the output of
>
> c:> python myscript.py --verbose-helpful
Correct, no gui, verbose output:
matplotlib data path C:\Python24\share\matplotlib
loaded rc file C:\Python24\share\matplotlib\.matplotlibrc
matplotlib version 0.72.1
verbose.level helpful
interactive is False
platform is win32
numerix numarray 1.2.2
font search path ['C:\\Python24\\share\\matplotlib']
loaded ttfcache file c:\home\robert\.ttffont.cache
backend TkAgg version 8.4
> It probably won't happen with 0.71 and this would be worth testing.
Everything works without error on 0.71, using either numarray or Numeric.
> But if you can
> get a standalone script, that would be most efficient.
Working on it now.
Robert
>>>>> "Robert" == Robert Leftwich <robert@...> writes:
Robert> John Hunter wrote:
>> Repeatable is good. Standalone much better. So you are
>> running the pure Agg backend (no GUI?). It would help to post
>> the output of c:> python myscript.py --verbose-helpful
Robert> Correct, no gui, verbose output:
You say no gui, but the verbose report says tkagg:
backend TkAgg version 8.4
Do you see the problem when using agg alone?
>> But if you can get a standalone script, that would be most
>> efficient.
Robert> Working on it now.
Looking forward to it :-)
JDH
John Hunter wrote:
>>>>>>"Robert" == Robert Leftwich <robert@...> writes:
>
> Robert> Correct, no gui, verbose output:
>
> You say no gui, but the verbose report says tkagg:
>
> backend TkAgg version 8.4
My bad, forgot to change the rc when re-installing 0.72.
>
> Do you see the problem when using agg alone?
Yes, in fact it seems to be worse.
>
> >> But if you can get a standalone script, that would be most
> >> efficient.
>
> Robert> Working on it now.
>
> Looking forward to it :-)
I sent it direct to you, rather than everyone on the list.
Robert
>>>>> "Robert" == Robert Leftwich <robert@...> writes:
John> But if you can get a standalone script, that would be most
John> efficient.
Robert> I sent it direct to you, rather than everyone on the list.
OK, the good news is that my first hunch was correct. The fact that
the minimal script (off-list) needed more iterations than the full,
and that the number of iterations on my system before the crash was
different than yours indicated to me that it was a memory problem.
matplotlib uses a fair number of cyclic references (figure refers to
canvas and canvas refers to figure -- this one was actually added in
0.72 which is where I suspect the culprit is).
In the pylab interface, I call gc.collect in the destroy method of
each figure manager to force the garbage collector to clean up. In
your script, which doesn't use the pylab interface, you need to
manually add this call yourself. I'll make it a FAQ because it is
fairly important, and add a link or the faq in the agg_oo example.
import gc
def graphAll(self):
for i in range(100):
print i
self.graphRSIDailyBenchmark()
gc.collect()
Should cure what ails you!
python gurus -- would it be advisable for me to insert a gc.collect()
somewhere in the matplotlib OO hierarchy since I know cyclic
references are a problem? Eg in the call to the FigureCanvas init
function?
JDH
John Hunter wrote:
>
> gc.collect()
>
> Should cure what ails you!
>
The good news is that it is a huge improvement, but the bad news is that I'm
still getting a GPF, just a lot less often :-( Try bumping the minimal test loop
up to 5k, it failed at 3057 for me.
Robert
>>>>> "Robert" == Robert Leftwich <robert@...> writes:
Robert> John Hunter wrote:
>> gc.collect() Should cure what ails you!
>>. Maybe you should use
the full test script. At lease you'll fail faster :-)
matplotlib does some caching in various places for efficiency which
will slowly eat memory. We need to add some automated means to clear
this cache but we don't have it yet.
What happens if you comment out this line in text.py
self.cached[key] = ret
and this line in backend_agg.py
_fontd[key] = font. See also. Todd Miller knows a
clever way of getting python to report how may object references it
has a hold of, but I can't remember the magic command right now.
With matplotlib CVS on linux, that script is reporting no detectable
leak. But your script may be exposing a leak not covered by that
one.
JDH
John Hunter wrote:
>>>>>>"Robert" == Robert Leftwich <robert@...> writes:
>
>
>.
Not on the new laptop!
> Maybe you should use
> the full test script. At lease you'll fail faster :-)
Yep, using the real data, it fails pretty quickly.
>
>
> What happens if you comment out this line in text.py
>
> self.cached[key] = ret
>
> and this line in backend_agg.py
>
> _fontd[key] = font
It's worse, the minimal test fails at 4!, but the real data takes 180 or so.
>
>.
I'll attempt this when time permits, possibly late today, but sometime later
this week is more likely.
Robert
I took up John's suggestion to 'new users starting on the path to matplotlib OO
API enlightenment to make notes and write a tutorial as you go'.
I'm posting it to the list to generate some feedback and hopefully help out any
fellow newbies (although I would prefer it to be on a Wiki somewhere).
Feel free to comment....
Robert
==================================================
Getting Started With Matplotlib's OO Class Library
==================================================
Introduction
------------
For those people coming to Matplotlib without any prior experience
of MatLab and who are familiar with the basic concepts of
programming API's and classes, learning to use Matplotlib via the
class library is an excellent choice. The library is well put
together and works very intuitively once a few fundamentals are
grasped.
The advice from John Hunter, the original developer of the library
is 'don't be afraid to open up matplotlib/pylab.py to see how the
pylab interface forwards its calls to the OO layer.' That in
combination with the user's guide, the examples/embedding demos,
and the mailing lists, which are regularly read by many developers
are an excellent way to learn the class library.
Following is a brief introduction to using the class library,
developed as I came to grips with how to produce my first graphs.
Classes/Terms
-------------
FigureCanvas - is primarily a container class to hold the Figure
instance, an approach which enforces a rigid segregation between
the plot elements, and the drawing of those elements. It can be
loosely thought of as 'the place where the ink goes'.
Figure - a container for one or more Axes. It is possible to
create and manage an arbitrary number of figures using the Figure
class. Note also that a figure can have its own line, text and
patch elements, independent of any axes.
Axes - the rectangular area which holds the basic elements (Line2D,
Rectangle, Text, etc) that are generated by the Axes plotting
commands (e.g. the Axes plot, scatter, bar, hist methods). The Figure
methods add_axes and add_subplot are used to create and add an Axes
instance to the Figure. You should not instantiate an Axes instance
yourself, but rather use one of these helper methods.
Line - implemented in the Line2D class, can draw lines(!) with a
variety of styles (solid, dashed, dotted, etc), markers (location
indicators on the line - point, circle, triangle, diamond, etc) and
colours (k or #000000 - black, w or #FFFFFF - white, b or #000099 -
blue, etc)
Text - a class that handles storing and drawing of text in window
or data coordinates. The text can be coloured, rotated, aligned in
various ways relative to the origin point, and have font properties
(weight, style, etc) assigned to it.
Patch - a patch is a two dimensional shape with a separately
specifiable face and edge colour. Specialised patch classes include
circle, rectangle, regular polygon and more.
Ticks - the indicators of where on an axis a particular value
lies. Separate classes exist for the x and y axis ticks, (XTick,
YTick) and each are comprised of the primitive Line2D and Text
instances that make up the tick.
Artist - Everything that draws into a canvas derives from Artist
(Figure, Axes, Axis, Line2D, Rectangle, Text, and more). Some of
these are primitives (Line2D, Rectangle, Text, Circle, etc) in that
they do not contain any other Artists, some are simple composites,
e.g. XTick which is mad up of a couple of Line2D and Text instances
(upper and lower tick lines and labels), and some are complex, e.g.
and Axes or a Figure, which contain both composite and primitive
artists.
Techniques
----------
1. Setting up an agg backend canvas:
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
from matplotlib.figure import Figure
fig = Figure()
canvas = FigureCanvas(fig)
2. To set the size of the Figure, use the figsize keyword, which uses
inches as the units:
# this results in a 204x192 pixel png - if output at 100 dpi, using
# canvas.print_figure(.., dpi=100)
fig = Figure(figsize=(2.04,1.92))
canvas = FigureCanvas(fig)
3. To add a single subplot:
# The 111 specifies 1 row, 1 column on subplot #1
ax = fig.add_subplot(111)
4. To change the axes size and location on construction, e.g to fit
the labels in on a small graph:
ax = fig.add_axes([0.2,0.2,0.5,0.7])
An existing Axes position/location can be changed by calling
the set_position() method.
5. Adding a graph of some sort is as simple as calling the required
function on the axes instance:
p1 = ax.bar(...) or p1 = ax.plot(...)
6. Setting a label with extra small text:
ax.set_xlabel('Yrs', size='x-small')
7. Setting the graph title:
ax.set_title(A graph title', size='x-small')
8. To enable only the horizontal grid on the major ticks:
ax.grid(False)
ax.yaxis.grid(True, which='major')
9. To only have a left y-axis and a bottom x-axis:
# set the edgecolor and facecolor of the axes rectangle to be the same
frame = ax.axesPatch
frame.set_edgecolor(frame.get_facecolor())
# Specify a line in axes coords to represent the left and bottom axes.
bottom = Line2D([0, 1], [0, 0], transform=ax.transAxes)
left = Line2D([0, 0], [0, 1], transform=ax.transAxes)
ax.add_line(bottom)
ax.add_line(left)
10. To change the size of the tick labels :
labels = ax.get_xticklabels() + ax.get_yticklabels()
for label in labels:
label.set_size('x-small')
11. Removing the black rectangle around an individual bar graph
rectangle (by changing it to the bar colour) :
c = '#7FBFFF'
p1 = ax.bar(ind, values, width, color=c)
for r in p1:
r.set_edgecolor(c)
Putting it together
-------------------
Following is a simple example of how to use the class library
This is examples/agg_oo.py in the matplotlib src distribution, also
found (like all examples) at
#!/usr/bin/env python
"""
A pure OO (look Ma, no pylab!) example using the agg backend
"""
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
from matplotlib.figure import Figure
fig = Figure()
canvas = FigureCanvas(fig)
ax = fig.add_subplot(111)
ax.plot([1,2,3])
ax.set_title('hi mom')
ax.grid(True)
ax.set_xlabel('time')
ax.set_ylabel('volts')
canvas.print_figure('test')
>>>>> "Axel" == Axel Kowald <A.Kowald@...> writes:
Axel> Hi, I'm using matplotlib 0.71 and I think I found a bug in
Axel> polyfit.
Axel> This simple linear regression on two data points gives the
Axel> correct answer:
>>>> polyfit([731.924,731.988],[915,742],1)
^^^^
floats
Axel> However, if I multiply my x values by 1000 the result is
Axel> wrong:
>>>> polyfit([731924,731988],[915,742],1)
^^^^
integers
Both of these should work
print polyfit([731.924,731.988],[915.,742.],1)
print polyfit([731924.,731988.],[915.,742.],1)
I fixed the polyfit code to explicitly convert the input arrays to
floats arrays, which fixes this bug.
Thanks for the report.
JDH
>>>>>
>>>>> "Humufr" == Humufr <humufr@...> writes:
Humufr> Hi, I see a problem when I'm using autoscale. I
Humufr> have a spectra with huge difference in y. I used xlim to
Humufr> look only a part of my spectra and the ylim is not
Humufr> autoscale to this peculiar part of the spectra but on all
Humufr> the spectra.
Humufr> I'm using the last CVS version.
This was done intentionally for performance reasons. Every plot
command calls autoscale, and each time this happens the autoscaler
would have to iterate over all the data in the axes (text, polygons,
lines, etc) and determine the vertices in the view limits. Certainly
doable, but I try to make mpl reasonably efficient for large data sets
and this could get expensive. Instead, when a piece of data is added
to the axes initially, I update the datalim with it and use that in
the autoscaler.
I'm aware of the problem you describe -- autoscaling can be suboptimal
when you initially set an axis view to only a part of the axes. This
is a problem I can live with -- autoscaling doesn't have to be
perfect, it just needs to get it right most of the time. And when it
doesn't do the most sensible thing, eg in this case, it at least
includes your data in the plot, and it is fairly easy for you to use
the navigation controls -- eg constrained y zoom by pressing y and
dragging the right mouse -- or otherwise set the limits manually.
So this is an area where I'd rather trade convenience for performance.
It wouldn't be to much work though, to add an option to the the
autoscaler to fix one of the axes limits (eg the xlim) and autoscale
the other axis only considering data within that range....
JDH
>>>>> "Humufr" == Humufr <humufr@...> writes:
Humufr> Hi, I found something strange inside the eps file create
Humufr> with matplotlib. I used matplotlib to trace a port of a
Humufr> spectra (I used the function plot and axis). I have been
Humufr> very surprise to see that all the spectra was inside the
Humufr> eps file. To see it, I must admit that I did something
Humufr> weird. I create an eps file with matplotlib and I
Humufr> transform the file in svg format with pstoedit and I edit
Humufr> this file with inkscape.
Humufr> I don't know where is the problem but I don't think that
Humufr> it's necessary to have all the point inside the output
Humufr> file, perhaps it's not possible to do anything to change
Humufr> it but that can create some huge file. So if nothing can
Humufr> be done, that will be a good idea to put it in the FAQ to
Humufr> let the users cut their data if needed.
This is intentional, but I can see the problems it could create for a
large PS file, so it may be worth mentioning this in the
documentation. Basically, we leave it or the backend to do the
clipping to the view limits, and in postscript the total line path is
drawn and the clip rectangle is set. It would be difficult and
inefficient for us to do the clipping in the front end. Think about
pathological cases, for example, where the x,y center of a marker is
far outside the view limits, but the marker is very large so that some
of its vertices are inside the view limits.
Earlier versions of matplotlib had a data clipping feature where line
points outside the view box were removed before adding them to the
backend. But noone ever used it and it added complexity to the code
so I eventually deprecated it.
JDH
>>>>> "Stephen" == Stephen Walton <stephen.walton@...> writes:
Stephen> Hi, all, Well, I've hit a new problem with the log
Stephen> plotting issue. Try the following commands after
Stephen> 'ipython -pylab':
Stephen> x=arange(25)+1 semilogx(x,x**2) hold(False)
Stephen> semilogx(x,x**2)
Stephen> I get an apparently unbreakable chain of "Cannot take log
Stephen> of nonnegative value" messages for every following
Stephen> plot(), semilog(), or loglog() command until ipython is
Stephen> exited. None of close(1), clf(), or cla() helps clear
Stephen> the problem. Only creating a new figure with figure(2)
Stephen> and plotting to it seems to help.
rm -rf your build subdir and reinstall matplotlib 0.72.1 or CVS. The
error string you report doesn't exist in the current code base, and I
can't reproduce your error.
JDH
John Hunter wrote:
>rm -rf your build subdir and reinstall matplotlib 0.72.1 or CVS.
>
Ouch. Sorry for the noise. I slipped up when I ran Fernando's
pybrpm-noarch script on matplotlib 0.72 and wound up with a reinstall of
an old 0.70 RPM. | http://sourceforge.net/mailarchive/forum.php?forum_name=matplotlib-users&max_rows=25&style=nested&viewmonth=200502 | CC-MAIN-2013-48 | refinedweb | 4,141 | 67.04 |
* A <code>SetQueueThreshold</code> instance requests to set a given28 * threshold value as the threshold for a given queue.29 */30 public class SetQueueThreshold extends AdminRequest {31 private static final long serialVersionUID = -8457079858157139094L;32 33 /** Identifier of the queue the threshold is set for. */34 private String queueId;35 /** Threshold value. */36 private int threshold;37 38 /**39 * Constructs a <code>SetQueueThreshold</code> instance.40 *41 * @param queueId Identifier of the queue the threshold is set for. 42 * @param threshold Threshold value.43 */44 public SetQueueThreshold(String queueId, int threshold) {45 this.queueId = queueId;46 this.threshold = threshold;47 }48 49 50 /** Returns the identifier of the queue the threshold is set for. */51 public String getQueueId() {52 return queueId;53 }54 55 /** Returns the threshold value. */56 public int getThreshold() {57 return threshold;58 }59 }60
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/objectweb/joram/shared/admin/SetQueueThreshold.java.htm | CC-MAIN-2018-05 | refinedweb | 152 | 68.47 |
compute unsigned (i.e., non-negative) distances from an input point cloud More...
#include <vtkUnsignedDistance.h>
compute unsigned (i.e., non-negative) distances from an input point cloud
vtkUnsignedDistance is a filter that computes non-negative (i.e., unsigned) distances over a volume from an input point cloud. This filter is distinct from vtkSignedDistance in that it does not require point normals. However, isocontouring a zero-valued distance function (e.g., trying to fit a surface will produce unsatisfactory results). Rather this filter, when combined with an isocontouring filter such as vtkFlyingEdges3D, can produce an offset, bounding surface surrounding the input point cloud.
To use this filter, specify the input vtkPolyData (which represents the point cloud); define the sampling volume; specify a radius (which limits the radius of influence of each point); and set an optional point locator (to accelerate proximity operations, a vtkStaticPointLocator is used by default). Note that large radius values may have significant impact on performance. The volume is defined by specifying dimensions in the x-y-z directions, as well as a domain bounds. By default the model bounds are defined from the input points, but the user can also manually specify them. Finally, because the radius data member limits the influence of the distance calculation, some voxels may receive no contribution. These voxel values are set to the CapValue.
This filter has one other unusual capability: it is possible to append data in a sequence of operations to generate a single output. This is useful when you have multiple point clouds (e.g., possibly from multiple acqusition scans) and want to incrementally accumulate all the data. However, the user must be careful to either specify the Bounds or order the input such that the bounds of the first input completely contains all other input data. This is because the geometry and topology of the output sampling volume cannot be changed after the initial Append operation.
Definition at line 75 of file vtkUnsignedDistance.h.
Definition at line 84 of file vtkUnsignedDistance.h.
Standard methods for instantiating the class, providing type information, and printing. the i-j-k dimensions on which to computer the distance function.
Set / get the region in space in which to perform the sampling.
If not specified, it will be computed automatically.
Control how the model bounds are computed.
If the ivar AdjustBounds is set, then the bounds specified (or computed automatically) is modified by the fraction given by AdjustDistance. This means that the model bounds is expanded in each of the x-y-z directions.
Specify the amount to grow the model bounds (if the ivar AdjustBounds is set).
The value is a fraction of the maximum length of the sides of the box specified by the model bounds.
Set / get the radius of influence of each point.
Smaller values generally improve performance markedly.
Specify a point locator.
By default a vtkStaticPointLocator is used. The locator performs efficient searches to locate points surrounding a voxel (within the specified radius).
The outer boundary of the volume can be assigned a particular value after distances are computed.
This can be used to close or "cap" all surfaces during isocontouring.
Specify the capping value to use.
The CapValue is also used as an initial distance value at each point in the dataset. By default, the CapValue is VTK_FLOAT_MAX;
Set the desired output scalar type.
Currently only real types are supported. By default, VTK_FLOAT scalars are created.
Definition at line 175 of file vtkUnsignedDistance.h.
Definition at line 176 of file vtkUnsignedDistance.h.
Initialize the filter for appending data.
You must invoke the StartAppend() method before doing successive Appends(). It's also a good idea to manually specify the model bounds; otherwise the input bounds for the data will be used.
Append a data set to the existing output.
To use this function, you'll have to invoke the StartAppend() method before doing successive appends. It's also a good idea to specify the model bounds; otherwise the input model bounds is used. When you've finished appending, use the EndAppend() method.
Method completes the append process.
Process a request from the executive.
For vtkImageAlgorithm, the request will be delegated to one of the following methods: RequestData, RequestInformation, or RequestUpdateExtent.
Reimplemented from vtkImageAlgorithm.
Subclasses can reimplement this method to collect information from their inputs and set information for their outputs..
Fill the input port information objects for this algorithm.
This is invoked by the first call to GetInputPortInformation for each port so subclasses can specify what they can handle.
Reimplemented from vtkImageAlgorithm.
Definition at line 209 of file vtkUnsignedDistance.h.
Definition at line 210 of file vtkUnsignedDistance.h.
Definition at line 211 of file vtkUnsignedDistance.h.
Definition at line 212 of file vtkUnsignedDistance.h.
Definition at line 213 of file vtkUnsignedDistance.h.
Definition at line 214 of file vtkUnsignedDistance.h.
Definition at line 215 of file vtkUnsignedDistance.h.
Definition at line 216 of file vtkUnsignedDistance.h.
Definition at line 217 of file vtkUnsignedDistance.h.
Definition at line 220 of file vtkUnsignedDistance.h. | https://vtk.org/doc/nightly/html/classvtkUnsignedDistance.html | CC-MAIN-2020-50 | refinedweb | 836 | 51.34 |
Hello and thank you for using Just Answer. This action can take several forms. The action can be a gift to you because your father deposit the $15,000 into the checking account this year. There is a $12,000 exclusion per person per year so you father will file a gift tax return (Form 709) for the balance of $3,000. If the $15,000 was left in the account for over one year the amount could be divided over years and no gift tax would be due. You can treat the $15,000 as a gift from both parents and no git tax return would be due.
An individual doesn't make a gift when he opens a joint bank account with his own funds for himself and another under the terms of which he can regain the entire fund without the other's consent. A gift is made only when the other person withdraws money for his own benefit (Reg 25.2511-1(h)(4)).
Gifts from a nonresident alien or foreign estate, reporting is required only if the aggregate amount of gifts from that person exceeds $100,000 during the tax year.
You would not have to report the $75,000 if you transfers the amount to your banking account. | http://www.justanswer.com/tax/18gia-hypothetically-speaking-let-s-say-father.html | CC-MAIN-2015-06 | refinedweb | 214 | 76.45 |
Leveraging trimming to make the Microsoft Store faster and reduce its binary size
In our previous blog post about the new Microsoft Store, we talked about how it’s a native C#/XAML Windows application running on the UWP framework, and how it leverages .NET Native to be distributed as a 100% AOT binary with great startup performance. We also discussed how we’ve been working to refactor the codebase to make it faster, easier to maintain, and to work almost entirely in C# running on the .NET runtime.
As we recently announced at Microsoft Build in our session on Windows Application Performance, lately we’ve also worked to enable trimming in the Microsoft Store, allowing us to both improve startup times and significantly reduce binary size. In this blog post, we wanted to elaborate and introduce trimming to developers that might not be familiar with it already. We’ll explain how it works, the benefits it brings, and the tradeoffs that must be done in order to leverage it.
We’ll also use examples from our changes to the Store to explain common scenarios that might require extra work in order to make your codebase trimmable. And lastly, we’ll show you how to stay ahead by spotting code that might not be trimming friendly and architecting new code to make transitioning to trimming as smooth as possible.
So, here’s a deep dive on trimming in the .NET world! 🍿
Trimming in a nutshell
Trimming is a process that lets the compiler remove all unreferenced code and unnecessary metadata when compiling an application. In the context of the Microsoft Store, which ships as a 100% native binary produced by the .NET Native toolchain, trimming can affect both generated code as well as reflection metadata: both can be independently stripped if not needed.
Specifically, trimming is performed in the dependency reduction step of the toolchain. Here, the compiler will combine all the application code that is directly referenced, along with additional information from XAML files and from some special-cased reflection APIs, and use this information to determine all code that could ever be executed.
Everything else is just thrown away! 🧹
Why should you be interested in this? The main benefit it brings is that removing all of this unnecessary code can greatly reduce the binary size of your application, which brings down hosting costs and also makes downloads faster. It can also improve performance in some cases, especially during startup, due to the runtime having less metadata to load and keep around while executing.
Recent versions of .NET support this as well, as we’ll see later on, and they do so both through IL trimming and binary trimming (in ReadyToRun and NativeAOT builds). So almost everything in this blog post (the risks of trimming, making code suitable for being statically analyzed, etc.) applies to nearly all trimming technologies in .NET, which is why we wanted to share this with the community 🙌
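As a concrete point of reference for readers on modern .NET (outside of UWP and .NET Native), trimming there is opted into from the project file when publishing. A minimal sketch for a .NET 6+ app, using the standard SDK property names:

```xml
<!-- In the .csproj: opt into trimming for published builds -->
<PropertyGroup>
  <PublishTrimmed>true</PublishTrimmed>
  <!-- Also surface build-time warnings for trim-unsafe code -->
  <EnableTrimAnalyzer>true</EnableTrimAnalyzer>
</PropertyGroup>
```

Running dotnet publish -c Release then produces the trimmed output; Debug builds and plain dotnet run are unaffected, which ties into the Debug/Release behavior differences discussed later in this post.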
How to enable .NET Native trimming
The way trimming (among other things) is controlled on .NET Native is through a runtime directive file (the generated
Default.rd.xml file in a blank UWP app). In this case, this file will contain hints to instruct the compiler about how trimming should be performed. That is, directives here will force it to preserve additional members on top of those already statically determined to be necessary by the dependency reduction step.
This also means that this file can disable trimming completely, if the hint given to the compiler simply says “just keep everything”. In fact, this is what all UWP applications do by default: they will preserve all metadata for the entire application package. The default runtime directive file looks something like this:
<!-- Directives in Default.rd.xml -->
<Directives xmlns="">
    <Application>
        <Assembly Name="*Application*" Dynamic="Required All" />
    </Application>
</Directives>
That
*Application* name is a special identifier that tells the compiler to just disable trimming entirely and preserve all code in the whole application package. This makes sense as a default setting, for compatibility reasons, and is conceptually similar to how modern .NET applications work: trimming is disabled by default (even when using AOT features such as ReadyToRun), to ensure less friction and consistent behavior especially for developers migrating code. Modern .NET also uses a very similar file to allow developers to manually give hints to the compiler for things that should be preserved, if needed.
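On modern .NET, that "very similar file" is a trimmer root descriptor. As a hedged sketch (the file name is only a convention, and the assembly/type names here are made up), it looks like this:

```xml
<!-- ILLink.Descriptors.xml: force the trimmer to preserve a whole type -->
<linker>
  <assembly fullname="MyApp">
    <type fullname="MyApp.Models.Customer" preserve="all" />
  </assembly>
</linker>
```

The file is then hooked up from the project file with a TrimmerRootDescriptor item, and everything it lists is treated as a root that trimming must keep.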
What
Dynamic="Required All" does is instruct the compiler to act as if all code in the entire application were being accessed via reflection.
Instead, with trimming enabled, the behavior would be the opposite: the compiler would start compiling just the
Main() method of the application and then from there it would crawl the entire package and create a graph to identify all code paths that could potentially be executed, discarding everything else.
So, all that’s really needed to get started with this is to just remove that directive, compile the app with .NET Native, and start experimenting. Creating a package without that line is also a very good idea (even without actually making sure the app works) to get a rough estimate of how much binary size could potentially be saved with trimming. This can also help to evaluate how much effort could be worth it, depending on how large this size delta is for your application. ⚗️
Common pitfalls and things that can break
Perhaps unsurprisingly, having the compiler physically delete code can have all sorts of unwanted side effects if one is not very careful. The issue is that in order for trimming to be safe, the compiler needs to be extremely accurate when trimming out code that should never be executed.
When reflection and dynamic programming are used instead, these assumptions can end up being inaccurate, and bad things can happen. What’s worse, many of these issues can only be detected at runtime, so thoroughly testing the code is crucial to avoid introducing little ticking time bombs all over the codebase when enabling trimming. 💣
To make some examples of things that might break:
- Type.GetType(string) might return null (the target type might even be gone entirely).
- Type.GetProperty(string) and Type.GetMethod(string) might return null as well, if the compiler has accidentally removed the member we’re looking for.
- Activator.CreateInstance(Type) might fail to run.
- Reflection-heavy code in general (eg. Newtonsoft.Json for JSON serialization) might crash or just work incorrectly, for instance by missing some properties of objects being serialized.
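As a minimal, hypothetical sketch of these failure modes (the type and member names below are invented for illustration), consider a helper that resolves a property purely by name at runtime. Nothing here is statically analyzable, so the trimmer cannot know which members must survive:

```csharp
using System;

// Invented sample type, for illustration purposes only
public class Product
{
    public string Name { get; set; } = "";
}

public static class TrimmingHazards
{
    // The property name is only known at runtime, so under trimming this
    // can silently start returning null even for properties that exist in
    // the source code.
    public static object? GetPropertyValue(object obj, string propertyName)
    {
        return obj.GetType().GetProperty(propertyName)?.GetValue(obj);
    }
}
```

Untrimmed, this behaves exactly as expected; with trimming enabled, the very same call can fail depending on what the compiler chose to remove, which is what makes these bugs so hard to spot.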
Some of these APIs can be slightly more resilient on .NET 6+ in cases where the target types are known at compile time, thanks to the new trimming annotations. For instance, an expression such as
typeof(MyType).GetMethod("Foo") will work with trimming too, because
GetMethod is annotated as accessing all public methods on the target type. Here that type is known at compile time (it’s
MyType), so all of its methods will be safe to reflect upon at runtime.
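These annotations live in System.Diagnostics.CodeAnalysis on .NET 6+ and can also be applied to your own helpers. A small sketch (the helper itself is hypothetical) of declaring that a parameter's public methods must be preserved:

```csharp
using System;
using System.Diagnostics.CodeAnalysis;
using System.Reflection;

public static class ReflectionHelper
{
    // The attribute tells both the trimmer and its analyzer that every
    // public method on whatever Type flows into this parameter has to be
    // kept, making the GetMethod call below trim-safe.
    public static MethodInfo? GetPublicMethod(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)] Type type,
        string name)
    {
        return type.GetMethod(name);
    }
}
```

Callers passing a typeof(...) expression then get the preservation guarantee automatically, while callers passing a Type resolved at runtime get an analyzer warning instead of a silent runtime failure.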
.NET Native also recognizes plenty of reflection APIs in similar cases, but the fact those APIs are not annotated and that there is no analyzer available means that it’s not possible to really know for sure whether a given call is safe to trim, which makes things trickier. That is, a lot more trial and error is generally involved on .NET Native compared to .NET 6, which shows how the ongoing effort to improve things in this area by the .NET team is making things a lot better over time. 🏆
That said, trimming will still cause problems in all cases where arguments are not known at build time, so trying to minimize reflection use in general is still very important. Furthermore, doing so will also allow you to save even more binary size: if you only need to access the method
Foo, why would you want to keep all public methods on that type instead of just that one? Proper trimming-friendly code can help you go the extra mile there! ✈️
There are two things that can be done to work around issues caused by trimming:
- Refactoring code to just stop using these APIs entirely.
- Adding some runtime directives to give hints to the compiler (“hey, I really need this metadata!”).
We’ll see how both approaches can be used and how to choose between the two in the rest of this post.
Key point for debugging trimming
There is a crucial aspect of trimming that must be kept in mind when using it. Since trimming generally isn’t used in Debug builds, the runtime behavior can differ between Debug and Release, meaning it’s extremely important to be very careful when investigating and testing out code.
It’s good practice to regularly test Release builds with trimming enabled (or Debug ones too if trimming can be enabled there as well) to account for things that might have broken because of it. This is especially recommended after making larger changes to your application. 🔍
Case study: retrieving target properties
The first example of refactoring from the Store is about removing a pattern that is relatively common to see in applications. You might have seen code similar to this yourself in the past, especially when looking at application code. Consider a method trying to dynamically retrieve the value of some well known property:
public string TryGetIdFromObject(object obj)
{
    return obj.GetType().GetProperty("Id")?.GetValue(obj) as string;
}
The
GetType().GetProperty(string)?.GetValue bit is quite convenient: you want to get the value of a property with a given name, and you know that this method will be used with a variety of different objects that may or may not have the property. This solves the issue in a single line of code, so it’s plausible that a developer would end up using this solution.
But, this is terrible for trimming: there is no way for the compiler to know exactly what object type will be received as input by this method, except for very specific cases where it might be able to inline the method into a callsite where the argument type was known. As a result, trimming might very well remove the reflection metadata for that property, causing that line to fail to retrieve any values.
The fix for this case is simple. Instead of relying on reflection to interact with members on any arbitrary object, the relevant members should be moved directly onto a type, with consumers leveraging the exposed type directly. This pattern is known as dependency inversion, and will be a core principle in your journey to making your codebase trimming-friendly.
In this case, we can achieve this by simply adding an interface:
public interface IHaveId
{
    string Id { get; }
}
And then that method would simply become:
public string TryGetIdFromObject(object obj)
{
    return (obj as IHaveId)?.Id;
}
This is just a minimal example, but we’re trying to introduce a mindset that rethinks how components depend on each other, in order to facilitate trimming. Let’s move to an actual example of code we have in the Store to showcase this.
In our product page, we display a button to let users perform various actions such as buying, renting, installing, etc. in an area that we refer to as “buybox”. In our code, we have a
SkuSelectorConfiguration type that’s one of the components involved in setting up this button. This type takes care of doing some checks on the available SKUs for the product (we have a
Sku data model we’ll be using in this example) to determine how the buybox button should be configured and displayed.
Part of the code we used to configure these buttons used to look like this:
// Some selector for buying products
return new SkuSelectorConfiguration
{
    Id = configurationId,
    PropertyPath = "AvailableForPurchase",
    // Other properties...
};

// Some selector for renting products
return new SkuSelectorConfiguration
{
    Id = SelectorConfigurationId.BuyOrRent,
    PropertyPath = "IsRental",
    // Other properties...
};

// Some selector for choosing a streaming service
return new SkuSelectorConfiguration
{
    Id = SelectorConfigurationId.StreamOption,
    PropertyPath = "ExternalStreamingService.Id",
    // Other properties...
};
These are a small set of all the possible SKUs. As you can see, there are quite a lot of different combinations to support. In order to make this system flexible, this
SkuSelectorConfiguration type was exposing a
PropertyPath property taking the path of the property to retrieve the value for. This would then be used in some other shared logic of the selector to determine the right buybox button configuration.
As a side note, this code should’ve used a
nameof expression where possible instead of hardcoded literals, which would’ve made it less error prone. Of course, doing so wouldn’t have solved the issues with respect to trimming, but it’s still good to keep this feature in mind in case you had to write something similar to this. For instance,
"IsRental" could have been written as
nameof(Sku.IsRental), which eliminates the chance of typos and ensures the literal remains correct even when refactoring.
As you’d expect, a lot of reflection is involved here. Specifically, that code needed to parse the various property path components of that input path, and explore the target
Sku object to try to retrieve the value of that (potentially nested) property being specified. When trimming is enabled, all the metadata for those properties might be gone entirely given the compiler would not see it being useful. As a result, this wouldn’t work correctly at runtime.
One way to fix this is to replace all of those string parameters with a Func<Sku, object?> parameter instead. This would allow consumers to still be able to express arbitrary paths to retrieve properties and nested properties for a given Sku object, while making the code statically analyzable.
This also has the advantage of making the code less brittle, as it’d be no longer possible to make typos in the property path, or to forget to update a path in case a property was renamed. We will see how these two very welcome side effects are a recurring theme when doing trimming-oriented refactorings.
Here is what the selectors shown above would look like with this change:
// Some selector for buying products
return new SkuSelectorConfiguration
{
    Id = configurationId,
    PropertyAccessor = static sku => sku.AvailableForPurchase,
    // Other properties...
};

// Some selector for renting products
return new SkuSelectorConfiguration
{
    Id = SelectorConfigurationId.BuyOrRent,
    PropertyAccessor = static sku => sku.IsRental,
    // Other properties...
};

// Some selector for choosing a streaming service
return new SkuSelectorConfiguration
{
    Id = SelectorConfigurationId.StreamOption,
    PropertyAccessor = static sku => sku.ExternalStreamingService.Id,
    // Other properties...
};
The result is still very easy to read, and the code is now perfectly trimming-friendly 🎉
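To show the consuming side of this refactoring (a simplified sketch with made-up stand-ins for the real Store types), the shared selector logic no longer needs to parse a property path via reflection; it just invokes the delegate:

```csharp
using System;

// Simplified, hypothetical stand-ins for the types discussed above
public sealed class Sku
{
    public bool AvailableForPurchase { get; init; }
    public bool IsRental { get; init; }
}

public sealed class SkuSelectorConfiguration
{
    public Func<Sku, object?> PropertyAccessor { get; init; } = static _ => null;
}

public static class SkuSelector
{
    // A direct delegate invocation: fully statically analyzable, with no
    // property-path parsing or reflection involved.
    public static object? GetConfiguredValue(SkuSelectorConfiguration configuration, Sku sku)
    {
        return configuration.PropertyAccessor(sku);
    }
}
```

Every property access now flows through code the compiler can see, so trimming keeps exactly what is used and nothing else.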
Of course, this is a very small change, and that’s why we started here to introduce the topic. Our goal here is raising awareness on what to look out for in order to make a codebase trimming-friendly. This is important, as many of these code paths might not be perceived as potentially problematic, unless looking at them in the context of trimming.
Once you’re in this mindset and can identify and make these changes, the payoff is well worth it!
Let’s look at two more examples from the Store that required more changes and restructuring in order to allow the code to support trimming.
Case study: type name for page layout selectors in XAML
The Store has a very dynamic UI, and one of the things that makes it so is the fact that we have many different templates and layouts to display content based on several criteria, such as adapting to different form factors and screen sizes, or rearranging or restyling templates depending on the device in use or product being viewed. The end result is a rich and beautiful UX for the end user.
This was one of the areas where some refactoring had to be done to make the code more trimmer friendly. Again, this is code that was working just fine before, and had been working fine for years, but that just wasn’t meant to be used together with trimming. As such, it makes for another good example of how you can change your approach to make code better suited for static analysis.
You can imagine we had some interface to act as a template selector, that might look like this:
public interface ILayoutSelector
{
    string SelectLayout(ProductModel product);
}
Objects using this interface will be responsible for selecting a given layout to use for an input product, and return a string representing the resource key to get that layout from the XAML resources. All XAML resources are preserved automatically, so we don’t need to worry about trimming interfering with that aspect.
We had several controls in the Store that were working with these layout selectors in an abstract way. That is, they would be initialized with the type of layout selector to use, and then they’d try to create an instance and use that to select layouts for input products. This made them very flexible and easily reusable across different views.
The way they were initially implemented, though, was not trimming-friendly. We had something like this:
public abstract class PageLayout
{
    private ILayoutSelector _layoutSelector;

    public string LayoutSelectorTypeName
    {
        set
        {
            try
            {
                _layoutSelector = (ILayoutSelector)Activator.CreateInstance(Type.GetType(value));
            }
            catch (TargetInvocationException e)
            {
                Log.Instance.LogException(e, $"Failed to create a {value} instance.");
            }
            catch (ArgumentNullException e)
            {
                Log.Instance.LogException(e, $"Failed to find type {value}.");
            }
        }
    }

    // Other members here...
}
This was resilient to errors, including the constructors of the input layouts throwing some exception (in that case the layout selector being
null would instead gracefully be handled by the rest of the code in the control), and allowed developers to easily pass arbitrary selectors in XAML, like so:
<DataTemplate x:Key="...">
    <local:AppPageLayout LayoutSelectorTypeName="..." />
</DataTemplate>
Now, what if we wanted to enable trimming? Our layout selectors (eg.
StandardPdpLayoutSelector in this case) will be seen by the compiler as never being instantiated: there are no direct calls to their constructors anywhere in the codebase. This is because we’ve hardcoded the type name as a string, and are constructing an instance with
Activator.CreateInstance (what’s worse, the non-generic overload, which gives the compiler no type information whatsoever).
As a result, when performing trimming, the compiler would just completely remove these types, causing that
Type.GetType call to return
null, and the layout selector instantiation to fail completely. What’s worse, this is all happening at runtime, so there’s no easy way to catch this in advance.
Furthermore, even ignoring this, the code above is particularly error prone. Each type name is just a hardcoded string that might contain some typos or might not be updated when the type is renamed or refactored. We want to find a solution that’s both less brittle and statically analyzable.
A first solution would be to just change
PageLayout to directly accept an
ILayoutSelector instance, and then instantiate the layout selector directly from XAML, like so:
<DataTemplate x:Key="...">
    <local:AppPageLayout>
        <local:AppPageLayout.LayoutSelector>
            <controls:StandardPdpLayoutSelector/>
        </local:AppPageLayout.LayoutSelector>
    </local:AppPageLayout>
</DataTemplate>
This is much better already: the type is directly referenced from XAML, there is no chance of typos, refactoring is covered (the XAML would be updated), and the compiler can now easily see this type being directly used. But we’re still not covering all the cases we were before. What if instantiating a layout selector threw an exception? This might be the case for special selectors that use other APIs to help select which layouts to use, such as getting more information about the current device.
To work around this, we can once again invert our previous logic, and introduce a factory interface:
public interface ILayoutSelectorFactory
{
    ILayoutSelector CreateSelector();
}
It’s critical for this interface to be non-generic, since we’ll need to use this in XAML.
Now, we can write a very small factory associated to each of our selector types. For instance:
public sealed class StandardPdpLayoutSelectorFactory : ILayoutSelectorFactory
{
    public ILayoutSelector CreateSelector()
    {
        return new StandardPdpLayoutSelector();
    }
}
These factory instances simply act as very thin stubs for the constructors of the target type we want to instantiate. These are extremely small objects that are also only ever instantiated once in each control that uses them, so they’re essentially free and add no overhead whatsoever. Not to mention, they’re still much faster than going through an
Activator.CreateInstance call anyway.
Now, we can update the definition of
PageLayout to use this interface instead:
public abstract class PageLayout
{
    private ILayoutSelector _layoutSelector;

    public ILayoutSelectorFactory LayoutSelectorFactory
    {
        set
        {
            try
            {
                _layoutSelector = value.CreateSelector();
            }
            catch (Exception e)
            {
                Log.Instance.LogException(e, $"Failed to create a selector from {value}.");
            }
        }
    }

    // Other members here...
}
With this, all callsites in XAML will look like this:
<DataTemplate x:Key="...">
    <local:AppPageLayout>
        <local:AppPageLayout.LayoutSelectorFactory>
            <controls:StandardPdpLayoutSelectorFactory/>
        </local:AppPageLayout.LayoutSelectorFactory>
    </local:AppPageLayout>
</DataTemplate>
And that’s it! This gives us all the benefits we’re looking for:
- It’s validated at build time so there’s no chances for typos or issues if code is refactored.
- It’s completely statically analyzable given all types are now directly referenced.
We can now enable trimming without worrying about this part of the codebase causing issues.
Also worth mentioning how once again, making the code reflection-free not only solved our issues with trimming here, but also ended up making the code more resilient in general, which reduces the chances of bugs being introduced. This is something that’s extremely common with this type of refactoring, and a good reason why you should keep these principles in mind (especially for new code), even if you don’t plan to enable trimming in the short term.
Case study: data contract attributes factory
Just like you’d expect for an app that interacts with the web, the Store has a lot of code to deal with web requests and JSON responses.
One aspect of our architecture that required a lot of restructuring to make it trimmer friendly was creating the data models that wrapped the deserialized JSON responses from web requests. We call these response objects data contracts, and we then have our own data models wrapping each of these instances to layer additional functionality on top of them. The rest of our code, especially the one in the UI layer, only ever interacts with these data models, and never with the data contracts themselves.
To make a minimal example, you could imagine something like this:
public sealed record ProductContract(string Id, string Name, string Description); // Other properties...

public sealed class ProductModel
{
    private readonly ProductContract _contract;

    public ProductModel(ProductContract contract)
    {
        _contract = contract;
    }

    // Public properties here for the rest of the app...
}
Now, this is particularly simple here, but there are some key aspects to consider:
- Many of our data models can be constructed from and/or wrap different contract types. That is, they may have multiple constructors, each receiving a different contract object as input, and then initializing the state of that data model. There is no specific contract that a data model has to respect: each model might only map to a single contract types, or several of them, depending on the specific case.
- The code doing web requests doesn’t directly interact with the contract types used to deserialize the JSON responses. These types are hidden away and considered an implementation detail, especially given that in many cases they’re not even known at build time, but only when the JSON responses are deserialized (since many responses are polymorphic). All that the code doing a request knows is what endpoint it wants to use, what parameters it wants to pass, and what data model
T it wants to receive back as a response.
The architecture to handle this that the Store has been using since the start was (understandably) based on reflection. Specifically, there was a
[DataContract] attribute that was used to annotate data models with their respective contract types. These attributes were then being retrieved at runtime and used to detect applicable constructors to lookup and invoke to create data model instances (again through reflection). It looked something like this:
[AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
public sealed class DataContractAttribute : Attribute
{
    public DataContractAttribute(Type contractType)
    {
        ContractType = contractType;
    }

    public Type ContractType { get; }
}
A data model, like the one mentioned in the example above, would then be annotated like so:
[DataContract(typeof(ProductContract))]
public sealed class ProductModel
{
    public ProductModel(ProductContract contract)
    {
        // Initialization here...
    }

    // Rest of the logic here...
}
If a data model had multiple constructors accepting different data contract types, it would then also be annotated with additional
[DataContract] attributes to indicate that. Then at runtime, this was all being used as follows:
public static T? CreateDataModel<T>(object dataContract)
    where T : class
{
    // Parameter validation here... (omitted for brevity)

    // If the data contract is already of type T, nothing else to do
    if (dataContract is T model)
    {
        return model;
    }

    // Get all [DataContract] attributes on the data model type
    object[] contractAttributes = typeof(T).GetCustomAttributes(typeof(DataContractAttribute), inherit: false);

    // Get all data contract types from those retrieved attributes
    IEnumerable<Type> contractTypes = contractAttributes.Select(static a => ((DataContractAttribute)a).ContractType);

    // Loop through all declared contract types for the data model type
    foreach (Type contractType in contractTypes)
    {
        // If the contract type is not of this type (nor assignable), skip it
        if (!contractType.IsAssignableFrom(dataContract.GetType()))
        {
            continue;
        }

        // Locate constructor that accepts this contract type as parameter
        ConstructorInfo constructor = typeof(T).GetConstructor(new[] { contractType })!;

        return (T)constructor.Invoke(new[] { dataContract });
    }

    return null;
}
With this method, each caller can just do a web request, get some
object back representing the deserialized web response, and invoke this method to try to create a
T data model instance. This works just fine, and for people that might have worked in similar applications before, a setup like this might look familiar. There is nothing inherently wrong with this, and in fact it’s just using reflection, a very powerful tool that’s a fundamental building block of .NET, to implement an API that makes things simpler for callers. It is also making code self-descriptive as each data model can declare what contracts it supports right over its own type definition.
But, this does have some considerable drawbacks:
- There’s no build-time validation that a data model type actually exposes a constructor matching the contract that it declared via its attributes. It’s easy to just forget to add a constructor and then have code trying to look it up via reflection later on just fail and crash.
- There is also no build-time validation that each constructor has a matching attribute: it’s more than possible to just forget to add an attribute, causing that data model to fail to be created. If you’re wondering “Why have the attributes in the first place, instead of just going through all constructors directly?”, that is certainly possible, but doing so could introduce other problems, in case a data model needed to expose additional constructors that are meant to be called manually elsewhere. Having the attributes instead guarantees that only those meant to be used for automatic deserialization will be looked up.
- The elephant in the room here is the fact that as far as the compiler is concerned, none of those constructors are ever referenced. As a result, trying to enable trimming here would (and did) cause the compiler to just remove all those methods entirely, and then the code to fail to run because those constructors it was trying to look up via reflection were now nowhere to be seen.
There are multiple ways to work around this, and one is to manually give some hints to the compiler to tell it to preserve additional metadata and code here. For instance, you might look at the .NET Native troubleshooting guide and the
MissingMetadataException troubleshooting tools for types and methods and come up with something like this:
<!-- Directives in Default.rd.xml -->
<Directives xmlns="">
    <Application>
        <Type Name="MicrosoftStore.DataModels.DataContractAttribute">
            <AttributeImplies Activate="Required Public" />
        </Type>
    </Application>
</Directives>
Such a directive would inform the compiler to preserve the public constructors for all types annotated with this
[DataContractAttribute], which would avoid having the app crash. But, even this still has several drawbacks:
- We haven’t solved the issue of code being brittle and error prone.
- We still suffer the same performance issues as before (this reflection-based approach is very slow, even using some caching to avoid retrieving attributes and constructors multiple times).
- …We have just blocked trimming here, we haven’t actually fixed the issue. That is, while this might prevent the crashes we would’ve experienced otherwise, it’s more of a workaround and not an actual solution. With directives like this, the compiler will be forced to preserve more metadata and code than what’s actually needed, meaning it won’t give us the results we’re looking for.
- This approach introduces another problem too: if we start accumulating directives like this for the whole codebase, we’re now introducing yet another possible failure point, where it needs to be manually kept in sync with changes in the rest of the code to avoid causing crashes at runtime. What’s worse, we not only won’t have any kind of build-time checks to help us here, but this is something that cannot even be detected in normal Debug builds as trimming is not generally used there. This makes bugs especially difficult to spot.
Let’s take a step back. As we said, the current solution is fine on its own, but it’s not what you’d want to write if you were building a system like this from scratch, with support for trimming in mind. This is what we meant by trimming requiring a shift in mindset when writing code: can we rethink the way this code is structured to make it statically analyzable and no longer rely on reflection?
One thing immediately comes to mind here when looking at the code: like many libraries that heavily rely on reflection, the control flow here has each type self-describing its contract through attributes, and then there is a centralized piece of logic elsewhere that reads these annotations and performs logic to create instances of these data model types.
What if we turned this upside down and instead made each type be responsible for its own instantiation logic, with the rest of the code only delegating to these helpers in each data model type?
This kind of change is exactly what we ended up doing in the Microsoft Store. Instead of using attributes and the code we just showed, we introduced this interface to clearly define a “data contract”:
public interface IDataModel<TSelf> where TSelf : class { TSelf? Create(object dataContract); public static class Factory { public static TSelf? Create(object dataContract); } }
Enter
IDataModel<TSelf>. This interface is the new contract that each data model will implement, and it will expose a single API (the
Create method) to allow each data model to provide the logic to create an instance of that type directly from there. This is an example of the dependency chain inversion that is often very beneficial for trimming: since we now have the logic be implemented at the leaf nodes of our class diagram, there is no need to use reflection to introspect into these types from a central place later on to try to figure out what to do.
This is how the data model showed earlier looks like using this new interface:
public sealed class ProductModel : IDataModel<ProductModel> { private SomeDataModel(ProductContract dataContract) { // Initialization here... } ProductModel IDataModel<ProductModel>.Create(object dataContract) { return dataContract switch { ProductContract productContract => new(productContract), // As many other supported contracts as we need here... _ => null }; } // Rest of the logic here... }
With this approach, the
Create is explicitly implemented to only ever be visible when accessed through the interface, since that’s the only case where it’s meant to be invoked.
The implementation is stateless (this will be important later) and only does type checks on the input and forwards the downcast object to the right constructor. This ensures that the compiler can “see” a reference to each constructor being invoked, which means it will be able to preserve it automatically (but not its reflection metadata). This gives us both functionality with trimming enabled, as well as reduced binary size, as only the actual code that needs to run is kept.
Now, these
Create methods from each data model have to be invoked from somewhere, but on what instance? We’re trying to create some
T object, so we have no
T to invoke this interface method upon. To solve this, here’s the implementation of that static
Create method in the nested
IDataModel<TSelf>.Factory class:
public static class Factory { private static readonly TSelf _dummy = (TSelf)RuntimeHelpers.GetUninitializedObject(typeof(TSelf)); public static TSelf? Create(object dataContract) { // If the data contract is already of the requested type, just return it if (dataContract is TSelf dataModel) { return dataModel; } // If the the model type is an IDataModel<T>, try to create a model through its API if (_dummy is IDataModel<TSelf> factory) { return factory.Create(dataContract); } return null; } }
Here, we’re relying on the
RuntimeHelpers.GetUnitializedObject API, which is a special method that will create a dummy instance of a given type, without running a constructor at all (which means the type doesn’t even need to have a parameterless constructor at all). This API is also recognized by .NET Native and will work with trimming enabled as well. What we’re doing is using it to create a cached dummy instance of some
TSelf type, and then using that to call the
Create factory method if that object implements
IDataModel<TSelf>.
This gives us extra flexibility because it means we can check support for this interface at runtime, without needing to have the self constraint on the generic type parameter bubble up to all callers. That is, this both makes our public API surface simpler, and allows using this code for other types as well, such as when trying to deserialize some web response as eg. a simple
IDictionary<string, string> object.
With this, callers can now be rewritten as follows:
object response = await GetSomeWebResponseAsync(); // Before DataModel model = DataSources.CreateDataModel<DataModel>(response); // After DataModel model = IDataModel<DataModel>.Factory.Create(response);
Same functionality for callers, but completely reflection-free and statically analyzable! 🎊
This new version is also completely allocation free (and much faster), as we’re no longer using those reflection APIs. Making all of this faster and reducing allocations wasn’t even the main goal here, but it’s just a nice side effect that comes often with code refactorings that are meant to remove reflection and make the code more trimming-friendly.
Picking your battles: when not trimming is ok
As we just showed, there are cases where not using reflection might not be possible, and where it might make more sense to just give some hints to the compiler and sacrifice binary size a bit. This is the case for our JSON serialization in the Store. Currently, we’re using
Newtonsoft.Json, which is completely reflection-based.
We want to stress that this is not inherently bad: this is a great library that is incredibly powerful and easy to use, and there’s nothing wrong with it. Moreover, trimming has only been a real concern in the .NET world in recent years, and reflection has been a fundamental building block for .NET applications since day 1, so it makes perfect sense for this to use reflection. It has also been written way before Roslyn source generators existed.
While we start investigating eventually moving to
System.Text.Json and the new source generator powered serializers in the Store in the future, we still need to get this to work with trimming today. This is a good example when it’s completely fine to just give up on trimming, and to just use some directives to inform the compiler about what we need to do. We can still enable trimming in the entirety of our application package, and just preserve reflection metadata specifically for the data contract types we need to deserialize from web requests.
That is, we can just add something like this to our runtime directives:
<!-- Directives in Default.rd.xml --> <Directives xmlns=""> <Application> <Assembly Name="MicrosoftStore.DataContracts" Dynamic="Required Public"/> </Application> </Directives>
This will preserve all metadata and logic for public members of all types in our data contract assembly. That means that when
Newtonsoft.Json tries to introspect them to setup its deserializers, it will work as expected as if trimming wasn’t enabled – because it just wouldn’t be, specifically for those types. This is a good compromise between functionality and binary size: being able to still trim everything else in the whole package we can still get all the advantages that trimming brings, while only giving up on a few KBs for the types in this assembly.
Always think about where it makes sense to invest time when annotating a library for trimming: it may end up you’ll save most of the binary size without reaching 100% coverage, while reducing the time needed to implement the feature by quite a lot. Just like when optimizing code, always remember to take a step back and consider the bigger picture. 💡
Contributing back to open source
Since being more open and contributing back into open source is something we deeply care about, we took this opportunity to apply the things we learned here to improve code from the OSS projects we were using.
For instance, one of the things that broke when we first enabled trimming in our internal builds (we knew there would’ve been small issues to iron out) were the toast notifications. These are displayed by the Store in several cases, such as when the recently announced Restore Apps feature is triggered, to inform users that apps are being automatically installed on their new device.
To display these toast notifications, we’re using the
ToastContentBuilder APIs from the Windows Community Toolkit. Turns out, those APIs were internally heavily relying on reflection. As a result, when we enabled trimming and started testing Store builds in more scenarios, this is how our notifications started showing up:
…Oops! 🙈
This notification would normally have a meaningful title, a description, and an image. Due to trimming though, reflection metadata had been stripped from the
ToastContentBuilder APIs containing the information on all the components we wanted to display. This caused the serialization logic to just generate a blank XML document to register the notification, causing it to only show up with the app title and some default text.
We could fix this by manually preserving directives for this assembly as showed earlier:
<!-- Directives in Default.rd.xml --> <Directives xmlns=""> <Application> <Assembly Name="Microsoft.Toolkit.Uwp.Notifications" Dynamic="Required Public"/> </Application> </Directives>
Similarly to the data contract types mentioned above, this directive informs the compiler to preserve all metadata information for all public types in the whole
Microsoft.Toolkit.Uwp.Notifications assembly. This does fix the issue, but we decided to also contribute back to the Toolkit to refactor those APIs to stop using reflection entirely. This would allow consumers to not have to deal with this at all, and it would also save some more binary size given the compiler would then be able to delete even more code when compiling the application.
If you’re interested in seeing the kind of changes involved, you can see the PR here.
Binary size difference in the Microsoft Store
Now that it’s clear what kind of changes are required in order to make a codebase trimming-friendly, you might be wondering how much size all of this actually saves. Was all this effort worth it?
Here’s a size comparison for two builds of the Microsoft Store, with the only difference being that one has trimming enabled as well:
You can see that the app package with trimming enabled (on the left) is 25% smaller! 🎉
This means the compiler could literally delete one quarter of our entire binary thanks to this. The package in the screenshot includes both the x86, x64, and the brand new native Arm64 binaries for the Microsoft Store. It being more compact not only means startup is faster as there’s less to load from disk, but updates are faster too as there’s less to download, as well as steady state performance in some scenarios.
Final thoughts
We’ve seen what trimming is, the pitfalls it has and the kind of work it can require to enable it without issues. It was certainly not straightforward to enable in the Store, and while working on this we certainly went through a good amount of crashes, blank pages, and all sorts of invalid states. It was challenging but also lots of fun, and we do feel the improvements it brought were worth the effort!
We have very recently pushed a new version of the Microsoft Store with trimming enabled to Windows Insider users, and we are looking forwards to everyone being able to try this out along with all the other new improvements we have been working on! ✨
While this experiment was successful, we feel like this is a good example of just how tricky it can be to enable trimming on an existing, large codebase that hasn’t been written with support for this in mind, which is why we wrote this blog post.
As mentioned in our Microsoft Build talk on Windows applications performance, it’s important to always keep performance and trimming in mind when writing new code. We recommend using these two concepts as guiding principles when developing, as doing so will save you so much time down the line if you decide to go back and enable trimming, and even without considering that it will still likely force you to write code that is more resilient and less error prone, as the examples above showed.
Adoption tips
If you’re thinking about enabling trimming in a published application, here’s a few additional key points to summarize the pros and cons of enabling it, and good practices you can follow to help minimize risks:
- Trimming brings great benefits for both publishers and users alike, but it’s not risk-free. To prevent bad surprises, you should consider doing additional testing and adopting your release pipeline to it.
- Consider releasing your application to a small group of users before rolling it out to 100% of your target market (eg. going from employees only, to beta testers, to insider users only, etc.).
- Automated UI testing can help avoid regressions, as these tests would be using the actual final build artifacts in the same conditions as the end users. It might not always be possible to specifically test for trimming with just local unit tests, due to different build/run configurations.
- Consider performing extensive manual testing as well, especially in areas that might have undergone extensive refactoring to make them trimmer-friendly, to ensure the changes were effective in solving trimming issues.
Additional resources
If you’re interested in learning more on how trimming and runtime directives work on .NET Native in particular, you can also refer to these blog posts from the .NET Native team on the topic. They contain additional information on how the .NET Native toolchain performs trimming, and examples of common scenarios that might cause issues, and how to fix them:
- .NET Native introduction
- Dynamic features in static code
- Help! I hit a MissingMetadataException!
- Making your library great
Additionally, .NET 6 also has built-in support for trimming (see docs here), meaning it not only supports it for self-contained builds, but also ships with a brand new analyzer and set of annotations (some of which we’ve shown earlier in this post) that greatly help identifying code paths that might be trimming-unfriendly and potentially cause issues.
For more on this, and especially for library authors that want to ensure their packages will allow consumers to confidently enable trimming in their applications, we highly recommend reading the docs on this here from the .NET team. They come bundled with lots of useful information, code samples and useful tips to help you enable trimming in your own code! 🚀
Happy coding! 💻
In the section about refactoring string literals, I believe you have a typo in the new code.
Shouldn’t that be sku.AvailableForPurchase? Granted, if this were a real code change, the complier would obviously catch this, so point taken.
Whoops, yup that was indeed a typo, fixed! Thanks 😄
.NET Native being now in maintenance mode, what’s your plan going forward? Will the Microsoft Store eventually be migrated to WinUI & .Net?
BTW, the one thing I’m missing in .NET Native is ARM64EC support, which .Net has no plans to adopt anyway.
The Microsoft Store is already running on WinUI and .NET as a UWP app.
All of the wisdom in this blog post can be applied to any kind of code trimmer. | https://devblogs.microsoft.com/ifdef-windows/leveraging-trimming-to-make-the-microsoft-store-faster-and-reduce-its-binary-size/ | CC-MAIN-2022-33 | refinedweb | 7,511 | 50.46 |
Getting access to swf bridge in loaded sub applicationGreen Goby Jun 29, 2010 7:33 AM
In my appplication that has been loaded, I want to get a handle to the swf bridge so I can communicate with the parent application. The parent application, via the SWFLoader has direct access to the bridge, but what is the best / most reliable way to get access to the bridge in the loaded application?
I see in some of the flex sdk code they are casting the SystemManager to a DisplayObject and using loaderInfo.sharedEvents. That just doesn't seem right.
Thanks in advance for any help.
Irv
1. Re: Getting access to swf bridge in loaded sub applicationFlex harUI
Jun 29, 2010 4:46 PM (in response to Green Goby)
SWFLoader.swfBridge?
2. Re: Getting access to swf bridge in loaded sub applicationJean Demonceau Jun 30, 2010 2:47 AM (in response to Green Goby)
Your loaded application, (in the main file !) must dispatch events like this :
var event:CustomEvent = new CustomEvent();
loaderInfo.sharedEvents.dispatchEvent(event);
If you want to dispatch events from a subcomponent of your loaded application, do it via a mainFile reference.
var event:CustomEvent = new CustomEvent();
myMain.loaderInfo.sharedEvents.dispatchEvent(event);
The application that loads the other one must listen for events like this :
loader.source = "subApplication.swf";
loader.addEventListener(Event.COMPLETE, handlerSubApplicationComplete);
//The sub application is fully loaded, now I can listen for custom events
private function handlerSubApplicationComplete(e:Event):void
{
var subApplication:EventDispatcher = (e.target.mx_internal::contentHolder as Loader).contentLoaderInfo.sharedEvents;
subApplication.addEventListener("myCustomEvent",handlerCustomEventReceived, false, 0, true);
}
3. Re: Getting access to swf bridge in loaded sub applicationGreen Goby Jun 30, 2010 5:42 AM (in response to Jean Demonceau)
Thank you for your answers. My question is really if I am not in a displayObject of the loaded application, how do I get a handle to the loaderInfo? What is the best and most reliable way to do that.
I know in the loading application I can get a handle via SWFLoader.loaderInfo. But in the loaded application, (and not in a display object) how does one get a handle to the loaderInfo.
Thanks in advance.
Irv
4. Re: Getting access to swf bridge in loaded sub applicationJean Demonceau Jun 30, 2010 6:12 AM (in response to Green Goby)
I'm used to make a Singleton in all applications I write and to set the reference of the main in it.
So, anywhere in my applications, I can write : mySingleton.main.loaderInfo....
Here is an example of singleton class
package
{
import flash.events.EventDispatcher;
public class MySingleton extends EventDispatcher
{
// Singleton
private static var _instance:MySingleton ;
public function MySingleton ()
{
if(_instance != null)
throw new Error("Singleton can only be accessed through MySingleton.getInstance()");
else
{
//Instiante the instance
_instance = this;
}
}
//Singleton
public static function getInstance():MySingleton
{
if(_instance == null) _instance = new MySingleton();
return _instance;
}
[Bindable]
public function get main():MyApp
{
return getInstance()._main;
}
public function set main(value:MyApp):void
{
getInstance()._main = value;
}
}
}
In your MyApp, in the creationComplete handler, you write :
MySingleton.getInstance().main = this;
And anywhere in your application you dispatch your events like this :
MySingleton.getInstance().main.loaderInfo.....
5. Re: Getting access to swf bridge in loaded sub applicationFlex harUI
Jun 30, 2010 9:21 AM (in response to Green Goby)
In the general sense, you can't find out. In Flex, you can try using
SystemManager.getSWFRoot to get a SystemManager that loaded the class for
the non-display object. I would recommend passing in a reference to the
SystemManager to your non-display object though.
6. Re: Getting access to swf bridge in loaded sub applicationGreen Goby Oct 14, 2010 9:27 AM (in response to Green Goby)
In looking through the code, what about this?
Singleton.getClass("mx.managers::IMarshalSystemManager");
IMarshalSystemManager then lets me call the method useSWFBridge, then let's me get a handle to swfBridgeGroup, which then let's me get a handle to the parentBridge.
It seems this is reliable and is the way the internal classes get their work done.
Does anyone see a problem with this?
Thanks,
Irv
7. Re: Getting access to swf bridge in loaded sub applicationFlex harUI
Oct 14, 2010 1:52 PM (in response to Green Goby)
The parentBridge should be:
DisplayObject(systemManager).loaderInfo.sharedEvents;
8. Re: Getting access to swf bridge in loaded sub applicationGreen Goby Oct 29, 2010 1:23 PM (in response to Flex harUI)
Thank you, that worked great. For what it is worth, doing a Singleton.getInstance on the IMarshalSystemManager actually throws a runtime error. It can find the class, but then doing getInstance on that class, IMarshalSystemManager doesn't have that method, which throws a runtime error. Seems like this violates the Singleton api, but I was able to use the sharedEvents on the main application to communicate.
My question now is if I send an event, and I want the receiver of the event to be able to set some data on the event for the original caller to get, that doesn't seem to be working. I am trying to set a Boolean and a String property, but the original sender of the message isn't getting the values.
In the sender, the code looks like:
var bridgeEvent:BridgeEvent = new BridgeEvent( BridgeEvent.SUB_APPLICATION_INITIALIZED_TYPE, null, new Date().milliseconds + "" );
bridgeEvent.isRequest = true;
bridge.dispatchEvent( bridgeEvent );
// after dispatching get the response data
this._idForBridgeEvents = bridgeEvent.responseData as String;
In the receiver it does:
var eventObject:Object = event;
eventObject.isHandled = true;
eventObject.responseData = this._id;I have verified that the receiver is putting in the right data, but the sender is never seeing the Boolean or String.Is there something I can do here? I thought this was supposed to be the general approach. I can always send back another event with response data, but I'd like to avoid that here.Thanks,Irv
9. Re: Getting access to swf bridge in loaded sub applicationFlex harUI
Oct 29, 2010 1:56 PM (in response to Green Goby)
Maybe the event is getting cloned or marshalled unexpectedly? Use the
debugger to verify you are working with the right instance.
10. Re: Getting access to swf bridge in loaded sub applicationGreen Goby Oct 29, 2010 2:54 PM (in response to Flex harUI)
I am literally only doing what you see in the code. This is supposed to work across the bridge right? I went looking through the source code for flex and I could only find examples of sending data, not using "return" types as I am here.
Is there anything else that can make it get cloned? This is going across different ApplicationDomains.
Irv
11. Re: Getting access to swf bridge in loaded sub applicationFlex harUI
Oct 29, 2010 3:12 PM (in response to Green Goby)
If it gets re-dispatched that will clone it. Set a breakpoint at the
dispatchEvent line. Examine the event. The debugger should show you that is
is object @2398231 or whatever. Set a breakpoint in the handler, see if the
event is the same object. After setting it, use "step into" until you get
back to dispatchEvent to make sure no other handler stomped those
properties.
12. Re: Getting access to swf bridge in loaded sub applicationGreen Goby Nov 16, 2010 8:41 AM (in response to Flex harUI)
I probably should create a new thread, but for anyone else who has to do all this stuff, this is a nice thread with lots of answers.
Our application is like Mosaic but just for applications that the company I work for produces. (So a limited set of like 10 applications) I am trying to decide if I should call unloadAndStop on the SWFLoader when I go to another app. The business folks do want the app to stop, but I am wondering if there is another way to stop the app, but if they come back "reload" it without having to reload the SWF again. I think from looking at the Loader and SWFLoader code it knows if it has loaded the SWF file before, but I am wondering about side affects at this point. Is it safer to unload the SWF? Any memory leaks or gotchas with doing that?
Thanks,
Irv | https://forums.adobe.com/thread/670143 | CC-MAIN-2017-30 | refinedweb | 1,373 | 54.42 |
Hi, I'm new to Clarifai, I just tried to train my first model and used quickstart guide. This is the code I wrote for my script:
import clarifai.rest import ClarifaiApp
myAPIKey = "someAPIKey"
app = ClarifaiApp(api_key=myAPIKey)
for i in range(1,5):
filePath="some file path/" + str(i) + ".jpg"
app.inputs.create_image_from_url(url=filePath, concepts=["concept1"], not_concepts=["concept2"])
for i in range(1,5) :
filePath = "some file path2/" + str(i) + ".jpg"
app.inputs.create_image_from_url(url=filePath, concepts=["concept2"], not_concepts=["concept1"])
model = app.models.create(model_id="model1", concepts=["concept1, concept2"])
model = model.train()
print mode.predict_by_url(url="some url to test image")
It is almost exactly like the quickstart guide but the terminal gives me the error : "You cannot predict with an un-trained model, please call /models/{model_id}/versions to check the status of the model.
I'll appreciate if you can help me with thisThanks
Hey @smtabatabaie! The problem here is likely that the train operation is taking a couple of seconds and the predict is running before it finishes. You'll want to add a timeout or so that it waits to run the last line of code.
Hi , @jared . Thanks very much.So I tried to comment the prediction lines and put them in another script like this:
from clarifai.rest import ClarifaiApp
mAPIKey = "my api key"
app = ClarifaiApp(api_key=mAPIKey)
model = app.models.get(" model1")
print model.predict_by_url(url="some url to a test image")
but I'm still getting the same error and it doesn't seem to be related to the time between training and predicting. Thanks very muchBest
Hmm - were you ever able to do a successful model.train() on this particular one? What's the name of the model that you're working with?
Thanks very much @jared , The model name is "ikco1". I don't know how to check if model.train() has been successful or not.Thanks again very much
No problem @smtabatabaie! I'm seeing the following message in our database for your last train operation, which means that you'll need to add these concepts to at least one image in your training set:
Positive examples missing for concepts: [Runna, Dena]
Let me know if you need any further help.
Hi @jared, Thanks very much. I tried to do it for a new application and name a new model "ikco4" and start over. I trained the model with 5 images for each concept and keep it simple. And waited for several minutes then predict in another script. But again I get the same message and in response details I get "You cannot predict with an un-trained model, please call /models/{model_id}/versions to check the status of you model."I'll appreciate if you can help me with this.Thanks
I think I had made a silly mistake in my code on creating the model in concepts parameter. my code was this:
model = app.models.create(model_id="some_model_id", concepts=["concept1 , concept2"]
Whereas I should have wrote :
model = app.models.create(model_id="some_model_id", concepts=["concept1", "concept2"]
Thanks again very much I think it works now
Ah - sorry that I didn't catch that! | http://community.clarifai.com/t/you-cannot-predict-with-an-un-trained-model-error/910 | CC-MAIN-2018-47 | refinedweb | 527 | 66.33 |
Need Python script to decode or encode in base64
Hi,
Any one know the script to decode or encode in base64 because in FileZilla FTP client using the base64 encryption to encrypt passwords i need to decode those passwords for our record. Normally i am using mobile phone for travel so i need the python script for the same.
Note: I need prompt for input the encoded value & decoded value.
There's the
base64module in the standard library. It can encode and decode base64, base32 and base16 ("hex").
FYI, encoding is not the same as encryption. Encoding means converting data to a different format that can be converted back to the original format. For example, converting audio files to MP3, saving an image as PNG, writing a plain text file as UTF-8 or encoding data in base64 are all encoding, not encryption. In all cases you are able to (and want to) get back the original data.
Encryption means converting data in a way that normal people cannot easily find out the original data, but the intended recipient can. For example HTTPS (which uses SSL/TLS) is encryption - by looking at the data that is sent between client and server, you cannot see what they are talking about, but if you access a website over HTTPS your browser can display the result.
- filippocld
import base64 encsting = raw_input('Encoded Value:') decstring = base64.b64decode(encsting) print decstring
@filippocld Thank you.
@filippocld Hi, I have to do some modification, raw_input (From clipboard) without return a value & the output pasted to clipboard automatically. Please help me.
import base64
encsting = raw_input ('Encoded Value:' "from clipboard")
decstring = base64.b64decode (encsting)
print 'Decoded Value:',decstring "to clipboard
- filippocld
import base64 import clipboard encsting = clipboard.get() decstring = base64.b64decode(encsting) clipboard.set(decstring)
@filippocld Thank you very much.
@filippocld I need some clarification in the below script.
from __future__ import absolute_import from __future__ import print_function import base64 import clipboard decoded = clipboard.get() encoded =base64.b64encode (decoded) clipboard.set(encoded) print("Encoded Value:" ,encoded, end=' ')
When i run the script in Python 2.7 i am getting output but in python 3.5 i am getting error. Please help me.
you did not say what error you get, but i can guess it relates to unicode vs bytes.
base64.encode takes a bytes object, you are giving it a string.
try replaceing decoded with decoded.encode('ascii') before passing into base64.b64encode | https://forum.omz-software.com/topic/2974/need-python-script-to-decode-or-encode-in-base64 | CC-MAIN-2018-43 | refinedweb | 404 | 58.99 |
CPA 2011
a. Spontaneous Financing
b. Spontaneous financing is the amount of working capital that arises naturally in the ordinary course of business without the firm's financial managers needing to take deliberate action.
c. Trade credit arises when a company is offered credit terms by its suppliers.
d. Accrued expenses, such as salaries, wages, interest, dividends, and taxes payable, are another source of (interest-free) spontaneous financing.
e. The portion of capital needs that cannot be satisfied through spontaneous means must be the subject of careful financial planning.
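Trade credit looks free, but forgoing a cash discount has an implicit annualized cost. The following sketch (not part of the notes themselves; the function name is ours) computes that cost using the standard approximation, illustrated with the 2/10, net 30 terms discussed later in these notes:

```python
def cost_of_forgoing_discount(discount_pct, discount_days, net_days, days_per_year=365):
    """Approximate annualized cost of not taking a cash discount.

    Forgoing the discount means the buyer effectively 'borrows'
    (100 - discount_pct)% of the invoice for (net_days - discount_days)
    extra days at a cost of discount_pct.
    """
    periodic_rate = discount_pct / (100.0 - discount_pct)
    periods_per_year = days_per_year / (net_days - discount_days)
    return periodic_rate * periods_per_year

# Terms of 2/10, net 30: give up a 2% discount to keep the money 20 more days.
cost = cost_of_forgoing_discount(2, 10, 30)
print(f"{cost:.1%}")  # roughly 37.2% per year -- expensive "free" credit
```

Because the cost is so high, a firm with access to ordinary bank financing would normally take the discount.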
Conservative Policy to financing
A firm that adopts a conservative working capital policy seeks to minimize liquidity risk by holding a greater proportion of permanent working capital.
Aggressive Policy financing
a. An aggressive working capital policy involves reducing liquidity and accepting a higher risk of short-term cash flow problems in an effort to increase profitability.
Reasons to hold cash according to Keynes (3)
1. the transactions motive: to use cash as a medium of exchange
2. the precautionary motive: to provide a cushion for the unexpected
3. the speculative motive: to take advantage of unexpected opportunities
compensating balance
A compensating balance is a minimum amount that a firm agrees to keep on deposit in a demand (checking) account. Compensating balances are noninterest-bearing and are meant to compensate the bank for various services rendered, such as unlimited check writing.
draft
A draft is a three-party instrument in which one person (the drawer) orders a second person (the drawee) to pay money to a third person (the payee).
payable through draft
A payable through draft (PTD) differs from a check in that (1) it is not payable on demand and (2) the drawee is the payor, not a bank. After the payee presents the PTD to a bank, the bank in turn presents it to the issuer. The issuer then must deposit sufficient funds to cover the PTD. Use of PTDs thus allows a firm to maintain lower cash balances.
zero-balance account
A zero-balance account (ZBA) carries, as the name implies, a balance of $0. At the end of each processing day, the bank transfers just enough from the firm's master account to cover all checks presented against the ZBA that day.
1) This practice allows the firm to maintain higher balances in the master account from which short-term investments can be made. The bank generally charges a fee for this service.
Disbursement float
Disbursement float is the period of time from when the payor puts a check in the mail until the funds are deducted from the payor's account. In an effort to stretch disbursement float, a firm may mail checks to its vendors while being unsure that sufficient funds will be available to cover them all
1) Treasury bills
2) Treasury notes
3) Treasury bonds
1) Treasury bills (T-bills) have maturities of 1 year or less. Rather than bear interest, they are sold on a discount basis.
2) Treasury notes (T-notes) have maturities of 1 to 10 years. They provide the lender with a coupon (interest) payment every 6 months.
3) Treasury bonds (T-bonds) have maturities of 10 years or longer. They provide the lender with a coupon (interest) payment every 6 months.
Repurchase agreements
Repurchase agreements (repos) are a means for dealers in government securities to finance their portfolios. When a company buys a repo, the firm is temporarily purchasing some of the dealer's government securities. The dealer agrees to repurchase them at a later time for a specific (higher) price.
Bankers' acceptances
Bankers' acceptances are drafts drawn by a nonfinancial firm on deposits at a bank. The acceptance by the bank is a guarantee of payment at maturity. The payee can thus rely on the creditworthiness of the bank rather than on that of the (presumably riskier) drawer. Because they are backed by the prestige of a large bank, these
instruments are highly marketable once they have been accepted.
Commercial paper
Commercial paper consists of unsecured, short-term notes issued by large companies that are very good credit risks.
f. Certificates of deposit
f. Certificates of deposit (CDs) are a form of savings deposit that cannot be withdrawn before maturity without a high penalty. CDs often yield a lower return than commercial paper because they are less risky. Negotiable CDs are traded under the regulation of the Federal Reserve System.
Basic Receivables Formula
-AVG days outstanding
The most common credit terms offered are 2/10, net 30. This is a convention meaning that the customer may either deduct 2% of the invoice amount if the invoice is paid within 10 days or pay the entire balance by the 30th day
The average account receivable, therefore, is outstanding for 28 days [(10 days × 20%) + (30 days × 60%) + (40 days × 20%)].
-AVG accounts receivable
balance in receivables = Daily credit sales x Avg. collection period
Step 1-28 days [(10 days × 20%) + (30 days × 60%) + (40 days × 20%)].
Step 2. The firm in the previous example has $15,000 in daily sales on credit. The firm's average balance in receivables is thus $420,000 ($15,000 × 28 days). So the 15,000 daily credit sales is a known variable
Average balance in receivables formula, using knowns from the previous example
The firm has annual credit sales of $5,400,000. The firm's average balance in receivables is thus $420,000 [$5,400,000 × (28 days ÷ 360 days)].
Accounts receivable turnover, using known variables from the previous example
A/R turnover ratio
The firm turned its accounts receivable over 12.9 times during the year ($5,400,000 ÷ $420,000).
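The receivables chain above (collection period, average balance, turnover) can be sketched end to end; the figures are the ones used in the example:

```python
# Figures from the example above.
annual_credit_sales = 5_400_000
daily_credit_sales = 15_000

# Weighted-average collection period: 20% pay in 10 days,
# 60% in 30 days, 20% in 40 days.
avg_collection_period = 10 * 0.20 + 30 * 0.60 + 40 * 0.20  # 28 days

# Average balance in receivables, by two equivalent routes.
avg_receivables = daily_credit_sales * avg_collection_period           # 420,000
avg_receivables_alt = annual_credit_sales * (avg_collection_period / 360)

# Receivables turnover.
turnover = annual_credit_sales / avg_receivables
print(avg_collection_period, avg_receivables, round(turnover, 1))  # 28.0 420000.0 12.9
```

Note that both routes to the average receivables balance agree, which is a quick consistency check on the inputs.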
Costs related to inventory
1) Purchase costs
2) Carrying costs
3) Ordering costs
4) Stockout costs
1) Purchase costs are the actual invoice amounts charged by suppliers. This is also referred to as investment in inventory.
2) Carrying costs is a broad category consisting of all those costs associated with holding inventory: storage, insurance, security, inventory taxes, depreciation or rent of facilities, interest, obsolescence and spoilage, and the opportunity cost of funds tied up in inventory. This is sometimes stated as a percentage of investment in inventory.
3) Ordering costs are the fixed costs of placing an order with a vendor, independent of the number of units ordered. For internally manufactured units, these consist of the set-up costs of a production line.
4) Stockout costs are the opportunity cost of missing a customer order. These can also include the costs of expediting a special shipment necessitated by insufficient inventory on hand.
safety stock
Accordingly, safety stock is an inventory buffer held as a hedge against contingencies. Determining the appropriate level of safety stock involves a probabilistic calculation that balances the variability of demand with the level of risk the firm is willing to accept of having to incur stockout costs.
The reorder point is established with the following equation (Average daily demand x Lead time in days) + Safety stock
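The reorder point equation can be applied directly; the numbers below are illustrative assumptions, not from the text:

```python
# Illustrative inputs (assumptions).
average_daily_demand = 50   # units sold per day
lead_time_days = 4          # days between placing and receiving an order
safety_stock = 100          # buffer units held against contingencies

# Reorder point = (Average daily demand x Lead time in days) + Safety stock
reorder_point = average_daily_demand * lead_time_days + safety_stock
print(reorder_point)  # 300
```

When on-hand inventory falls to 300 units, a new order is placed; the 200 units of expected lead-time demand are covered, with 100 units of cushion.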
Non-value adding activities
2) JIT is a pull system
2) JIT is a pull system, meaning it is demand-driven: In a manufacturing environment, production of goods does not begin until an order has been received. In this way, finished goods inventories are also eliminated.
3) A backflush costing system
3) A backflush costing system is often used in a JIT environment. Backflush costing eliminates the traditional sequential tracking of costs. Instead, entries to inventory may be delayed until as late as the end of the period.
kanban system
1) Kanban means ticket. Tickets (also described as cards or markers) control the flow of production or parts so that they are produced or obtained in the needed amounts at the needed times.
A firm's operating cycle is the amount of time that passes between the acquisition of inventory and the collection of cash on the sale of that inventory.
1) The (overlapping) steps in the operating cycle are
a) Acquisition of inventory and incurrence of a payable
b) Settlement of the payable
c) Holding of inventory
d) Selling of inventory and incurrence of a receivable
e) Collection on the receivable and acquisition of further inventory
Which one of the following provides a spontaneous source of financing for a firm?
A. Accounts payable.
B. Mortgage bonds.
C. Accounts receivable.
D. Debentures.
Answer (A) is correct. Trade credit is a spontaneous source of financing because it arises
automatically as part of a purchase transaction. Because of its ease in use, trade credit is
the largest source of short-term financing for many firms both large and small.
Net working capital is the difference between
A. Current assets and current liabilities.
Net working capital is defined by accountants as the difference
between current assets and current liabilities. Working capital is a measure of short-term
solvency.
Recording the payment (as distinguished from the declaration) of a cash dividend, the
declaration of which was already recorded, will
A. Increase the current ratio but have no effect on working capital.
B. Decrease both the current ratio and working capital.
C. Increase both the current ratio and working capital.
D. Have no effect on the current ratio or earnings per share.
Answer (A) is correct. The payment of a previously declared cash dividend reduces
current assets and current liabilities equally. An equal reduction in current assets and
current liabilities causes an increase in a positive (greater than 1.0) current ratio.
Depoole's payment of a trade account payable of $64,500 will
A. Increase the current ratio, but the quick ratio would not be affected.
B. Increase the quick ratio, but the current ratio would not be affected.
C. Increase both the current and quick ratios.
D. Decrease both the current and quick ratios.
The current ratio and the quick ratio will increase.
Answer (C) is correct. Given that the quick assets exceed current liabilities, both the
current and quick ratios exceed 1 because the numerator of the current ratio includes other
current assets in addition to the quick assets of cash, net accounts receivable, and short-term
marketable securities. An equal reduction in the numerator and the denominator,
such as a payment of a trade payable, will cause each ratio to increase.
Depoole's purchase of raw materials for $85,000 on open account will
A. Increase the current ratio.
B. Decrease the current ratio.
C. Increase net working capital.
D. Decrease net working capital.
Answer (B) is correct. The purchase increases both the numerator and denominator of the
current ratio by adding inventory to the numerator and payables to the denominator.
Because the ratio before the purchase was greater than 1, the ratio is decreased
Obsolete inventory of $125,000 was written off by Depoole during the year. This transaction
A. Decreased the quick ratio.
B. Increased the quick ratio.
C. Increased net working capital.
D. Decreased the current ratio
Answer (D) is correct. Writing off obsolete inventory reduced current assets, but not
quick assets (cash, receivables, and marketable securities). Thus, the current ratio was
reduced and the quick ratio was unaffected.
Depoole's issuance of serial bonds in exchange for an office building, with the first installment
of the bonds due late this year,
A. Decreases net working capital.
B. Decreases the current ratio.
C. Decreases the quick ratio.
D. Affects all of the answers as indicated.
Answer (D) is correct. The first installment is a current liability; thus, the amount of
current liabilities increases with no corresponding increase in current assets. The effect is
to decrease working capital, the current ratio, and the quick ratio.
Depoole's early liquidation of a long-term note with cash affects the
A. Current ratio to a greater degree than the quick ratio.
B. Quick ratio to a greater degree than the current ratio.
C. Current and quick ratio to the same degree.
D. Current ratio but not the quick ratio.
Answer (B) is correct. The numerators of the quick and current ratios are decreased when
cash is expended. Early payment of a long-term liability has no effect on the denominator
(current liabilities). Since the numerator of the quick ratio, which includes cash, net
receivables, and marketable securities, is less than the numerator of the current ratio,
which includes all current assets, the quick ratio is affected to a greater degree.
North Bank is analyzing Belle Corp.'s financial statements for a possible extension of credit.
Belle's quick ratio is significantly better than the industry average. Which of the following
factors should North consider as a possible limitation of using this ratio when evaluating
Belle's creditworthiness?
A. Fluctuating market prices of short-term investments may adversely affect the ratio.
B. Increasing market prices for Belle's inventory may adversely affect the ratio.
C. Belle may need to sell its available-for-sale investments to meet its current obligations.
D. Belle may need to liquidate its inventory to meet its long-term obligations.
Answer (A) is correct. The quick ratio equals current assets minus inventory, divided by
current liabilities. Because short-term marketable securities are included in the numerator,
fluctuating market prices of short-term investments may adversely affect the ratio if Belle
holds a substantial amount of such current assets.
Windham Company has current assets of $400,000 and current liabilities of $500,000.
Windham Company's current ratio will be increased by
A. The purchase of $100,000 of inventory on account.
B. The payment of $100,000 of accounts payable.
C. The collection of $100,000 of accounts receivable.
D. Refinancing a $100,000 long-term loan with short-term debt.
Answer (A) is correct. The current ratio equals current assets divided by current
liabilities. An equal increase in both the numerator and denominator of a current ratio less
than 1.0 causes the ratio to increase. Windham Company's current ratio is .8 ($400,000 ÷
$500,000). The purchase of $100,000 of inventory on account would increase the current
assets to $500,000 and the current liabilities to $600,000, resulting in a new current ratio
of .83.
Given an acid test ratio of 2.0, current assets of $5,000, and inventory of $2,000, the value of
current liabilities is
The acid test (quick) ratio equals the quick assets (cash, net accounts receivable, and marketable securities) divided by current liabilities. Quick assets are $3,000 ($5,000 - $2,000), so current liabilities are $1,500 ($3,000 ÷ 2.0).
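Solving the acid test question above for current liabilities is a one-line rearrangement:

```python
# Given: acid test ratio of 2.0, current assets of $5,000, inventory of $2,000.
acid_test = 2.0
current_assets = 5_000
inventory = 2_000

# Quick assets exclude inventory; rearrange the ratio for current liabilities.
quick_assets = current_assets - inventory        # 3,000
current_liabilities = quick_assets / acid_test
print(current_liabilities)  # 1500.0
```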
Bond Corporation has a current ratio of 2 to 1 and a (acid test) quick ratio of 1 to 1. A
transaction that would change Bond's quick ratio but not its current ratio is the
A. Sale of inventory on account at cost.
B. Collection of accounts receivable.
C. Payment of accounts payable.
D. Purchase of a patent for cash.
Answer (A) is correct. The quick ratio is determined by dividing the sum of cash, short-term
marketable securities, and accounts receivable by current liabilities. The current ratio
is equal to current assets divided by current liabilities. The sale of inventory (a nonquick
current asset) on account would increase cash (a quick asset), thereby changing the quick
ratio. The sale of inventory for cash, however, would be replacing one current asset with
another, and the current ratio would be unaffected
Rice, Inc. uses the allowance method to account for uncollectible accounts. An account
receivable that was previously determined uncollectible and written off was collected during
May. The effect of the collection on Rice's current ratio and total working capital is
The entry to record this transaction is to debit receivables, credit
the allowance, debit cash, and credit receivables. The result is to increase both an asset
(cash) and a contra asset (allowance for bad debts). These appear in the current asset
section of the balance sheet. Thus, the collection changes neither the current ratio nor
working capital because the effects are offsetting. The credit for the journal entry is made
to the allowance account on the assumption that another account will become
uncollectible. The company had previously estimated its bad debts and established an
appropriate allowance. It then (presumably) wrote off the wrong account. Accordingly, the
journal entry reinstates a balance in the allowance account to absorb future uncollectibles
Merit, Inc. uses the direct write-off method to account for uncollectible accounts receivable. If
the company subsequently collects an account receivable that was written off in a prior
accounting period, the effect of the collection of the account receivable on Merit's current ratio
and total working capital would be
Because the company uses the direct write-off method, the original
entry involved a debit to a bad debt expense account (closed to retained earnings). The
subsequent collection required a debit to cash and a credit to bad debt expense or retained
earnings. Thus, only one current asset account was involved in the collection entry, and
current assets (cash) increased as a result. If current assets increase and no change occurs
in current liabilities, the current ratio and working capital both increase.
A corporation declares a cash dividend and pays it two weeks later. How do the two events affect the current ratio?
A. Decreased by the dividend declaration and increased by the dividend payment.
Which one of the following would increase the net working capital of a firm?
A. Cash payment of payroll taxes payable.
B. Purchase of a new plant financed by a 20-year mortgage.
C. Cash collection of accounts receivable.
D. Refinancing a short-term note payable with a 2-year note payable.
Answer (D) is correct. Net working capital equals current assets minus current liabilities.
Refinancing a short-term note with a 2-year note payable decreases current liabilities, thus
increasing working capital.
Badoglio Co.'s current ratio is 3:1. Which of the following transactions would normally
increase its current ratio?
A. Purchasing inventory on account.
B. Selling inventory on account.
C. Collecting an account receivable.
D. Purchasing machinery for cash.
Answer (B) is correct. The current ratio is equal to current assets divided by current
liabilities. Given that the company has a current ratio of 3:1, an increase in current assets
or decrease in current liabilities would cause this ratio to increase. If the company sold
merchandise on open account that earned a normal gross margin, receivables would be
increased at the time of recording the sales revenue in an amount greater than the decrease
in inventory from recording the cost of goods sold. The effect would be an increase in the
current assets and no change in the current liabilities. Thus, the current ratio would be
increased.
According to John Maynard Keynes, the three major motives for holding cash are for
John Maynard Keynes, founder of Keynesian economics,
concluded that there were three major motives for holding cash: for transactional purposes
as a medium of exchange, precautionary purposes, and speculative purposes (but only
during deflationary periods).
An increase in sales resulting from an increased cash discount for prompt payment would be
expected to cause a(n)
A. Increase in the operating cycle.
B. Increase in the average collection period.
C. Decrease in the cash conversion cycle.
D. Decrease in purchase discounts taken.
Answer (C) is correct. If the cause of increased sales is an increase in the cash discount, it
can be inferred that the additional customers would pay during the discount period. Thus,
cash would be collected more quickly than previously and the cash conversion cycle
would be shortened
1. Net credit sales=500k
2. net sales =250k
A/R balance
Jan 1 75k
dec 31 50k
What is a/r turnover for the year?
500,000 ÷ [(75,000 + 50,000) ÷ 2] = 500,000 ÷ 62,500 = 8 times
projected sales collection
1. 40% by 15 day discount date
2. 40% by 30 due date
3. 20% 15 days late
What is the projected days' sales outstanding?
Weight each collection period by its percentage, as with a weighted average: (15 × 40%) + (30 × 40%) + (45 × 20%) = 6 + 12 + 9 = 27 days.
Yonder Motors sells 20,000 automobiles per year for $25,000 each. The firm's average
receivables are $30,000,000 and average inventory is $40,000,000.Yonder's average collection
period is closest to which one of the following? Assume a 365-day year.
Average collection period = Days in year ÷ Accounts receivable turnover
= 365 ÷ (Net credit sales ÷ Average net receivables)
= 365 ÷ [(20,000 × $25,000) ÷ $30,000,000]
= 365 ÷ ($500,000,000 ÷ $30,000,000)
= 365 ÷ 16.667
= 21.9 days
Which of the following assumptions is associated with the economic order quantity formula?
The carrying cost per unit will vary with A. quantity ordered.
B. The cost of placing an order will vary with quantity ordered.
C. Periodic demand is known.
D. The purchase cost per unit will vary based on quantity discounts.
Answer (C) is correct. The economic order quantity (EOQ) model is a mathematical tool
for determining the order quantity that minimizes the sum of ordering costs and carrying
costs. The following assumptions underlay the EOQ model: (1) Demand is uniform;
(2) Order (setup) costs and carrying costs are constant; and (3) No quantity discounts are
allowed.
As a consequence of finding a more dependable supplier, Dee Co. reduced its safety stock of
raw materials by 80%. What is the effect of this safety stock reduction on Dee's economic order
quantity?
The variables in the EOQ formula are periodic demand, cost per
order, and the unit carrying cost for the period. Thus, safety stock does not affect the
EOQ. Although the total of the carrying costs changes with the safety stock, the cost-minimizing order quantity is not affected.
A corporation has just instituted a just-in-time production system. The cost per order has been reduced from $28 to $2, and fixed facility and administrative costs have increased from $2 to $32. How does this affect lot size and relevant cost?
The economic lot size for a production system is similar to the
EOQ. For example, the cost per set-up is equivalent to the cost per order (a numerator
value in the EOQ model). Hence, a reduction in the setup costs reduces the economic lot
size as well as the relevant costs. The fixed facility and administrative costs, however, are
not relevant. The EOQ model includes variable costs only.
The carrying costs associated with inventory management include
Storage costs, handling costs, capital invested, and obsolescence.
The ordering costs associated with inventory management include
Ordering costs are costs incurred when placing and receiving
orders. Ordering costs include purchasing costs, shipping costs, setup costs for a
production run, and quantity discounts lost
1. What is average weekly demand?
2. What is the reorder point formula?
1. Average weekly demand = Sales ÷ Weeks in year
2. Reorder point = (Average weekly demand × Lead time) + Safety stock
The level of safety stock in inventory management depends on all of the following except the
A. Level of uncertainty of the sales forecast.
B. Level of customer dissatisfaction for back orders.
C. Cost of running out of inventory.
D. Cost to reorder stock.
Answer (D) is correct. Determining the appropriate level of safety stock involves a
complex probabilistic calculation that balances (1) the variability of demand for the good,
(2) the variability in lead time, and (3) the level of risk the firm is willing to accept of
having to incur stockout costs. Thus, the only one of the items listed that does not affect
the level of safety stock is reorder costs.
The result of the economic order quantity (EOQ) formula indicates the
The EOQ model is a deterministic model that calculates the ideal
order (or production lot) quantity given specified demand, ordering or setup costs, and
carrying costs. The model minimizes the sum of inventory carrying costs and either
ordering or production setup costs.
Key Co. changed from a traditional manufacturing operation with a job-order costing system to
a just-in-time operation with a backflush costing system. What are the expected effects of these
changes on Key's inspection costs and recording detail of costs tracked to jobs in process?
In a JIT system, materials go directly into production without
being inspected. The assumption is that the vendor has already performed all necessary
inspections. The minimization of inventory reduces the number of suppliers, storage costs,
transaction costs, etc. Backflush costing eliminates the traditional sequential tracking of
costs. Instead, entries to inventory may be delayed until as late as the end of the period.
For example, all product costs may be charged initially to cost of sales, and costs may be
flushed back to the inventory accounts only at the end of the period. Thus, the detail of
cost accounting is decreased.
To determine the inventory reorder point, calculations normally include the
A. Ordering cost.
B. Carrying cost.
C. Average daily usage.
D. Economic order quantity.
Answer (C) is correct. The reorder point is the amount of inventory on hand indicating
that a new order should be placed. It equals the sales per unit of time multiplied by the
time required to receive the new order (lead time).
Accounts receivable turnover ratio will normally decrease as a result of
A. The write-off of an uncollectible account (assume the use of the allowance for doubtful accounts method).
B. A significant sales volume decrease near the end of the accounting period.
C. An increase in cash sales in proportion to credit sales.
D. A change in credit policy to lengthen the period for cash discounts.
Answer (D) is correct. The accounts receivable turnover ratio equals net credit sales
divided by average receivables. Hence, it will decrease if a company lengthens the credit
period or the discount period because the denominator will increase as receivables are
held for longer times
Inventory turnover ratio formula?
Cost of goods sold ÷ Average inventory, where Average inventory = (Beginning inventory + Ending inventory) ÷ 2
Yr 1 a/r = 60
Yr 2 a/r =90
Sales = 600
1. What is a/r turnover
2. what is it in days?
1. 600 ÷ [(90 + 60) ÷ 2] = 600 ÷ 75 = 8 times
2. 360 ÷ 8 = 45 days
What ratios do you need to find the operating cycle?
1. The operating cycle is the time from purchase of inventory to collection of cash.
Operating cycle = Number of days' sales in inventory + Number of days' sales in receivables
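The operating cycle can be computed from the two turnover ratios; the figures below are illustrative assumptions, not from the text:

```python
# Illustrative inputs (assumptions).
cogs = 720_000
avg_inventory = 60_000
net_credit_sales = 900_000
avg_receivables = 75_000
days_in_year = 360

# Days' sales in inventory and in receivables come from the turnovers.
inventory_turnover = cogs / avg_inventory                   # 12 times
receivables_turnover = net_credit_sales / avg_receivables   # 12 times

days_sales_in_inventory = days_in_year / inventory_turnover       # 30 days
days_sales_in_receivables = days_in_year / receivables_turnover   # 30 days

operating_cycle = days_sales_in_inventory + days_sales_in_receivables
print(operating_cycle)  # 60.0
```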
The theory underlying the cost of capital is primarily concerned with the cost of:
a. Long-term funds and old funds.
b. Short-term funds and new funds.
c. Long-term funds and new funds.
d. Any combination of old or new, short-term or long-term funds.
Choice "d" is correct. The cost of capital considers the cost of all funds, whether they are short-term, long-term, new or old.
Sylvan Corporation has the following capital structure:
Debenture bonds   $10,000,000
Preferred equity    1,000,000
Common equity      39,000,000
The financial leverage of Sylvan Corp. would increase as a result of:
a. Issuing common stock and using the proceeds to retire preferred stock.
b. Issuing common stock and using the proceeds to retire debenture bonds.
c. Financing its future investments with a higher percentage of bonds.
d. Financing its future investments with a higher percentage of equity funds.
Choice "c" is correct. Financial leverage increases when the debt to equity ratio increases. Using a higher
percentage of debt (bonds) for future investments would increase financial leverage.
Residual income is a better measure for performance evaluation of an investment center manager than return
on investment because:
a. The problems associated with measuring the asset base are eliminated.
b. Desirable investment decisions will not be neglected by high-return divisions.
c. Only the gross book value of assets needs to be calculated.
d. The arguments about the implicit cost of interest are eliminated.
Choice "b" is correct. Residual income measures actual dollars that an investment earns over its required
return rate. Performance evaluation on this basis will mean that desirable investment decisions will not be
rejected by high-return divisions.
The basic objective of the residual income approach of performance measurement and evaluation is to have
a division maximize its:
a. Return on investment rate.
b. Imputed interest rate charge.
c. Cash flows in excess of a desired minimum amount.
d. Income in excess of a desired minimum amount.
Choice "d" is correct. Residual income is defined as income in excess of a desired minimum amount, i.e., income minus an imputed interest charge on invested capital.
Capital investments require balancing risk and return. Managers have a responsibility to ensure that the investments they make in their own firms increase shareholder value. Managers have met that responsibility if the return on the capital investment:
a. Exceeds the rate of return associated with the firm's beta factor.
b. Is less than the rate of return associated with the firm's beta factor.
c. Is greater than the prime rate of return.
d. Is less than the prime rate of return
Choice "a" is correct. A capital investment whose rate of return exceeds the rate of return associated with the
firm's beta factor will increase the value of the firm.
The Stewart Co. uses the Economic Order Quantity (EOQ) model for inventory management. A decrease in
which one of the following variables would increase the EOQ?
a. Cost per order.
b. Safety stock level.
c. Carrying costs.
d. Quantity demanded.
Choice "c" is correct. A decrease in carrying costs would increase the Economic Order Quantity (EOQ).
Order size (EOQ) = √(2SO ÷ C), where:
S = Annual sales quantity in units
O = Cost per purchase order
C = Annual cost of carrying one unit in stock for one year
Order size gets larger as "S" or "O" gets bigger (numerator) or as "C" gets smaller (denominator).
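The EOQ behavior described here can be checked numerically; the inputs below are illustrative assumptions, not from the text:

```python
from math import sqrt

# Illustrative inputs (assumptions).
S = 10_000   # annual sales quantity in units
O = 20.0     # cost per purchase order
C = 4.0      # annual cost of carrying one unit in stock for one year

# EOQ = sqrt(2SO / C)
eoq = sqrt(2 * S * O / C)
print(round(eoq, 1))  # 316.2

# Direction check: a smaller carrying cost C increases the order size.
assert sqrt(2 * S * O / (C / 2)) > eoq
```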
The working capital financing policy that subjects the firm to the greatest risk of being unable to meet the
firm's maturing obligations is the policy that finances :
a. Fluctuating current assets with long-term debt.
b. Permanent current assets with long-term debt.
c. Permanent current assets with short-term debt.
d. Fluctuating current assets with short-term debt.
Choice "c" is correct. The working capital financing policy that finances permanent current assets with short-term debt subjects the firm to the greatest risk of being unable to meet the firm's maturing obligations.
Calculate Year 3 ROI:
          Yr 2     Yr 3
Revenue   900k     1,100k
Expense   650k     700k
Assets    1,200k   2,000k

Income = 1,100k - 700k = 400k
Average assets = (1,200k + 2,000k) ÷ 2 = 1,600k
ROI = 400k ÷ 1,600k = .25
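The ROI computation above, using income over average assets:

```python
# Figures from the card above (in thousands).
revenue_yr3 = 1_100
expense_yr3 = 700
assets_yr2 = 1_200
assets_yr3 = 2_000

income = revenue_yr3 - expense_yr3          # 400
avg_assets = (assets_yr2 + assets_yr3) / 2  # 1,600
roi = income / avg_assets
print(roi)  # 0.25
```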
Which of the following inventory management approaches orders at the point where carrying costs equate
nearest to restocking costs in order to minimize total inventory cost?
a. Economic order quantity.
b. Just-in-time.
c. Materials requirements planning.
d. ABC.
Choice "a" is correct. The economic order quantity (EOQ) method of inventory control anticipates orders at the point where carrying costs are nearest to restocking costs. The objective of EOQ is to minimize total inventory costs. The formula for EOQ is √(2SO ÷ C).
What is the primary disadvantage of using return on investment (ROI) rather than residual income (RI) to
evaluate the performance of investment center managers?
a. ROI is a percentage, while RI is a dollar amount.
b. ROI may lead to rejecting projects that yield positive cash flows .
c. ROI does not necessarily reflect the company's cost of capital.
d. ROI does not reflect all economic gains.
Choice "b" is correct. The primary disadvantage of using return on investment (ROI) rather than residual
income (RI) to evaluate the performance of investment center managers is that ROI may lead to rejecting
projects that yield positive cash flows. Profitable investment center managers might be reluctant to invest in
projects that might lower their ROI (especially if their bonuses are based only on their investment center's
ROI), even though those projects might generate positive cash flows for the company as a whole. This
characteristic is often known as the "disincentive to invest."
Amicable Wireless, Inc. offers credit terms of 2/1 0, net 30 for its customers. Sixty percent of Amicable's
customers take the 2% discount and pay on day 10. The remainder of Amicable's customers pay on day 30.
How many days' sales are in Amicable's accounts receivable?
a. 6
b. 12
c. 18
d. 20
Choice "c" is correct. Days' sales in accounts receivable is normally calculated as Days' sales = Ending accounts receivable ÷ Average daily sales. That formula will not work in this case, however, because the necessary information is not provided. Enough information about payments is provided so that the total days' sales can be determined on a weighted-average basis. In this question, nobody pays before the 10th day, and 60% of the customers pay on the 10th day, so there are 10 × .60, or 6 days' sales there. The other 40% of the customers pay on the 30th day, so there are 30 × .40, or 12 days' sales there. The total is 18 days' sales.
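The weighted-average days' sales calculation from this answer:

```python
# 60% of customers pay on day 10; 40% pay on day 30.
days_sales = 10 * 0.60 + 30 * 0.40
print(days_sales)  # 18.0
```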
Why would a firm want to finance temporary assets with short-term debt?
Under the maturity-matching (hedging) principle, financing temporary assets with short-term debt lets the financing mature as the assets are converted to cash; short-term debt also generally carries a lower interest cost than long-term debt.
Which of the following rates is most commonly compared to the internal rate of return to evaluate whether to
make an investment?
a. Short-term rate on U.S. Treasury bonds.
b. Prime rate of interest.
c. Weighted-average cost of capital.
d. Long-term rate on U.S. Treasury bonds.
Choice "c" is correct. The weighted-average cost of capital is frequently used as the hurdle rate within capital
budgeting techniques. Investments that provide a return that exceeds the weighted-average cost of capital
should continuously add to the value of the firm.
Which of the following assumptions is associated with the economic order quantity formula?
a. The carrying cost per unit will vary with quantity ordered.
b. The cost of placing an order will vary with quantity ordered.
c. Periodic demand is known.
d. The purchase cost per unit will vary based on quantity discounts.
Choice "c" is correct. The economic order quantity formula (EOQ) assumes that periodic demand is known. Annual sales volume is a crucial variable in the EOQ formula.
Which of the following types of bonds is most likely to maintain a constant market value?
a. Zero-coupon.
b. Floating-rate.
c. Callable.
d. Convertible.
Choice "b" is correct. Floating-rate bonds would automatically adjust the return on a financial instrument to
produce a constant market value for that instrument. No premium or discount would be required since market
changes would be accounted for through the interest rate.
Capital budgeting decisions include all but which of the following?
a. Selecting among long-term investment alternatives.
b. Financing short-term working capital needs.
c. Making investments that produce returns over a long period of time.
d. Financing large expenditures.
Choice "b" is correct. Capital budgeting decisions do not include the financing of short-term working capital
needs, which are more operational in nature.
Which one of the following is most relevant to a manufacturing equipment replacement decision?
a. Original cost of the old equipment.
b. Disposal price of the old equipment.
c. Gain or loss on the disposal of the old equipment.
d. A lump-sum write-off amount from the disposal of the old equipment.
Choice "b" is correct. The disposal price of the old equipment is most relevant because it is an expected
future inflow that will differ among alternatives. If this old equipment is replaced , there will be a cash inflow
from the sale of the old equipment. If the old equipment is kept, there will be no cash inflow from the sale of
the old equipment.
All of the following items are included in discounted cash flow analysis, except:
a. Future operating cash savings.
b. The current asset disposal price.
c. The future asset depreciation expense.
d. The tax effects of future asset depreciation.
Choice "c" is correct. The future asset depreciation expense is not included in discounted cash flow analysis.
• Future operating cash savings
• Current asset disposal price
• Tax effects of future asset depreciation
• Future asset disposal price
All of the following are the rates used in net present value analysis, except for the:
a. Cost of capital.
b. Hurdle rate.
c. Discount rate.
d. Accounting rate of return.
Choice "d" is correct. The accounting rate of return is a capital budgeting technique, not a rate.
• Cost of capital
• Hurdle rate
• Discount rate
• Required rate of return
The net present value (NPV) of a project has been calculated to be $215,000. Which one of the following
changes in assumptions would decrease the NPV?
a. Decrease the estimated effective income tax rate.
b. Extend the project life and associated cash inflows.
c. Increase the estimated salvage value.
d. Increase the discount rate.
Choice "d" is correct. An increase in the discount rate will decrease the present value of future cash inflows
and, therefore, decrease the net present value of the project.
Andrew Corporation is evaluating a capital investment that would result in a $30,000 higher contribution
margin benefit and increased annual personnel costs of $20,000. The effects of income taxes on the net
present value computation on these benefits and costs for the project are to:
a. Decrease both benefits and costs.
b. Decrease benefits but increase costs.
c. Increase benefits but decrease costs.
d. Increase both benefits and costs.
Choice "a" is correct. The effects of income taxes on the net present value computations will decrease both
benefits and costs for the project. Net present value computations focus of the present value of cash flows.
Income taxes decrease both the benefit and the cost of cash flows.
The internal rate of return for a project can be determined:
a. Only if the project cash flows are constant.
b. By finding the discount rate that yields a net present value of zero for the project.
c. By subtracting the firm's cost of capital from the project's profitability index.
d. Only if the project's profitability index is greater than one.
Choice "b" is correct. The internal rate of return (IRR) is the discount rate that produces a NPV of zero.
The internal rate of return is the:
a. Rate of interest that equates the present value of cash outflows and the present value of cash inflows.
b. Risk-adjusted rate of return.
c. Required rate of return.
d. Weighted average rate of return generated by internal funds.
Choice "a" is correct. The internal rate of return is defined as the technique that determines the present value
factor such that the present value of the after-tax cash flows equals the initial investment on the project.
Alternately, the internal rate of return (IRR) is the discount rate that produces a NPV of zero.
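Because the IRR is defined implicitly (the rate at which NPV = 0), it is normally found numerically. A minimal sketch using bisection, with hypothetical cash flows:

```python
def npv(rate, cashflows):
    # cashflows[0] is the time-0 amount (the initial outlay, negative)
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-9):
    # Bisection: NPV falls as the rate rises, so home in on the rate
    # where it crosses zero (assumes one sign change on [lo, hi]).
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

rate = irr([-1000, 500, 500, 500])   # hypothetical project
print(round(rate, 4))                # NPV at this rate is essentially zero
```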
Do you use NPV in calculating the payback period?
No. The payback period uses undiscounted cash flows.
Do you use salvage value in calculating the payback period?
No. Salvage value received at the end of an asset's life is not factored into the basic payback computation.
When evaluating capital budgeting analysis techniques, the payback period emphasizes:
a. Liquidity.
b. Profitability.
c. Net income.
d. The accounting period.
Choice "a" is correct. The payback period is the time period required for cash inflows to recover the initial
investment. The emphasis of the technique is on liquidity (i.e., cash flow).
The term underwriting spread refers to the:
a. Commission percentage an investment banker receives for underwriting a security lease.
b. Discount investment bankers receive on securities they purchase from the issuing company.
c. Difference between the price the investment banker pays for a new security issue and the price at which
the securities are resold.
d. Commission a broker receives for either buying or selling a security on behalf of an investor.
Choice "c" is correct. Investment bankers are paid their fees partly by being allowed to purchase the new
securities they are underwriting for a discount and then reselling those securities on the market. This is
known as the underwriting spread.
A firm with a higher degree of operating leverage when compared to the industry average implies that the:
a. Firm has higher variable costs.
b. Firm's profits are more sensitive to changes in sales volume.
c. Firm is more profitable.
d. Firm uses a significant amount of debt financing.
Choice "b" is correct. A firm with a higher degree of operating leverage when compared to the industry
average implies that the firm's profits are more sensitive to changes in sales volume.
Rule: Operating leverage is the presence of fixed costs in operations, which allows a small change in sales to
produce a larger relative change in profits.
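The rule can be made concrete with the usual measure, DOL = contribution margin ÷ operating income (the numbers below are hypothetical):

```python
def dol(contribution_margin, fixed_costs):
    # Degree of operating leverage = contribution margin / operating income.
    # More fixed costs -> smaller operating income -> higher DOL, so the same
    # percentage change in sales produces a larger percentage change in profit.
    return contribution_margin / (contribution_margin - fixed_costs)

print(dol(100, 60))   # -> 2.5: a 10% sales change moves profit about 25%
print(dol(100, 20))   # -> 1.25: with lower fixed costs, profit is less sensitive
```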
Which of the following transactions does not change the current ratio and does not change the total current
assets?
a. A cash advance is made to a divisional office.
b. A cash dividend is declared.
c. Short-term notes payable are retired with cash.
d. Equipment is purchased with a three-year note and a 10 percent cash down payment.
Choice "a" is correct. This does not change the current ratio because the reduction of cash is offset by an
increase in accounts receivable.
An increase in sales collections resulting from an increased cash discount for prompt payment would be
expected to cause a(n):
a. Increase in the operating cycle.
b. Increase in the average collection period.
c. Decrease in the cash conversion cycle.
d. Increase in bad debt losses.
Choice "c" is correct. A larger cash discount encourages customers to pay sooner, which shortens the
receivables collection period and therefore decreases the cash conversion cycle.
Which one of the following represents methods for converting accounts receivable to cash?
a. Trade discounts, collection agencies, and credit approval.
b. Factoring, pledging, and electronic funds transfers.
c. Cash discounts, collection agencies, and electronic funds transfers.
d. Trade discounts, cash discounts, and electronic funds transfers.
Choice "c" is correct. The following are methods of converting accounts receivable (AR) into cash:
1. Collection agencies - used to collect overdue AR.
2. Factoring AR - selling AR to a factor for cash.
3. Cash discounts - offering cash discounts to customers for paying AR quickly (or paying at all). For
example: 2/10, net 30.
4. Electronic fund transfers - a method of payment, which electronically transfers funds between banks.
Which one of the following statements concerning cash discounts is correct?
a. The cost of not taking a 2/10, net 30 cash discount is usually less than the prime rate.
b. With trade terms of 2/15, net 60, if the discount is not taken, the buyer receives 45 days of free credit.
c. The cost of not taking the discount is higher for terms of 2/10, net 60 than for 2/10, net 30.
d. The cost of not taking a cash discount is generally higher than the cost of a bank loan.
Choice "d" is correct. The cost of not taking a cash discount is generally higher than the cost of a bank loan.
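The standard annualized approximation behind this answer — discount/(1 − discount) × 365/(net period − discount period) — can be checked directly; it also shows why choice "c" above is wrong (stretching the net period lowers the cost of skipping the discount):

```python
def cost_of_forgoing_discount(discount, discount_days, net_days):
    # Annualized cost of passing up terms like "2/10, net 30"
    # (365-day year; simple-interest approximation).
    return (discount / (1 - discount)) * (365 / (net_days - discount_days))

print(cost_of_forgoing_discount(0.02, 10, 30))   # ~0.372 -> about 37% per year
print(cost_of_forgoing_discount(0.02, 10, 60))   # ~0.149 -> cheaper, not dearer
```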
Which one of the following is not a characteristic of a negotiable certificate of deposit? Negotiable certificates
of deposit:
a. Have a secondary market for investors.
b. Are regulated by the Federal Reserve System.
c. Are usually sold in denominations of a minimum of $100,000.
d. Have yields considerably greater than bankers' acceptances and commercial paper.
Choice "d" is correct. Negotiable CDs generally carry interest rates slightly lower than bankers' acceptances
(which are drafts drawn on deposits at a bank) or commercial paper (which is unsecured debt issued by credit
worthy customers).
All of the following are alternative marketable securities suitable for investment, except:
a. Eurodollars.
b. Commercial paper.
c. Bankers' acceptances.
d. Convertible bonds.
Choice "d" is correct. Convertible bonds. Temporarily idle cash should be invested in very liquid, low-risk
short-term investments only. U.S. T-bills are basically risk-free. Banker's acceptances and Eurodollars are
only slightly more risky. Commercial paper, the short-term unsecured notes of the most credit-worthy large
U.S. corporations is a little riskier, but still relatively low risk. However, convertible bonds are subject to
default risk, liquidity risk, and maturity (interest rate) risk, and as such are inappropriate securities for
short-term marketable security investment.
Which one of the following responses is not an advantage to a corporation that uses the commercial paper
market for short-term financing?
a. The borrower avoids the expense of maintaining a compensating balance with a commercial bank.
b. There are no restrictions as to the type of corporation that can enter into this market.
c. This market provides a broad distribution for borrowing.
d. A benefit accrues to the borrower because its name becomes more widely known.
Choice "b" is correct. There are restrictions as to the type of corporation that can enter into the commercial
paper market for short-term financing, since the use of the open market is restricted to a comparatively small
number of the most credit-worthy large corporations.
The commercial paper market:
• Avoids the expense of maintaining a compensating balance with a commercial bank.
• Provides a broad distribution for borrowing.
• Accrues a benefit to the borrower because its name becomes more widely known.
Which of the following represents a firm's average gross receivable balance?
I. Days' sales in receivables x accounts receivable turnover.
II. Average daily sales x average collection period.
III. Net sales + average gross receivables.
a. I only.
b. I and II only.
c. II only.
d. II and III only.
Choice "c" is correct. II only: Average daily sales ($27,397) x average collection period (36.5 days) = $1,000,000
average gross accounts receivable.
Which one of the following statements is most correct if a seller extends credit to a purchaser for a period of
time longer than the purchaser's operating cycle? The seller:
a. Will have a lower level of accounts receivable than those companies whose credit period is shorter than
the purchaser's operating cycle.
b. Is, in effect, financing more than just the purchaser's inventory needs.
c. Is, in effect, financing the purchaser's long-term assets.
d. Has no need for a stated discount rate or credit period.
Choice "b" is correct. If a seller extends credit to a purchaser for a period of time longer than the purchaser's
operating cycle, the seller is, in effect, financing more than just the purchaser's inventory needs.
Calculate the reorder point:
50-week year
Sales: 10,000 units per year
Order quantity: 2,000 units
Safety stock: 1,300 units
Lead time: 4 weeks
- A 50-week year means that 200 units are sold per week (10,000 / 50).
- Therefore 800 units are sold during the 4-week lead time (4 x 200).
- Required safety stock is 1,300 units.
Therefore: 1,300 + 800 = 2,100 units is the reorder point.
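The same arithmetic as a sketch:

```python
def reorder_point(annual_sales, weeks_per_year, lead_time_weeks, safety_stock):
    weekly_sales = annual_sales / weeks_per_year        # 10,000 / 50 = 200
    lead_time_demand = weekly_sales * lead_time_weeks   # 200 * 4 = 800
    return lead_time_demand + safety_stock              # 800 + 1,300 = 2,100

print(reorder_point(10000, 50, 4, 1300))   # -> 2100.0
```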
This notebook is an element of the risk-engineering.org courseware. It can be distributed under the terms of the Creative Commons Attribution-ShareAlike licence.
Author: Eric Marsden [email protected]
This notebook contains an introduction to different sampling methods in Monte Carlo analysis (standard random sampling, Latin hypercube sampling, and low-discrepancy sequences such as that of Sobol’ and that of Halton). The notebook shows how to use Python, with the SciPy and SymPy libraries. It uses some Python 3 features. See the lecture slides at risk-engineering.org for more background information on stochastic simulation.
import math
import numpy
import scipy.stats
import matplotlib.pyplot as plt
Let’s start with a simple integration problem in 1D,
$\int_1^5 x^2 \, dx$
This is easy to solve analytically, and we can use the SymPy library in case you’ve forgotten how to resolve simple integrals.
import sympy

result = {}  # we'll save results using different methods here
x = sympy.Symbol('x')
i = sympy.integrate(x**2)
result['analytical'] = float(i.subs(x, 5) - i.subs(x, 1))
print("Analytical result: {}".format(result['analytical']))
Analytical result: 41.333333333333336
We can estimate this integral using a standard Monte Carlo method, using the fact that for a random variable $X$ distributed uniformly over $I$, the expectation of $f(X)$ is the integral of $f$ over $I$ divided by the length of $I$:

$\mathbb{E}[f(X)] = \frac{1}{|I|} \int_I f(x) \, dx$
We will sample a large number $N$ of points in $I$ and calculate their average, and multiply by the range over which we are integrating.
N = 10000
accum = 0
for i in range(N):
    x = numpy.random.uniform(1, 5)
    accum += x**2
volume = 5 - 1
result['MC'] = volume * accum / float(N)
print("Standard Monte Carlo result: {}".format(result['MC']))
Standard Monte Carlo result: 40.9634538942694
If we increase $N$, the estimation will converge to the analytical value. (It will converge relatively slowly, following $1/\sqrt{N}$).
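That $1/\sqrt{N}$ behaviour can be checked empirically. A small sketch (note: numpy.random.default_rng assumes NumPy ≥ 1.17, a newer API than the rest of this notebook uses):

```python
import numpy

rng = numpy.random.default_rng(0)

def mc_estimate(n):
    x = rng.uniform(1, 5, size=n)
    return 4 * numpy.mean(x ** 2)   # volume times the average of f

for n in (100, 10_000, 1_000_000):
    # the error shrinks roughly like 1/sqrt(n); 124/3 is the analytical value
    print(n, abs(mc_estimate(n) - 124 / 3))
```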
The LHS method consists of dividing the input space into a number of equiprobable regions, then taking random samples from each region. We can use it conveniently in Python thanks to the pyDOE library, which you will probably need to install on your computer, using a command such as
pip install pyDOE
or if you’re using a Google CoLaboratory notebook, execute a code cell containing
!pip install pyDOE
The lhs function in this library returns an “experimental design” consisting of points in the $[0, 1]^d$ hypercube, where $d$ is the dimension of your problem (it’s 2 in this simple example). You need to scale these points to your input domain.
# obtain the pyDOE library with "pip install pyDOE"
from pyDOE import lhs

seq = lhs(2, N)
accum = 0
for i in range(N):
    x = 1 + seq[i][0] * (5 - 1)
    y = 1 + seq[i][1] * (5**2 - 1**2)  # note: y is generated but not used below
    accum += x**2
volume = 5 - 1
result['LHS'] = volume * accum / float(N)
print("Latin hypercube result: {}".format(result['LHS']))
Latin hypercube result: 41.333301290132134
Note that the error in this estimation is significantly lower than that obtained using standard Monte Carlo sampling (and if you repeat this experiment many times, you should find this is true in most cases).
A low-discrepancy (or quasi-random) sequence is a deterministic mathematical sequence of numbers that has the property of low discrepancy. This means that there are no clusters of points and that the sequence fills space roughly uniformly. The Halton sequence is a low-discrepancy sequence that has useful properties for pseudo-stochastic sampling methods (also called “quasi-Monte Carlo” methods).
# from
def halton(dim: int, nbpts: int):
    h = numpy.empty(nbpts * dim)
    h.fill(numpy.nan)
    p = numpy.empty(nbpts)
    p.fill(numpy.nan)
    P = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
    lognbpts = math.log(nbpts + 1)
    for i in range(dim):
        b = P[i]
        n = int(math.ceil(lognbpts / math.log(b)))
        for t in range(n):
            p[t] = pow(b, -(t + 1))
        for j in range(nbpts):
            d = j + 1
            sum_ = math.fmod(d, b) * p[0]
            for t in range(1, n):
                d = math.floor(d / b)
                sum_ += math.fmod(d, b) * p[t]
            h[j*dim + i] = sum_
    return h.reshape(nbpts, dim)
N = 1000
seq = halton(2, N)
plt.title("2D Halton sequence")
plt.scatter(seq[:, 0], seq[:, 1], marker=".");
N = 1000
plt.title("Pseudo-random sequence")
plt.scatter(numpy.random.uniform(size=N), numpy.random.uniform(size=N), marker=".");
Comparing the scatterplot of the 2D Halton sequence with that of a pseudo-random sequence (pseudo-random meaning “as close to randomness as we can get with a computer”), note that the Halton sequence looks less “random” and covers the space in a more regular manner. For this reason, a low-discrepancy sequence gives, on average, better results for stochastic sampling problems than does a truly stochastic (really pseudo-random) sampling approach. Let’s test that on our integration problem:
seq = halton(2, N)
accum = 0
for i in range(N):
    x = 1 + seq[i][0] * (5 - 1)
    y = 1 + seq[i][1] * (5**2 - 1**2)  # note: y is generated but not used below
    accum += x**2
volume = 5 - 1
result['QMC'] = volume * accum / float(N)
print("Quasi-Monte Carlo result: {}".format(result['QMC']))
Quasi-Monte Carlo result: 41.21870562744141
Another quasi-random sequence commonly used for this purpose is the Sobol’ sequence.
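Recent SciPy versions (≥ 1.7, newer than the SciPy this notebook was written against) ship Sobol’, Halton and Latin hypercube generators in scipy.stats.qmc, so a Sobol’ version of the 1D estimate can be sketched as:

```python
import numpy
from scipy.stats import qmc   # requires SciPy >= 1.7

sampler = qmc.Sobol(d=1, scramble=False)
seq = sampler.random(1024)            # powers of 2 keep a Sobol' sequence balanced
x = 1 + seq[:, 0] * (5 - 1)           # map [0, 1) onto [1, 5)
estimate = 4 * numpy.mean(x ** 2)     # volume times the average of f
print(estimate)                       # close to the analytical 41.333...
```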
We can compare the error of our different estimates (each used the same number of runs):
for m in ['MC', 'LHS', 'QMC']:
    print("{:3} result: {:.8} Error: {:E}".format(m, result[m], abs(result[m]-result['analytical'])))
MC  result: 40.963454 Error: 3.698794E-01
LHS result: 41.333301 Error: 3.204320E-05
QMC result: 41.218706 Error: 1.146277E-01
Note that in practice, it’s possible to combine Latin Hypercube sampling with low discrepency sequences.
Let us now analyze an integration problem in dimension 4, the Ishigami function. This is a well-known function in numerical optimization and stochastic analysis, because it is very highly non-linear.
def ishigami(x1, x2, x3) -> float:
    return numpy.sin(x1) + 7*numpy.sin(x2)**2 + 0.1 * x3**4 * numpy.sin(x1)
We want to resolve the integral over $[-\pi, \pi]^3$. We start by resolving the problem analytically, using SymPy.
x1 = sympy.Symbol('x1')
x2 = sympy.Symbol('x2')
x3 = sympy.Symbol('x3')
expr = sympy.sin(x1) + 7*sympy.sin(x2)**2 + 0.1 * x3**4 * sympy.sin(x1)
res = sympy.integrate(expr, (x1, -sympy.pi, sympy.pi), (x2, -sympy.pi, sympy.pi), (x3, -sympy.pi, sympy.pi))
result['analytical'] = float(res)
Result from a standard Monte Carlo sampling method:
N = 10000
accum = 0
for i in range(N):
    xx1 = numpy.random.uniform(-numpy.pi, numpy.pi)
    xx2 = numpy.random.uniform(-numpy.pi, numpy.pi)
    xx3 = numpy.random.uniform(-numpy.pi, numpy.pi)
    accum += numpy.sin(xx1) + 7*numpy.sin(xx2)**2 + 0.1 * xx3**4 * numpy.sin(xx1)
volume = (2 * numpy.pi)**3
result['MC'] = volume * accum / float(N)
Using latin hypercube sampling:
seq = lhs(3, N)
accum = 0
for i in range(N):
    xx1 = -numpy.pi + seq[i][0] * 2 * numpy.pi
    xx2 = -numpy.pi + seq[i][1] * 2 * numpy.pi
    xx3 = -numpy.pi + seq[i][2] * 2 * numpy.pi
    accum += ishigami(xx1, xx2, xx3)
volume = (2 * numpy.pi)**3
result['LHS'] = volume * accum / float(N)
A low-discrepancy Halton sequence, for a quasi-Monte Carlo approach:
seq = halton(3, N)
accum = 0
for i in range(N):
    xx1 = -numpy.pi + seq[i][0] * 2 * numpy.pi
    xx2 = -numpy.pi + seq[i][1] * 2 * numpy.pi
    xx3 = -numpy.pi + seq[i][2] * 2 * numpy.pi
    accum += ishigami(xx1, xx2, xx3)
volume = (2 * numpy.pi)**3
result['QMC'] = volume * accum / float(N)
Comparing the results of the three estimation methods:
for m in ['MC', 'LHS', 'QMC']:
    print("{:3} result: {:.8} Error: {:E}".format(m, result[m], abs(result[m]-result['analytical'])))
MC  result: 874.12993 Error: 5.954187E+00
LHS result: 879.01467 Error: 1.083893E+01
QMC result: 868.23893 Error: 6.318098E-02
How can I get the name of a node used?
I want to get its name by string
out of curiosity what do you plan on doing with this bit of information? Have you tried anything?
The name is stored in the .dyn, so the simple route would be to read the file as text and parse the data you want from there.
I do not want to write it manually as a string. My end goal is to export this data to excel.
This will do it for DynamoRevit:
import clr # Adding the DynamoRevitDS.dll module to work with the Dynamo API clr.AddReference('DynamoRevitDS') import Dynamo # access to the current Dynamo instance and workspace dynamoRevit = Dynamo.Applications.DynamoRevit() currentWorkspace = dynamoRevit.RevitDynamoModel.CurrentWorkspace nodeNames = [] for i in currentWorkspace.Nodes: nodeNames.append(i.Name) OUT = nodeNames
Many thanks @john_pierson, that is what I was looking for. Specifically, I want to get the names of the nodes fed into the Python Script node as inputs.
- Raspberry Pi with wireless networking set up (see: Setting Up Wireless Networking on Your Raspberry Pi); plus SD card and micro USB power cable.
- Bluetooth USB adaptor. Adafruit sells a Bluetooth 4.0 BLE module confirmed working.
This can be a bit confusing at first, but the table is split down the middle and the column order is reversed on each side. On the far left and far right is the BCM pin number. Since we’re using 23, you should see the mode listed now as OUT. This is a useful little command just to get a good idea of what’s going on with all your pins at any point.
To write the pin high or low, just use
gpio -g write 23 1
gpio -g write 23 0
Hopefully, if you have the relay wired correctly, you’ll hear it clicking on and off. If not, don’t continue until you’ve figured out the wiring. Remember, you may need a higher voltage to activate the relay.
Once you’ve confirmed the relay and GPIO is working, add the Python modules for GPIO.
sudo apt-get install python-dev python-rpi.gpio
Now let’s modify our Python app to trigger the relay on or off when the phone is detected. You’ll find the final code at this Gist. Copy the existing detect.py to a new lock.py, and add the following import and setup commands:
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
RELAY = 23
GPIO.setup(RELAY, GPIO.OUT)
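The detection loop then only has to drive the relay pin from the lookup result. A sketch of how the pieces fit together — the authoritative final code is the Gist mentioned above; the hardware imports here only resolve on the Pi itself:

```python
import time

def desired_relay_state(lookup_result):
    # Energize the relay (retracting the lock) only while the phone responds.
    return 1 if lookup_result is not None else 0

def main():
    import bluetooth            # PyBluez -- available once installed
    import RPi.GPIO as GPIO     # only importable on the Pi
    GPIO.setmode(GPIO.BCM)
    RELAY = 23
    GPIO.setup(RELAY, GPIO.OUT)
    while True:
        result = bluetooth.lookup_name('78:7F:70:38:51:1B', timeout=5)
        GPIO.output(RELAY, desired_relay_state(result))
        time.sleep(5)

# on the Pi, call main(); run the script with sudo so GPIO access works
```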
Hooking It Up
Once you’ve confirmed the relay is firing with your proximity sensor, add in your electromagnet lock. On the relay side, bring the 12V positive into the terminal labelled COM (common), then the positive power input from the electromagnet to the NO terminal (normally open, ie. this is normally not connected to the common terminal but will be when you activate the relay).
Join the ground from the power supply and the electromagnet on the GND terminal.
Refer to the fitting instructions that came with your lock; the door needs to be quite thick, and it’s easier if it opens away from the side you want the lock to be on. Mine was the opposite, so I need the L-shaped mounting bracket, as well as an additional bit of wood to increase the door thickness.
Improvements
This was a proof of concept for me to build on with other projects, and really just to keep prying eyes out of my office when I’m not there – it’s not designed to be a foolproof security system. For that, you’d need a backup battery to keep the power flowing in the event of being cut.
Of course, if someone breaks into your house and goes to the trouble of cutting your power, you’ve probably got the bigger issue of them being a psychopath who’s out to kill you, rather than a casual burglar. You’d also want a physical deadbolt lock in addition to an electronic one, and a really big stick.
Of course, this proximity detection technique using Bluetooth isn’t just limited to an automatic door lock – you could also use it to trigger your garage door opening when you come home, or turn on your home cinema before you walk in the door.
What feature do you think I should add next? Did you have any problems building this? Let me know in the comments and I’ll do my best to help!
would this all be the same for the pi zero 1.3?
Sir i have a question, I have a 12v relay and how can I connect it to the raspberry pi without damaging it. I tried a lot of methods but my relay is not clicking but the python code is working. I'm also doing a locking system with raspberry pi.
Can more than 1 bluetooth device be added to the script? Ie a list of employees.
Yes. Just duplicate these lines:
result = bluetooth.lookup_name('78:7F:70:38:51:1B', timeout=5)
if (result != None):
print "User present"
with another address. I would skip the else statement or you'll have a list of users not there. Instead, just output when a user is present. You'll probably want to ping a URL to report presence though, not just output to the screen. You'll find our guide on integrating this with OpenHAB here:
Is the code compatible with the inbuilt Bluetooth LE in the Raspberry Pi 3
Good question. You should have a go and report back!
Hi i just bought a bluetooth usb adapter but i cant get to pair with my handphone
any suggest ?
Use this one. It works on cheap Nokia devices and also the iPhone 4s.
The inquiry.py script is offline, where can i get it? Thanks in advance
to use in the wget, use this url, or you'll get a bunch of web page nonsense too:
Hi James,
Very nice tutorials. I came across your website when I was setting up openhab for my raspberry pi.
Can you point me to any useful resource which I can use to create a "binding" for a setup of my own. Let say you want to add this locking system into your openhab how would you do it. I assume there is no binding available so one has to create a binding for the system above. Any tips will be highly appreciated.
That's far beyond my level of ability I'm afraid, but there is a wiki page about it:
Is it also possible to detect 2 or 3 i-phones/telephones for this project ? So i can add multiple users for this project.
I suck at Python programming, but a quick hack to do this would be to nest the if statements on failure, like:
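For example, generalized to a list of addresses (the second address here is hypothetical; on the Pi you would pass a wrapper around bluetooth.lookup_name as the lookup function):

```python
KNOWN_PHONES = ['78:7F:70:38:51:1B', 'AA:BB:CC:DD:EE:FF']   # second one hypothetical

def anyone_present(lookup):
    # lookup is a function(address) -> device name, or None on no response.
    for address in KNOWN_PHONES:
        if lookup(address) is not None:
            return True      # stop at the first phone that answers
    return False
```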
Copy the existing detect.py to a new lock.py
How do i make a new lock.py ?
Just type:
cp detect.py lock.py
This is the linux command for copy, original file into a new file.
Ok thanx. I did copy te script from "at this Gist" and put my adres from my phone into the script. at nano lock.py. So what should i do next? help...
Yuu need to wire up the relay next - that's the final code you've got, so you can continue on to "hooking it up" section.
The relay is connected. But nothing happens if i turn on/off bluetooh.
And what about the next lines ? IF statement ?? Pefix the command with sudo ??.
Did detect.py work at the start of the tutorial? ie, is it actually detecting your phone, without the relay stuff? You need to make sure that one is working first before moving onto the electronics stuff.
"Prefix" means "Put before". So that means type:
sudo python lock.py
to run the version that accesses GPIO.
Yes that works allright detect.py..
Ok iwill try that and see what it does.
Ok yes now its working. Tank you for youre time to help me
How do i start the script: sudo python lock.py ? after reboot or new start ?
Add it to your rc.local file. Start with
sudo nano /etc/rc.local
Then add a line before the exit 0:
sudo python /home/pi/lock.py < /dev/null &
(or wherever your lock.py is stored, I've just assumed the pi home directory there). Save by hitting CTRL-X, then Y. Then if this is the first time you've used rc.local, you'll need to set the execute bits on it, with the command:
sudo chmod +x /etc/rc.local
Ok that works fine. Nice tutorial. Thanks again...
I am very new to RPi. I have the RPi 2 and i am have the bluetooth and relay connected and can toggle using the gpio -g write 23 1
gpio -g write 23 0 commands.
I am having problems with the script and hope someone can help me. I can't get the relay to toggle. I am using the Keyes 4 channel relay board. Please look at this script:

#!/usr/bin/python
import bluetooth
import time

while True:
    print "Checking " + time.strftime("%a, %d %b %Y %H:%M:%S", time.gmtime())
    result = bluetooth.lookup_name('68:DB:CA:BE:B8:4E', timeout=3)
    if (result != None):
        print "User present"
    else:
        print "User out of range"
    time.sleep(5)

import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
RELAY = 23
GPIO.setup(RELAY, GPIO.OUT)
Dude, why don't you make it a product and sell it to consumers? Great invention!
Hi thank you very much now the bluetooth problem is solved :)
Hi, i'm currently in the middle of trying out this project. I've written up the codes inside detect.py but it keeps checking when i run the file. It doesn't says that user present. My phone is located right beside the pi.
Couple of thoughts:
1. Check the hardware address is correct and following the same format. Make sure there are no spaces in there accidentally.
2. What kind of Bluetooth adaptor is it, and is your phone compatible with BLE? If it's an older phone without BLE, I'm not sure it would respond to a BLE device, but I could be wrong.
Hi Arifah - actually, I think this is because your phone isn't open on the Bluetooth settings page. It needs to be open on that settings to be in discoverable mode, otherwise it wont broadcast its address. LAter the program can scan for a specific address, but for detect.py to work it must be discoverable.
Hi thank you very much now the bluetooth problem is solved ????. | http://www.makeuseof.com/tag/auto-locking-office-door-smartphone-proximity-sensor/ | CC-MAIN-2017-09 | refinedweb | 1,703 | 75.71 |
From: Ion Gaztañaga (igaztanaga_at_[hidden])
Date: 2007-03-17 08:54:26
Cory Nelson wrote:
>> What is your evaluation of the design?
>
> Very good. It was a pleasure to use. I have long wanted an easy way
> to apply C techniques in a true to C++ style, and this nails it.
>> What is your evaluation of the implementation?
>
> I did not review the r/b tree, but everything else seemed well written.
Thanks.
>> What is your evaluation of the documentation?
>
> I found documentation lacking. I would like to see full class
> references: it took me a bit of hunting to find a commonly used
> function container::current() which was very frustrating.
What do you mean with full class references? Current reference
lists all the operations of every public class. What do you think is
missing?
>> And finally, every review should answer this question:
>> Do you think the library should be accepted as a Boost library?
>
> Absolutely, I vote to accept.
Thanks for your vote!
Regards
Ion
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2007/03/118145.php | CC-MAIN-2020-40 | refinedweb | 188 | 70.09 |