The SGX architecture defines an extensive set of error codes that are
used by ENCL{S,U,V} instructions to provide software with (somewhat)
precise error information.  Though they are architectural, define the
known error codes in a separate file from sgx_arch.h so that they can
be exposed to userspace.  For some ENCLS leafs, e.g. EINIT, returning
the exact error code on failure can enable userspace to make informed
decisions.
---
 arch/x86/include/uapi/asm/sgx_errno.h | 91 +++++++++++++++++++++++++++
 1 file changed, 91 insertions(+)
 create mode 100644 arch/x86/include/uapi/asm/sgx_errno.h

diff --git a/arch/x86/include/uapi/asm/sgx_errno.h b/arch/x86/include/uapi/asm/sgx_errno.h
new file mode 100644
index 000000000000..48b87aed58d7
--- /dev/null
+++ b/arch/x86/include/uapi/asm/sgx_errno.h
@@ -0,0 +1,91 @@
+/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+/*
+ * Copyright(c) 2018 Intel Corporation.
+ *
+ * Contains the architecturally defined error codes that are returned by SGX
+ * instructions, e.g. ENCLS, and may be propagated to userspace via errno.
+ */
+
+#ifndef _UAPI_ASM_X86_SGX_ERRNO_H
+#define _UAPI_ASM_X86_SGX_ERRNO_H
+
+/**
+ * enum sgx_encls_leaves - return codes for ENCLS, ENCLU and ENCLV
+ * %SGX_SUCCESS: No error.
+ * %SGX_INVALID_SIG_STRUCT: SIGSTRUCT contains an invalid value.
+ * %SGX_INVALID_ATTRIBUTE: Enclave is not attempting to access a resource
+ * for which it is not authorized.
+ * %SGX_BLKSTATE: EPC page is already blocked.
+ * %SGX_INVALID_MEASUREMENT: SIGSTRUCT or EINITTOKEN contains an incorrect
+ * measurement.
+ * %SGX_NOTBLOCKABLE: EPC page type is not one which can be blocked.
+ * %SGX_PG_INVLD: EPC page is invalid (and cannot be blocked).
+ * %SGX_EPC_PAGE_CONFLICT: EPC page in use by another SGX instruction.
+ * %SGX_INVALID_SIGNATURE: Enclave's signature does not validate with
+ * public key enclosed in SIGSTRUCT.
+ * %SGX_MAC_COMPARE_FAIL: MAC check failed when reloading EPC page.
+ * %SGX_PAGE_NOT_BLOCKED: EPC page is not marked as blocked.
+ * %SGX_NOT_TRACKED: ETRACK has not been completed on the EPC page.
+ * %SGX_VA_SLOT_OCCUPIED: Version array slot contains a valid entry.
+ * %SGX_CHILD_PRESENT: Enclave has child pages present in the EPC.
+ * %SGX_ENCLAVE_ACT: Logical processors are currently executing
+ * inside the enclave.
+ * %SGX_ENTRYEPOCH_LOCKED: SECS locked for EPOCH update, i.e. an ETRACK is
+ * currently executing on the SECS.
+ * %SGX_INVALID_EINITTOKEN: EINITTOKEN is invalid and enclave signer's
+ * public key does not match IA32_SGXLEPUBKEYHASH.
+ * %SGX_PREV_TRK_INCMPL: All processors did not complete the previous
+ * tracking sequence.
+ * %SGX_PG_IS_SECS: Target EPC page is an SECS and cannot be
+ * blocked.
+ * %SGX_PAGE_ATTRIBUTES_MISMATCH: Attributes of the EPC page do not match
+ * the expected values.
+ * %SGX_PAGE_NOT_MODIFIABLE: EPC page cannot be modified because it is in
+ * the PENDING or MODIFIED state.
+ * %SGX_PAGE_NOT_DEBUGGABLE: EPC page cannot be modified because it is in
+ * the PENDING or MODIFIED state.
+ * %SGX_INVALID_COUNTER: {In,De}crementing a counter would cause it to
+ * {over,under}flow.
+ * %SGX_PG_NONEPC: Target page is not an EPC page.
+ * %SGX_TRACK_NOT_REQUIRED: Target page type does not require tracking.
+ * %SGX_INVALID_CPUSVN: Security version number reported by CPU is less
+ * than what is required by the enclave.
+ * %SGX_INVALID_ISVSVN: Security version number of enclave is less than
+ * what is required by the KEYREQUEST struct.
+ * %SGX_UNMASKED_EVENT: An unmasked event, e.g. INTR, was received
+ * while the instruction was executing.
+ * %SGX_INVALID_KEYNAME: Requested key is not supported by hardware.
+ */
+enum sgx_return_codes {
+	SGX_SUCCESS = 0,
+	SGX_INVALID_SIG_STRUCT = 1,
+	SGX_INVALID_ATTRIBUTE = 2,
+	SGX_BLKSTATE = 3,
+	SGX_INVALID_MEASUREMENT = 4,
+	SGX_NOTBLOCKABLE = 5,
+	SGX_PG_INVLD = 6,
+	SGX_EPC_PAGE_CONFLICT = 7,
+	SGX_INVALID_SIGNATURE = 8,
+	SGX_MAC_COMPARE_FAIL = 9,
+	SGX_PAGE_NOT_BLOCKED = 10,
+	SGX_NOT_TRACKED = 11,
+	SGX_VA_SLOT_OCCUPIED = 12,
+	SGX_CHILD_PRESENT = 13,
+	SGX_ENCLAVE_ACT = 14,
+	SGX_ENTRYEPOCH_LOCKED = 15,
+	SGX_INVALID_EINITTOKEN = 16,
+	SGX_PREV_TRK_INCMPL = 17,
+	SGX_PG_IS_SECS = 18,
+	SGX_PAGE_ATTRIBUTES_MISMATCH = 19,
+	SGX_PAGE_NOT_MODIFIABLE = 20,
+	SGX_PAGE_NOT_DEBUGGABLE = 21,
+	SGX_INVALID_COUNTER = 25,
+	SGX_PG_NONEPC = 26,
+	SGX_TRACK_NOT_REQUIRED = 27,
+	SGX_INVALID_CPUSVN = 32,
+	SGX_INVALID_ISVSVN = 64,
+	SGX_UNMASKED_EVENT = 128,
+	SGX_INVALID_KEYNAME = 256,
+};
+
+#endif /* _UAPI_ASM_X86_SGX_ERRNO_H */
-- 
2.19.1

Source: https://lkml.org/lkml/2019/4/17/352
14. Bulls and Cows / Mastermind in Tkinter
By Bernd Klein. Last modified: 10 Jan 2022.
Implementation in Python using Tkinter
In this chapter of our advanced Python topics we present an implementation of the game Bulls and Cows using Tkinter as the GUI. This game, also known as "Cows and Bulls", "Pigs and Bulls" or "Bulls and Cleots", is an old code-breaking game for two players. It goes back to the 19th century and can be played with paper and pencil. Bulls and Cows was the inspirational source of Mastermind, a game invented in 1970 by Mordecai Meirowitz. Mastermind and Bulls and Cows are very similar and the underlying idea is essentially the same, but Mastermind is sold in a box with a decoding board and pegs for the code and the feedback. Mastermind uses colours as the underlying code information, while Bulls and Cows uses digits.
The algorithm is explained in detail in our chapter "Mastermind / Bulls and Cows" in the Applications section. There you can also find the code for the combinatorics module.
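As a rough stand-in for that module, the stream of candidate colour codes can be produced with itertools; the colour list, the variable names and the assumption that a code never repeats a colour are illustrative, not taken from the tutorial:

```python
from itertools import permutations

# Illustrative set-up (not the tutorial's actual data): 6 colours,
# codes of 4 distinct colours each.
colours = ["red", "green", "blue", "yellow", "pink", "white"]
number_of_positions = 4

# Iterator over every ordered choice of 4 distinct colours out of 6.
permutation_iterator = permutations(colours, number_of_positions)

first = next(permutation_iterator)
total = 1 + sum(1 for _ in permutation_iterator)
print(first)   # ('red', 'green', 'blue', 'yellow')
print(total)   # 6 * 5 * 4 * 3 = 360 candidate codes
```

If repeated colours were allowed, itertools.product would be the analogue, giving 6**4 = 1296 codes.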
The Code for Mastermind
from tkinter import *
from tkinter.messagebox import *
import random

from combinatorics import all_colours

def inconsistent(p, guesses):
    """ the function checks, if a permutation p, i.e. a list of
    colours like p = ['pink', 'yellow', 'green', 'red'], is
    consistent with the previous guesses. Each previous colour
    permutation guess[0] compared (check()) with p has to return the
    same amount of blacks (rightly positioned colours) and whites
    (right colour at wrong position) as the corresponding evaluation
    (guess[1] in the list guesses) """
    for guess in guesses:
        res = check(guess[0], p)
        (rightly_positioned, permutated) = guess[1]
        if res != [rightly_positioned, permutated]:
            return True     # inconsistent
    return False            # i.e. consistent

def answer_ok(a):
    """ checks if an evaluation given by the human player makes
    sense. 3 blacks and 1 white make no sense, for example. """
    (rightly_positioned, permutated) = a
    if (rightly_positioned + permutated > number_of_positions) \
       or (rightly_positioned + permutated < len(colours) - number_of_positions):
        return False
    if rightly_positioned == 3 and permutated == 1:
        return False
    return True

def new_evaluation(current_colour_choices):
    """ gets an evaluation of the current guess, checks its
    consistency, appends the guess together with the evaluation to
    the list of guesses, shows the previous guesses and creates a
    new guess """
    rightly_positioned, permutated = get_evaluation()
    if rightly_positioned == number_of_positions:
        return (current_colour_choices, (rightly_positioned, permutated))
    if not answer_ok((rightly_positioned, permutated)):
        print("Input Error: Sorry, the input makes no sense")
        return (current_colour_choices, (-1, permutated))

    guesses.append((current_colour_choices, (rightly_positioned, permutated)))
    view_guesses()

    current_colour_choices = create_new_guess()
    show_current_guess(current_colour_choices)
    if not current_colour_choices:
        return (current_colour_choices, (-1, permutated))
    return (current_colour_choices, (rightly_positioned, permutated))

def check(p1, p2):
    """ check() calculates the number of bulls (blacks) and cows
    (whites) of two permutations """
    blacks = 0
    whites = 0
    for i in range(len(p1)):
        if p1[i] == p2[i]:
            blacks += 1
        elif p1[i] in p2:
            whites += 1
    return [blacks, whites]

def create_new_guess():
    """ a new guess is created, which is consistent with the
    previous guesses """
    next_choice = next(permutation_iterator)
    while inconsistent(next_choice, guesses):
        try:
            next_choice = next(permutation_iterator)
        except StopIteration:
            print("Error: Your answers were inconsistent!")
            return ()
    return next_choice

Source: https://python-course.eu/tkinter/bulls-and-cows-mastermind-in-tkinter.php
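The scoring function above can be exercised on its own; here is a minimal round-trip (the colour values are made up for illustration, and check is repeated verbatim so the snippet is self-contained):

```python
def check(p1, p2):
    """Return [blacks, whites] for two codes: blacks count positions
    where the colours match exactly, whites count colours that occur
    in the other code but at a different position."""
    blacks = 0
    whites = 0
    for i in range(len(p1)):
        if p1[i] == p2[i]:
            blacks += 1
        elif p1[i] in p2:
            whites += 1
    return [blacks, whites]

secret = ["red", "green", "blue", "yellow"]
guess = ["red", "blue", "green", "pink"]

result = check(secret, guess)
print(result)  # -> [1, 2]: one bull ("red"), two cows ("green", "blue")
```

Note that this simple membership test over-counts whites when a code contains the same colour twice; the tutorial sidesteps that by drawing codes from permutations of distinct colours.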
43. Problems
This chapter contains all the problems at one place. Please try to solve these to gain a better understanding of language. Note that these are not categorized in any way but I have tried to maintain a certain order.
All programs should have function prototypes, global declarations, inline functions, constants and macros (never use these gratuitously) wherever applicable.
Try to find which of the following characters are invalid in a variable name and which can be used as the first character.
Which of the following variable names are valid? Refer to previous problem to find the answer.
Write a program which takes a number and prints it. If the number is 1 then it prints “One” if it is 2 then it prints “Two”. Do it for up to 10.
Given a radius as input find out its area and perimeter and print them.
Write a program to print Pascal’s triangle. Details of Pascal’s triangle can be found at
Given a number in decimal compute its hex, octal and binary values and print them.
Given a number in hex compute its decimal, octal and binary values and print them.
Given an MxN matrix find its transpose and, when M equals N, its inverse.
Given an LxM and an MxN matrix find their product.
Given N find the total number of square on NxN chessboard.
Given x in degress find Sine, Cosine and Tangent of x using Taylor’s expansion upto an accuracy of 10 places after decimal.
Take a female’s name, age and color of dress as input. If she is below 16 and wearing blue then print “She is young and she is wearing blue dress” else print “She is grown up and wearing blue dress”. If she is wearing any other color dress then print that color. Try to minimise number of comparisons. Also, try to avoid lots of if-else. Try this using switch.
What would be appropriate data type for previous problem?
Optimize the Fibonacci program given in the book.
Take a string as input. The string can be of any length. Then print the string.
Find out what the problem is with the strcpy and strlen functions of the standard library.
Write a pseudo random generator using modular arithmetic.
Consider a company which has lots of employees. An employee can have name, age, address line1, line2, salary, tax cut percentage and take home salary. As input any number of employees can be entered with all the data. Find the take home salary and display all data for all employees.
Write signal handlers for kill, terminate and interrupt signals. When program receives those signals print the signal number and exit.
Write a bignum data structure. This structure should be able to hold any integral number, limited only by memory. Perform addition, subtraction, multiplication and division. Also write modulus, nth root and nth power functions.
Given a number upto trillions print its textual form.
Implement a loop without using any of the loop constructs. Provide two methods. Note that the processor does not have any instructions for looping.
Optimize an m*n iteration single loop for speed.
Given two strings find out whether second is a substring of first.
Given a string which has only words, numbers, spaces and final period split the string on the basis of spaces. Print all tokens. Print final token without period. Write two programs one using strtok library function and one without it.
Given an array of integers print all duplicates.
Given an array of integers find the max subarray sum.
Given a complex number find its square root without using csqrt library function.
Write a complex number class and implement all functions of complex.h header file.
Implement strcpy, strlen, strcat, strcmp, strncpy, strncat and strncmp.
Implement printf. For printing to screen use fprintf.
Read a text file and print its statistics: number of letters, frequency of each letter, number of words, frequency of each word and number of sentences.
Append “C is cool.” to a text file.
Write a program which will have a function func. It can do anything or nothing. Depending on the -D switch passed on the compilation command line, the function should be included or discarded during compilation.
Write a program where you malloc an integer pointer. Do not free it. Check your code with valgrind for possible memory leak.
Generate a core of a program. You will need a SIGSEGV or other signals which generate a core.
Use gdb to put a breakpoint, watch (variable, array, pointer, structure members), passing command line arguments, examine core, listing source, stepping into a function, stepping over and backtrace.
Learn various functionalities of valgrind.
Use clang’s static analyzer for all programs written till now to check your code.
Traverse a matrix in spiral form. Both going in and out.
Given a two dimensional array with all values set to 0, you will be given the row and column numbers of n cells as input. Draw straight lines between the cells by setting values to 1, constructing the polygon as well as possible. Hint: Use Bresenham's line drawing algorithm. You can find the details at
Write a program to parse XML and json files.
Write a program which will give similar output when run against a filename.c as given by "gcc -E filename.c". Essentially you are supposed to write a preprocessor.
What is the problem with following code?
#include <stdio.h>

int main()
{
    float f = 0.0;

    for(int i=0; i<10; i++) {
        f += 0.1;
    }

    printf("%f\n", f);

    if(f == 1.0)
        printf("%f\n", f);

    return 0;
}
What is the problem with following code?
#include <stdio.h>

int main()
{
    int i = 0;

    if(i == 0);
    {
        print("Hello there\n");
    }

    return 0;
}
What is the problem with following code?
#include <stdio.h>

int main()
{
    for(int i = 0; i < 10; i--)
        continue;

    for(int i = 0; i < 10; i--)
        i = 0;

    return 0;
}
Consider a sorted array. How would you search a value in it?
Print reversed Pascal triangle.
Print following without using loops.
   *
  ***
 *****
*******
 *****
  ***
   *
Use two files as data. Store their parity in a third file. Then use one of the data files and the parity file to regenerate the other data file.
Find your local time then go back by 5 hrs 20 mins and 36 seconds and print that time.
Take a year as input and generate its calendar.
Given two dates as input in following format MMDDYYYY find the no. of days between them.
Open a file, fseek it past end and see what happens to its file size.
Implement tower of Hanoi using iterative and recursive methods. Details of tower of Hanoi can be found at
Compute the value of \(\pi\) up to a given precision using the various formulas described at
Write a parser for BNF grammar and use it to validate input. Use C99’s grammar as sample. Details of BNF grammar can be found at Hint: Look up Bison and Flex.
Given an input pattern, find it in a given text file. The input can be any valid POSIX regex.
Please find more problems at SPOJ and Valladolid programming championship and practice them. | https://www.ashtavakra.org/c-programming/problems/ | CC-MAIN-2022-05 | en | refinedweb |
Splashscreen
The Splash Screen plugin provides control options for displaying and hiding a splash screen, commonly during application launch.
#Installation
If you have not already setup Ionic Enterprise in your app, follow the one-time setup steps.
Next, install the plugin:
- Capacitor
- Cordova
ionic cordova plugin add @ionic-enterprise/splashscreen
#Preferences
You can add the following preferences in your config.xml:
AutoHideSplashScreen (boolean, defaults to true). Indicates whether to hide the splash screen automatically or not. The splash screen is hidden after the amount of time specified in the SplashScreenDelay preference.
<preference name="AutoHideSplashScreen" value="true" />
SplashScreenDelay (number, defaults to 3000). Amount of time in milliseconds to wait before automatically hiding the splash screen. This value used to be in seconds (but is now milliseconds), so values less than 30 will continue to be treated as seconds. (Consider this a deprecated patch that will disappear in some future version.)
<preference name="SplashScreenDelay" value="3000" />
FadeSplashScreen (boolean, defaults to true): Set to false to prevent the splash screen from fading in and out when its display state changes.

<preference name="FadeSplashScreen" value="false"/>
ShowSplashScreenSpinner (boolean, defaults to true): Set to false to hide the splash screen spinner.
<preference name="ShowSplashScreenSpinner" value="false"/>
#Android Only Preferences

SplashMaintainAspectRatio preference is optional. If set to true, the splash screen drawable is not stretched to fit the screen, but instead simply "covers" the screen, like CSS "background-size: cover".
<preference name="SplashMaintainAspectRatio" value="true" />
SplashShowOnlyFirstTime preference is optional and defaults to true. When set to true the splash screen will only appear on application launch.
<preference name="SplashShowOnlyFirstTime" value="true" />
SplashScreenSpinnerColor preference is also optional and is ignored when not set. Setting it to a valid color name or HEX color code will change the color of the spinner on Android 5.0+ devices.
<preference name="SplashScreenSpinnerColor" value="white" />
#Index
#Classes
#Classes
#SplashScreen
SplashScreen:
name: Splash Screen
description: This plugin displays and hides a splash screen during application launch. The methods below allow showing and hiding the splashscreen after the app has loaded.
usage:
import { SplashScreen } from '@ionic-enterprise/splashscreen/ngx';

constructor(private splashScreen: SplashScreen) { }

...

this.splashScreen.show();

this.splashScreen.hide();
#hide
▸ hide(): void
Hides the splashscreen
Returns: void
#show
▸ show(): void
Shows the splashscreen
Returns: void
Release Notes
#5.0.3 (May 09, 2019)
- Update CI configuration and README (#210, #208, #198, #194)
- Add or update GitHub pull request and issue template
- CB-13826 Incremented plugin version.
- CB-12277 (android) avoid NullPointerException on splashImageView when removing splashscreen
#5.0.2 (Jan 24, 2018)
#5.0.1 (Dec 27, 2017)
#5.0.0 (Dec 15, 2017)
#4.1.0 (Nov 06, 2017)
- CB-13473 (CI) Removed Browser builds from AppVeyor
- CB-12011 (android) added the possibility to change the spinner color on Android 5.0+ apps
- CB-13028 (CI) Browser builds on Travis and AppVeyor
- CB-13094 (android) Don't show splash when activity being finished
- CB-11487 (browser) Documented AutoHideSplashScreen for Browser
- CB-11488 (browser) The hide() call became non re-entrant after the addition of fade out. This fixes the issue.
- CB-11487 (browser) The standard AutoHideSplashScreen config.xml property is now supported by the Browser platform.
- CB-11486 (browser) splashScreenDelay is now fed through parseInt to ensure it is an integer by the time its value is passed in to setTimeout() in hide().
- CB-12847 added bugs entry to package.json.
#4.0.3 (Apr 27, 2017)
#4.0.2 (Feb 28, 2017)
- CB-12353 Corrected merges usage in plugin.xml
- CB-12369 Add plugin typings from DefinitelyTyped
- CB-12363 Added build badges for iOS 9.3 and iOS 10.0
- CB-12230 Removed Windows 8.1 build badges
#4.0.1 (Dec 07, 2016)
- CB-12224 Updated version and RELEASENOTES.md for release 4.0.1
- CB-11751 'extendedSplashScreen' is undefined Document that splashscreen needs to be disabled on Windows in case of updating entire document body
- CB-9287 Not enough Icons and Splashscreens for Windows 8.1 and Windows Phone 8.1
- CB-11917 - Remove pull request template checklist item: "iCLA has been submitted…"
- CB-11830 (iOS) Fix doc typos in PR#114
- CB-11829 (iOS) Support for CB-9762; docs (CB-11830)
- CB-11832 Incremented plugin version.
#4.0.0 (Sep 08, 2016)
- CB-11795 Add 'protective' entry to cordovaDependencies
- CB-11326 Prevent crash when initializing plugin after navigating to another URL
- Fix crash on iOS when reloading page from remote Safari
- Add badges for paramedic builds on Jenkins
- Add pull request template.
- CB-11179 Extend the windows-splashscreen docs
- CB-11159 Fix flaky splashscreen native tests
- CB-11156 Change default FadeSplashScreenDuration value
- CB-8056 Updated the dependency version, added it to the docs
- CB-10996 Adding front matter to README.md
- CB-8056 Implement splashscreen for Windows platform
- CB-6498 Misleading documentation in Android Quirks
#3.2.2 (Apr 15, 2016)
- CB-10979 Fix splashscreen iOS native tests. Added jshintignore for tests/ios
- CB-10895 Transparent Splashscreen view sometimes remains
- CB-10562 hide() not working in latest splashscreen plugin 3.1.0 in iOS
- CB-10688 Plugin Splashscreen Readme must have examples.
- CB-10864 Run iOS native tests on Travis
#3.2.1 (Mar 09, 2016)
- CB-10764 Remove emoji in cordova-plugin-splashscreen
- CB-10650 Non-index content.src causes Splashscreen to be not displayed on Browser
- CB-10636 Add JSHint for plugins
- CB-10606 fix deprecation warning for interfaceOrientation on iOS
- chore: edit package.json license to match SPDX id
#3.2.0 (Feb 09, 2016)
- CB-10422 Splashscreen displays black screen with no image on Android
- CB-10412 AutoHideSplashScreen "false" isn't taken in account on iOS
- CB-9516 Android SplashScreen - Spinner Does Not Display
- CB-9094 Smarter autohide logic on Android
- CB-8396 Add AutoHideSplashScreen logic to Android's Splashscreen
#3.1.0 (Jan 15, 2016)
- CB-9538 Implementing FadeSplashScreen feature for Android
- CB-9240 Cordova splash screen plugin iPad landscape mode issue
- CB-10263 Fix splashscreen plugin filenames for Asset Catalog
- CB-9374 Android add SplashShowOnlyFirstTime as preference
- CB-10244 Don't rotate the iPhone 6 Plus splash
- CB-9043 Fix the ios splashscreen being deformed on orientation change
- CB-10079 Splashscreen plugin does not honor SplashScreenDelay on iOS
- CB-10231 Fix FadeSplashScreen to default to true on iOS
#3.0.0 (Nov 18, 2015)
- CB-10035 Updated RELEASENOTES to be newest to oldest
- Fixing contribute link.
- CB-9750 FadeSplashDuration is now in msecs
- CB-8875 FadeSplashScreen was not fading
- CB-9467 SplashScreen does not show any image in hosted app on Windows 10
- CB-7282 Document AutoHideSplashScreen preference
- CB-9327 - Splashscreen not receiving CDVPageLoadNotification
- WP8: Avoid config value of a wrong element.
#2.1.0 (Jun 17, 2015)
- added missing license headers
- CB-9128 cordova-plugin-splashscreen documentation translation: cordova-plugin-splashscreen
- fix npm md issue
- Fixed iOS unit tests.
- CB-3562: Disable screen rotation for iPhone when splash screen is shown. (closes #47)
- CB-8988: Fix rotation on iOS/iPad (closes #46)
- CB-8904: Don't reset the static variable when it's destroyed, otherwise we might as well just have a member variable
- Removed wp7 from plugin.xml and package.json
- CB-8750 [wp8]: Rewrite resoultion helper
- CB-8750 [wp8]: Allow resolution-specific splashscreen images
- CB-8758 [wp8]: UnauthorizedAccessException on hide()
#2.0.0 (Apr 15, 2015)
- give users a way to install the bleeding edge.
- CB-8746 gave plugin major version bump
- CB-8797 - Splashscreen preferences FadeSplashScreenDuration and FadeSplashScreen (iOS) are missing
- CB-8836 - Crashes after animating splashscreen
- CB-8753 android: Fix missing import in previous commit
- CB-8753 android: Adds SplashMaintainAspectRatio preference (close #43)
- CB-8683 changed plugin-id to pacakge-name
- CB-8653 properly updated translated docs to use new id
- CB-8653 updated translated docs to use new id
- CB-8345 Make default for splashscreen resource "screen" (which is what template and CLI assume it to be)
- Revert "CB-8345 android: Make "splash" the default resource ID instead of null"
- Use TRAVIS_BUILD_DIR, install paramedic by npm
- CB-8345 android: Make "splash" the default resource ID instead of null
- docs: added Windows to supported platforms
- CB-7964 Add cordova-plugin-splashscreen support for browser platform
- CB-8653 Updated Readme
- [wp8] oops, Added back config parse result checks
- [WP8] code cleanup, minor refactors, comments to clarify some stuff.
- Extend WP8 Splash Screen to respect SplashScreen and SplashScreenDelay preferences from config file
- CB-8574 Integrate TravisCI
- CB-8438 cordova-plugin-splashscreen documentation translation: cordova-plugin-splashscreen
- CB-8538 Added package.json file
- CB-8397 Add support to 'windows' for showing the Windows Phone splashscreen
#1.0.0 (Feb 04, 2015)
- CB-8351 ios: Stop using deprecated IsIpad macro
- CB-3679 Add engine tag for Android >= 3.6.0 due to use of preferences
- CB-3679 Make SplashScreen plugin compatible with cordova-android@4.0.x
#0.3.5 (Dec 02, 2014)
- CB-7204 - Race condition when hiding and showing spinner (closes #21)
- CB-7700 cordova-plugin-splashscreen documentation translation: cordova-plugin-splashscreen
#0.3.4 (Oct 03, 2014)
- Finalized iOS splash screen (image name) tests. 176 tests in all, 44 for each type of device (iPad, iPhone, iPhone5, iPhone6, iPhone 6 Plus).
- CB-7633 - (Re-fix based on updated unit tests) iPhone 6 Plus support
- Updated iOS tests for locked orientations
- Added more iOS splash screen tests.
- CB-7633 - Add support for iPhone 6/6+
- Added failing iPhone 6/6 Plus tests.
- Added 'npm test'
- CB-7663 - iOS unit tests for splash screen
- Properly formatted splashscreen preference docs.
#0.3.3 (Sep 17, 2014)
- CB-7249 cordova-plugin-splashscreen documentation translation
- Renamed test dir, added nested plugin.xml
- added documentation for manual tests
- CB-7196 port splashscreen tests to framework
#0.3.2 (Aug 06, 2014)
- CB-6127 Updated translations for docs
- CB-7041 ios: Fix image filename logic when setting the iPad splash screen
- fixes Splashscreen crash on WP8
- Remove outdated doc
#0.3.1 (Jun 05, 2014)
- documentation translation: cordova-plugin-splashscreen
- Lisa testing pulling in plugins for plugin: cordova-plugin-splashscreen
- Lisa testing pulling in plugins for plugin: cordova-plugin-splashscreen
- Lisa testing pulling in plugins for plugin: cordova-plugin-splashscreen
- Lisa testing pulling in plugins for plugin: cordova-plugin-splashscreen
- CB-6810 Add license to CONTRIBUTING.md
- [wp8] updated quirk for and combined iOS,WP8,BB10 quirks as they are all the same
- [wp] implemented OnInit so splash screen can be shown before cordova page is loaded
- [wp] plugin must be autoloaded for AutoHideSplashScreen preference to work
- CB-6483 Use splash screen image from manifest on Windows8
- CB-6491 add CONTRIBUTING.md
- Revert "Merge branch 'tizen' of"
#0.3.0 (Apr 17, 2014)
- Add Tizen support to plugin
- CB-6422: [windows8] use cordova/exec/proxy
- CB-4051: [ios] - Re-fix - Splashscreen rotation problem (closes #13)
- CB-6460: Update license headers
- CB-6465: Add license headers to Tizen code
- Add NOTICE file
#0.2.7 (Feb 05, 2014)
- CB-3562 Fix aspect ratio on landscape-only iPhone applications
- CB-4051 fix for splashscreen rotation problem
#0.2.6 (Jan 02, 2014)
#0.2.5 (Dec 4, 2013)
- add ubuntu platform
- Added amazon-fireos platform. Change to use amazon-fireos as a platform if the user agent string contains 'cordova-amazon-fireos'
- CB-5124 - Remove splashscreen config.xml values from iOS Configuration Docs, move to plugin docs
#0.2.4 (Oct 28, 2013)
- CB-5128: add repo + issue tag to plugin.xml for splashscreen plugin
- CB-5010 Incremented plugin version on dev branch.
#0.2.3 (Oct 9, 2013)
- CB-4806 Re-fix Update splashscreen image bounds for iOS 7
- CB-4934 plugin-splashscreen should not show by default on Windows8
- CB-4929 plugin-splashscreen not loading proxy windows8
- CB-4915 Incremented plugin version on dev branch. | https://ionic.io/docs/supported-plugins/splashscreen | CC-MAIN-2022-05 | en | refinedweb |
This topic provides an introduction to metadata interfaces and metadata properties that can be defined on content models.
In this topic
- How it works
- Usage
- IContentData
- IContent
- IVersionable
- ILocale
- ILocalizable
- IReadOnly/IReadOnly<T>
- IModifiedTrackable
- IChangeTrackable
- IContentSecurable
- IRoutable
- ICategorizable
- IInitializableContent
- Exportable
How it works
EPiServer.Core.IContentData is the base interface that all content models implement. All content types, except BlockData, also implement the EPiServer.Core.IContent interface, which is required for a content instance to have a unique ID and its own lifecycle (that is, it can be loaded and saved individually). Through EPiServer.IContentRepository, you can perform CRUD (Create, Read, Update, Delete) operations on content instances implementing EPiServer.Core.IContent, as well as, for example, listing and moving content.
There are also many additional metadata interfaces a content type can implement that define the characteristics of the content type. Here is a list of the existing metadata interfaces with a description on the purpose of each interface and which properties they contain.
Usage
Because some of the interfaces are optional, a recommended pattern when working with a "general" IContent instance is to use the is or as operators to check if the instance implements an interface, as in this example:
public static CultureInfo Language(this IContent content)
{
    if (content == null)
        throw new ArgumentNullException(nameof(content));

    return (content is ILocale locale) ? locale.Language : CultureInfo.InvariantCulture;
}
public static bool IsModified(this IContent content)
{
    if (content == null)
        throw new ArgumentNullException(nameof(content));

    var modifiedTrackable = content as IModifiedTrackable;
    return modifiedTrackable == null || modifiedTrackable.IsModified;
}
Shared blocks
BlockData does not implement IContent, while shared block instances still have their own identity defined through IContent. This is accomplished at runtime: when a shared block instance is created (for example, through a call to IContentRepository.GetDefault&lt;T&gt;, where T is a type inheriting from BlockData), the CMS creates a new .NET type inheriting from T using a technique called mixin, in which the generated subclass implements some extra interfaces (including IContent).
That means that a shared block instance of T will implement IContent while an instance of T that is a property on a Page will not.
IContentData
Base interface for all content models. It contains a backing PropertyDataCollection that is used when loading/saving data from database.
IContent
Base interface for all content models that can have an own identity, and hence can be loaded/saved individually. IContent inherits IContentData. Note that ContentLink.ID and ContentGuid is the same for all language branches for a content item. So the combination of ContentLink/ContentGuid and ILocale.Language uniquely specifies the content instance (given that the content implements ILocalizable). All built-in base content types, except BlockData, implement IContent. Shared block instances implement IContent (and most other metadata interfaces) at runtime.
IVersionable
An optional interface for content that supports different versions for each language branch. There can only be at most one version published at each time. All built-in types, except ContentFolder, implement IVersionable. See Content Versions for more information regarding versions.
ILocale
An optional interface for content that specifes which language a specific content instance has. All built-in types except ContentFolder implements ILocale.
ILocalizable
An optional interface for content that support multiple language branches. Inherits ILocale. All built-in types, except ContentFolder and MediaData, implement ILocalizable.
IReadOnly/IReadOnly<T>
An optional interface for content that support read-only instances. It is highly recommended that content types implement IReadOnly, which ensures the integrity of the content instances when instances are served from cache and hence reused across different requests. All built-in base content classes implement IReadOnly.
IModifiedTrackable
An optional interface for content instances that support modified tracking. There is a high performance gain during save operations if IModifiedTrackable since then only data that has actually changed needs to be persisted. All built-in base content classes implement IModifiedTrackable.
IChangeTrackable
An optional interface for content instances that support tracking of changes. All built-in base content classes implement IChangeTrackable.
IContentSecurable
An optional interface for content instances that support access checks. All built-in base content classes implement IContentSecurable.
IRoutable
An interface that content items should implement if they are to be routable through a content URL. All built-in base content classes, except shared blocks, implement IRoutable.
ICategorizable
An interface that content items should implement if it should be possible to categorize them. All built-in base content classes, except content folders, implement ICategorizable.
IInitializableContent
An optional interface that can be implemented if default values should be added when a new instance of the content type is created.
IExportable
An optional interface that specifies how a content instance should be handled during export.
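The capability interfaces above are .NET types, but the pattern they embody — a core content interface plus optional, individually probed capabilities — is language-agnostic. Here is a minimal plain-Java sketch (all type names hypothetical, not Optimizely's API) of how a save routine can skip work for content whose IModifiedTrackable-style capability reports no changes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the capability interfaces described above.
interface Content { String name(); }                  // cf. IContent
interface Versionable { int version(); }              // cf. IVersionable
interface ModifiedTrackable { boolean isModified(); } // cf. IModifiedTrackable

// A content type opts into capabilities simply by implementing them.
class Page implements Content, Versionable, ModifiedTrackable {
    final String name; final int version; final boolean modified;
    Page(String name, int version, boolean modified) {
        this.name = name; this.version = version; this.modified = modified;
    }
    public String name() { return name; }
    public int version() { return version; }
    public boolean isModified() { return modified; }
}

public class CapabilitySketch {
    static final List<String> saved = new ArrayList<>();

    // Persist only content that either lacks modified-tracking
    // or actually reports changes.
    static void save(Content c) {
        if (c instanceof ModifiedTrackable && !((ModifiedTrackable) c).isModified()) {
            return; // unchanged: skip the expensive persist step
        }
        saved.add(c.name());
    }

    public static void main(String[] args) {
        save(new Page("home", 1, false)); // unchanged, skipped
        save(new Page("news", 2, true));  // changed, persisted
        System.out.println(saved);        // [news]
    }
}
```

The same instanceof-style probing is what lets a runtime treat interfaces such as IVersionable and ILocale as strictly optional add-ons to IContent.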
Last updated: Sep 28, 2021
https://world.optimizely.com/documentation/developer-guides/CMS/Content/content-metadata-properties/
The Basics
Basic class definitions begin with the keyword class, followed by a class name, followed by a pair of curly braces which enclose the definitions of the properties and methods belonging to the class.

Example #1 Simple Class definition

<?php
class SimpleClass
{
    // property declaration
    public $var = 'a default value';

    // method declaration
    public function displayVar() {
        echo $this->var;
    }
}
?>

The pseudo-variable $this is available when a method is called from within an object context. $this is a reference to the calling object.
Example #2 Some examples of the $this pseudo-variable
We're assuming that error_reporting is disabled for this example; otherwise the following code would trigger deprecated and strict notices, respectively, depending on the PHP version.
<?php
class A
{
    function foo()
    {
        if (isset($this)) {
            echo '$this is defined (';
            echo get_class($this);
            echo ")\n";
        } else {
            echo "\$this is not defined.\n";
        }
    }
}

class B
{
    function bar()
    {
        A::foo();
    }
}

$a = new A();
$a->foo();
A::foo();
$b = new B();
$b->bar();
B::bar();
?>
Output of the above example in PHP 5:
$this is defined (A)
$this is not defined.
$this is defined (B)
$this is not defined.
Output of the above example in PHP 7:
$this is defined (A)
$this is not defined.
$this is not defined.
$this is not defined.
Note:
If there are no arguments to be passed to the class's constructor, parentheses after the class name may be omitted.
Example #3 Creating an instance
<?php
$instance = new SimpleClass();

// This can also be done with a variable:
$className = 'SimpleClass';
$instance = new $className(); // new SimpleClass()
?>
In the class context, it is possible to create a new object by new self and new parent.
When assigning an already created instance of a class to a new variable, the new variable will access the same instance as the object that was assigned. This behaviour is the same when passing instances to a function. A copy of an already created object can be made by cloning it.
Example #4 Object Assignment

<?php
$instance = new SimpleClass();

$assigned   =  $instance;
$reference  =& $instance;

$instance->var = '$assigned will have this value';
$instance = null; // $instance and $reference become null

var_dump($instance);
var_dump($reference);
var_dump($assigned);
?>
PHP 5.4.0 introduced the possibility to access a member of a newly created object in a single expression:
Example #6 Access member of newly created object
<?php echo (new DateTime())->format('Y'); ?>
The above example will output something similar to:
2016
Properties and methods
Class properties and methods live in separate "namespaces", so it is possible to have a property and a method with the same name. Referring to both a property and a method uses the same notation, and whether a property will be accessed or a method will be called solely depends on the context, i.e. whether the usage is a variable access or a function call.
Example #7 Property access vs. method call
<?php
class Foo
{
    public $bar = 'property';

    public function bar() {
        return 'method';
    }
}

$obj = new Foo();
echo $obj->bar, PHP_EOL, $obj->bar(), PHP_EOL;
The above example will output:
property
method
That means that calling an anonymous function which has been assigned to a property is not directly possible. Instead the property has to be assigned to a variable first, for instance. As of PHP 7.0.0 it is possible to call such a property directly by enclosing it in parentheses.
Example #8 Calling an anonymous function stored in a property
<?php
class Foo
{
    public $bar;

    public function __construct() {
        $this->bar = function() {
            return 42;
        };
    }
}

$obj = new Foo();

// as of PHP 5.3.0:
$func = $obj->bar;
echo $func(), PHP_EOL;

// alternatively, as of PHP 7.0.0:
echo ($obj->bar)(), PHP_EOL;
The above example will output:

42
42

A class can inherit the methods and properties of another class by using the keyword extends in the class declaration. It is not possible to extend multiple classes; a class can only inherit from one base class.

Example #9 Simple Class Inheritance

<?php
class ExtendClass extends SimpleClass
{
    // Redefine the parent method
    function displayVar()
    {
        echo "Extending class\n";
        parent::displayVar();
    }
}

$extended = new ExtendClass();
$extended->displayVar();
?>

Since PHP 5.5, the class keyword is also used for class name resolution. You can get a string containing the fully qualified name of the ClassName class by using ClassName::class. This is particularly useful with namespaced classes.

Example #10 Class name resolution
<?php
namespace NS {
    class ClassName {
    }

    echo ClassName::class;
}
?>
The above example will output:
NS\ClassName
Note:
The class name resolution using ::class is a compile-time transformation. That means at the time the class name string is created, no autoloading has happened yet. As a consequence, class names are expanded even if the class does not exist. No error is issued in that case.

http://semantic-portal.net/php-language-reference-classes-objects-basics
@Target(value={TYPE,METHOD})
@Retention(value=RUNTIME)
@Documented
@Conditional(value=org.springframework.boot.autoconfigure.condition.OnBeanCondition.class)
public @interface ConditionalOnMissingBean
@Conditional that only matches when no beans meeting the specified requirements are already contained in the BeanFactory. None of the requirements must be met for the condition to match, and the requirements do not have to be met by the same bean.

When placed on a @Bean method, the bean class defaults to the return type of the factory method:

@Configuration
public class MyAutoConfiguration {

    @ConditionalOnMissingBean
    @Bean
    public MyService myService() {
        ...
    }

}

In the sample above the condition will match if no bean of type MyService is already contained in the BeanFactory.

public abstract Class<?>[] value
    The class types of beans that should be checked. The condition matches when no bean of each specified class is contained in the BeanFactory.

public abstract String[] type
    The class type names of beans that should be checked. The condition matches when no bean of each specified class is contained in the BeanFactory.

public abstract Class<?>[] ignored
    The class types of beans that should be ignored when identifying matching beans.

public abstract String[] ignoredType
    The class type names of beans that should be ignored when identifying matching beans.

public abstract Class<? extends Annotation>[] annotation
    The annotation type decorating a bean that should be checked. The condition matches when each specified annotation is missing from all beans in the BeanFactory.

public abstract String[] name
    The names of beans to check. The condition matches when each specified bean name is missing from the BeanFactory.

public abstract SearchStrategy search
    Strategy to decide if the application context hierarchy (parent contexts) should be considered.

public abstract Class<?>[] parameterizedContainer
    Additional classes that may contain the specified bean types within their generic parameters. For example, an annotation declaring value=Name.class and parameterizedContainer=NameRegistration.class would detect both Name and NameRegistration<Name>.

https://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/autoconfigure/condition/ConditionalOnMissingBean.html
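Outside of Spring, the semantics of this condition — register a fallback only when no bean of the required type is already present — can be sketched in a few lines of plain Java (a hypothetical toy registry, not Spring's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical toy registry illustrating "register a default only when
// no bean of the requested type already exists" -- the semantics that
// @ConditionalOnMissingBean provides declaratively in Spring Boot.
public class MissingBeanSketch {
    static final List<Object> beans = new ArrayList<>();

    // Add the candidate only if no existing bean is an instance of type.
    static <T> void registerIfMissing(Class<T> type, Supplier<T> candidate) {
        boolean present = beans.stream().anyMatch(type::isInstance);
        if (!present) {
            beans.add(candidate.get());
        }
    }

    public static void main(String[] args) {
        beans.add("user-supplied bean");                  // a String bean
        registerIfMissing(String.class, () -> "default"); // skipped: a String exists
        registerIfMissing(Integer.class, () -> 42);       // added: no Integer yet
        System.out.println(beans.size()); // 2
        System.out.println(beans.get(1)); // 42
    }
}
```

Spring Boot evaluates the real condition against bean definitions (including type hierarchies and the search strategy above) rather than live instances, but the net effect for auto-configuration is the same: user-supplied beans win over defaults.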
This article is in a series of articles across our product platforms showing how to print different pages on different printer trays. We recently provided the PDFOne .NET version. Here is the Document Studio .NET version. While PDFOne can only print PDF documents, Document Studio can print DOCX, DOC and images in addition to PDF. As this article was being written, some customers wanted to know how to print documents with a print preview. So, there is now a small print preview and document preview option as well.
When using Document Studio, printing operations can be done by the DocumentPrinter control. (The DocumentViewer control also has a print method, but DocumentPrinter provides more features.)

When you call the DocumentPrinter.Print() method, the printer control will print to the current default printer. To change current print settings, you should access the DocumentPrinter.PrintDocument property.

This DocumentPrinter.PrintDocument property wraps a System.Drawing.Printing.PrintDocument object. The PrintDocument exposes the printing subsystem available to the DocumentPrinter instance. The DocumentPrinter.PrintDocument.PrinterSettings.InstalledPrinters array, for example, will provide you with the names of the currently installed printers.

To change the current default printer, set the DocumentPrinter.PrintDocument.PrinterSettings.PrinterName property to one of the printer names in DocumentPrinter.PrintDocument.PrinterSettings.InstalledPrinters.

If you directly set the System.Drawing.Printing.PrintDocument.PrinterSettings.PrinterName property, DocumentPrinter will not pick up the change. This is because DocumentPrinter creates its own snapshot of the System.Drawing.Printing.PrintDocument object when you call its constructor.

After a printer has been selected, the number of trays in that printer can be obtained from the PrintDocument.PrinterSettings.PaperSources collection.

To make DocumentPrinter print to a particular tray, you need to set the PrintDocument.DefaultPageSettings.PaperSource property to one of the trays in the PrintDocument.PrinterSettings.PaperSources collection.

You can set the paper tray in the DocumentPrinter.BeginPreparePage event handler. The event arguments parameter for this handler includes the current page number.

To identify the selected document, you can use a DocumentViewer control. However, there is a small issue in this. Currently, the DocumentViewer control locks the document, preventing the printer control from reading it. In our other products, a loaded document can be reused in other controls. This feature has not yet been introduced in XtremeDocumentStudio .NET. So, for now, the viewer needs to close the document so that the printer can load it. (We will update this article when the feature is added.) The DocumentPrinter.PageScaling options can be mapped to the DocumentViewer.ZoomType property.

To print a document, you need to call DocumentPrinter.LoadDocument() with the pathname or stream containing the document and then call the DocumentPrinter.Print() method. After the document has been printed, call DocumentPrinter.CloseDocument().

There is a small issue here. The DocumentViewer control used for the preview feature has already loaded the document that needs to be printed. Currently, a file lock prevents the printer control from loading the document. In our other products, a printer control can reuse a document already loaded by a viewer control. This feature has not yet been introduced in Document Studio .NET. So, for now, the preview control needs to close the document so that it can be loaded by the printer control. We will add this feature in the next update and revise this article subsequently.
In the following example, the form load event queries installed printers and makes them available in a list box.
When a printer is selected, its trays are listed in two list boxes - one for the first page and the other for the rest of the pages. The idea is you keep the glossy paper in the first tray and regular printer sheets in the other tray. (It was a user requirement.)
When a printer does not have more than one tray, the DocumentPrinter.BeginPreparePage event handler is not set.
When a file is selected, the document viewer control is used to provide a preview. The viewer control's ZoomType property is used to provide alternate previews for the DocumentPrinter.PageScaling setting.
Here is the code. (You can also find it in this sample project archive: XDocDotNET-PrintToTrays.zip. To get this project working, please replace the PrinterTraySelection.xdoc_license_key variable with your registered/trial license key. You also have to add references to the DLLs from your XtremeDocumentStudio installation folder.)
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using Gnostice.Documents.Controls.WinForms;
using System.Drawing.Printing;

namespace DocumentStudio_Examples
{
    public partial class PrintToTraysForm : Form
    {
        DocumentPrinter dp;
        DialogResult dr;

        public PrintToTraysForm()
        {
            InitializeComponent();
            Gnostice.Documents.Framework.ActivateLicense(PrinterTraySelection.xdoc_license_key);
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            dp = new DocumentPrinter();

            // Load all installed printers in to list box
            if (PrinterSettings.InstalledPrinters.Count > 0)
            {
                foreach (String sPrinterName in PrinterSettings.InstalledPrinters)
                {
                    lbPrinters.Items.Add(sPrinterName);
                }
                lbPrinters.SelectedIndex = 0;
                dp.PrintDocument.PrinterSettings.PrinterName =
                    PrinterSettings.InstalledPrinters[lbPrinters.SelectedIndex];
            }

            // Load page scaling options from enumeration
            lbScaling.DataSource = System.Enum.GetValues(typeof(PageScalingOptions));
            lbScaling.SelectedIndex = 1;
        }

        private void btnSelectFile_Click(object sender, EventArgs e)
        {
            dr = openFileDialog1.ShowDialog();
            if (dr.Equals(DialogResult.OK))
            {
                // Display the selected document in preview
                documentViewer1.LoadDocument(openFileDialog1.FileName);
                documentViewer1.ZoomType = StandardZoomType.FitPage;
                lbScaling.SelectedIndex = 1;
            }
        }

        // Always print first page of a document in a special tray
        void dp_BeginPreparePage(object sender, PrinterBeginPreparePageEventArgs e)
        {
            if (e.DocumentPageNumber == 1)
            {
                dp.PrintDocument.DefaultPageSettings.PaperSource =
                    dp.PrintDocument.PrinterSettings.PaperSources[lbFirstPageTray.SelectedIndex];
            }
            else
            {
                dp.PrintDocument.DefaultPageSettings.PaperSource =
                    dp.PrintDocument.PrinterSettings.PaperSources[lbOtherPagesTray.SelectedIndex];
            }
        }

        // Load trays available in the selected printer
        private void lbPrinters_SelectedIndexChanged(object sender, EventArgs e)
        {
            if (lbPrinters.SelectedIndex >= 0)
            {
                dp.PrintDocument.PrinterSettings.PrinterName =
                    PrinterSettings.InstalledPrinters[lbPrinters.SelectedIndex];
                lbFirstPageTray.Items.Clear();
                lbOtherPagesTray.Items.Clear();
                if (dp.PrintDocument.PrinterSettings.PaperSources.Count > 0)
                {
                    foreach (PaperSource pps in dp.PrintDocument.PrinterSettings.PaperSources)
                    {
                        lbFirstPageTray.Items.Add(pps.SourceName);
                        lbOtherPagesTray.Items.Add(pps.SourceName);
                    }
                    lbFirstPageTray.SelectedIndex = 0;
                    lbOtherPagesTray.SelectedIndex = 0;
                }
                dp.PrintDocument.PrinterSettings.PrinterName =
                    PrinterSettings.InstalledPrinters[lbPrinters.SelectedIndex];
            }
        }

        private void btnPrintFile_Click(object sender, EventArgs e)
        {
            if (PrinterSettings.InstalledPrinters.Count > 0)
            {
                // Print first page in a different tray only when
                // there are more than one tray
                if (dp.PrintDocument.PrinterSettings.PaperSources.Count > 0)
                {
                    dp.BeginPreparePage += dp_BeginPreparePage;
                }
                else
                {
                    dp.BeginPreparePage -= dp_BeginPreparePage;
                }
                if (dr.Equals(DialogResult.OK))
                {
                    documentViewer1.CloseDocument();
                    dp.LoadDocument(openFileDialog1.FileName);
                    dp.Print();
                    dp.CloseDocument();
                    documentViewer1.LoadDocument(openFileDialog1.FileName);
                }
            }
            else
            {
                MessageBox.Show("No printers are installed.");
            }
        }

        private void lbScaling_SelectedIndexChanged(object sender, EventArgs e)
        {
            if (documentViewer1.IsDocumentLoaded)
            {
                switch (lbScaling.SelectedIndex)
                {
                    case 0:
                        documentViewer1.ZoomType = StandardZoomType.ActualSize;
                        dp.PageScaling = PageScalingOptions.Original;
                        break;
                    case 1:
                        documentViewer1.ZoomType = StandardZoomType.FitPage;
                        dp.PageScaling = PageScalingOptions.Fit;
                        break;
                    case 2:
                        documentViewer1.ZoomType = StandardZoomType.FitWidth;
                        dp.PageScaling = PageScalingOptions.ShrinkOverSizedPages;
                        break;
                    default:
                        documentViewer1.ZoomType = StandardZoomType.FitPage;
                        dp.PageScaling = PageScalingOptions.Fit;
                        break;
                }
            }
        }
    }
}
Here is the video demo. (We do not have a printer with multiple trays and we settled for software printers.)
To provide a print preview, as opposed to a document preview, you can use a System.Windows.Forms.PrintPreviewDialog or a System.Windows.Forms.PrintPreviewControl control. The Document property of one of these controls needs to be set to the DocumentPrinter.PrintDocument property and it's done.

printPreviewControl1.Document = dp.PrintDocument;
// or
printPreviewDialog1.Document = dp.PrintDocument;
The installer/setup application for Document Studio .NET copies a source code project for a more elaborate print with preview project. It uses the System.Windows.Forms.PrintPreviewDialog control, which wraps a preview, page navigation and print control. It also provides more UI to set print settings such as DocumentPrinter.PageScaling.
---o0O0o--- | https://gnostice.com/nl_article.asp?id=296&t=Print_select_pages_of_DOCX,_DOC_or_PDF_file_to_a_specific_printer_tray_in_C | CC-MAIN-2020-40 | en | refinedweb |
Occasionally an off hand remark, perhaps even one said with a snide and “challenge” tone to it, will cause me to wonder. And dig, maybe just a tiny bit…
One of those happened a week or three ago.
I was telling a computer engineer friend about the Raspberry Pi and some of the things you could do with it. Probably a bit effusive, but as I’m generally criticized for being too “bland” or “reserved” (thanks Mum…) it’s rare that I’m “effusive” about anything. But in the case of the R.Pi I was happy with it, so may have been…
Don’t remember exactly what I was “effusive” about. I’d gotten Samba and file shares going, a Torrent Server (that’s humming still), and a caching DNS server. I think we’d been talking about his use of Arduino for teaching robotics and I was suggesting the R.Pi might be fun for kids to program too… He was talking about learning Java, as a lot of Arduino projects are heading that way (and kids seem to learn Java easily) or some such. I made some comment about languages on the R.Pi, and that I’d gotten FORTRAN running easily. He then challenged with something like “What, no COBOL?” (in that mocking kind of tone…)
Now both of us learned FORTRAN as our first computer language at the same school and from the same instructor. (Old college roomie… ) He’s an Engineer by degree, training, and career; and has used FORTRAN. So it was a bit of a ‘dig’ to have him slamming me with COBOL… ( I had one COBOL class that I hated… and did some minor maintenance on COBOL programs when out of school… but it’s really a horrid wordy cumbersome language with arcane rules about reading file and writing records or maybe the other way around… in any case, the structures you read are different from those you write, for the “same thing”. )
Well, time passes. And the “slight” fades. But sometimes not the “I wonder…”
So I found this: OpenCOBOL, an open-source COBOL compiler. It translates COBOL into C and compiles the translated code using the native C compiler, so you can build your COBOL programs on various platforms, including Unix/Linux, Mac OS X, and Microsoft Windows.
The compiler is licensed under GNU General Public License.
The run-time library is licensed under GNU Lesser General Public License.
Since it’s a translator to C, and C runs on the R.Pi; it’s pretty much guaranteed this will work on the Raspberry Pi.
Though I don’t know if I can be that cruel to the Raspberry as to force it to do COBOL…
It is already under general Debian:
For Debian, it’s
apt-get install open-cobol
So it ought to be there… Dare I?
The VM with Emulator with…
Unrelated… it seems that early on with hardware scarce and memory more limited, some folks wanted to do a load of development (and likely compiling a lot of Debian… perhaps even Open COBOL) and made a Virtual Machine to do so. This is complicated by the fact that the R.Pi is an ARM chipset and most folks have an Intel computer. So they used an ARM emulator inside the virtual machine…
Now on a high end quad core or more box with a couple of dozen GB of memory, you can still get a heck of a lot of performance boost by using a Virtual Machine (VirtualBox) with an emulator inside of it… though for the rest of us it does look just a tiny bit cruel.…
But then, to install Debian Linux on that emulated chip in that virtual CISC under a Linux OS, inside a core of an Intel CPU, wrapped in MicroSoft Windows and, then, in a fit of fancy, to compile, port, install and run COBOL via a translator to C. Well… I’m sure someone will do it, but I don’t have to watch.
;-)
How to do the VM / qemu stuff:
You Know I Pull Wings Off Flies?
From the “burning ants with lenses and pulling wings off flies department”…
you know I’ve got to “go there”, don’t you?:
pi@dnsTorrent ~ $ sudo apt-get install open-cobol
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  autotools-dev libcob1 libcob1-dev libdb-dev libdb5.1-dev libgmp-dev
  libgmp3-dev libgmpxx4ldbl libltdl-dev libncurses5-dev libtinfo-dev libtool
Suggested packages:
  db5.1-doc libgmp10-doc libmpfr-dev libtool-doc ncurses-doc autoconf
  automaken gcj
The following NEW packages will be installed:
  autotools-dev libcob1 libcob1-dev libdb-dev libdb5.1-dev libgmp-dev
  libgmp3-dev libgmpxx4ldbl libltdl-dev libncurses5-dev libtinfo-dev libtool
  open-cobol
0 upgraded, 13 newly installed, 0 to remove and 250 not upgraded.
Need to get 2,977 kB of archives.
After this operation, 8,298 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 wheezy/main libgmpxx4ldbl armhf 2:5.0.5+dfsg-2 [20.6 kB]
Get:2 wheezy/main autotools-dev all 20120608.1 [73.0 kB]
Get:3 wheezy/main libgmp-dev armhf 2:5.0.5+dfsg-2 [552 kB]
Get:4 wheezy/main libgmp3-dev armhf 2:5.0.5+dfsg-2 [13.7 kB]
Get:5 wheezy/main libltdl-dev armhf 2.4.2-1.1 [203 kB]
Get:6 wheezy/main libtinfo-dev armhf 5.9-10 [89.6 kB]
Get:7 wheezy/main libncurses5-dev armhf 5.9-10 [202 kB]
Get:8 wheezy/main libtool armhf 2.4.2-1.1 [618 kB]
Get:9 wheezy/main libcob1 armhf 1.1-1 [87.6 kB]
Get:10 wheezy/main libcob1-dev armhf 1.1-1 [111 kB]
Get:11 wheezy/main libdb5.1-dev armhf 5.1.29-5 [775 kB]
Get:12 wheezy/main libdb-dev armhf 5.1.6 [2,256 B]
Get:13 wheezy/main open-cobol armhf 1.1-1 [228 kB]
Fetched 2,977 kB in 33s (88.3 kB/s)
Selecting previously unselected package libgmpxx4ldbl:armhf.
(Reading database ... 61520 files and directories currently installed.)
Unpacking libgmpxx4ldbl:armhf (from .../libgmpxx4ldbl_2%3a5.0.5+dfsg-2_armhf.deb) ...
Selecting previously unselected package autotools-dev.
Unpacking autotools-dev (from .../autotools-dev_20120608.1_all.deb) ...
Selecting previously unselected package libgmp-dev:armhf.
Unpacking libgmp-dev:armhf (from .../libgmp-dev_2%3a5.0.5+dfsg-2_armhf.deb) ...
Selecting previously unselected package libgmp3-dev.
Unpacking libgmp3-dev (from .../libgmp3-dev_2%3a5.0.5+dfsg-2_armhf.deb) ...
Selecting previously unselected package libltdl-dev:armhf.
Unpacking libltdl-dev:armhf (from .../libltdl-dev_2.4.2-1.1_armhf.deb) ...
Selecting previously unselected package libtinfo-dev:armhf.
Unpacking libtinfo-dev:armhf (from .../libtinfo-dev_5.9-10_armhf.deb) ...
Selecting previously unselected package libncurses5-dev.
Unpacking libncurses5-dev (from .../libncurses5-dev_5.9-10_armhf.deb) ...
Selecting previously unselected package libtool.
Unpacking libtool (from .../libtool_2.4.2-1.1_armhf.deb) ...
Selecting previously unselected package libcob1.
Unpacking libcob1 (from .../libcob1_1.1-1_armhf.deb) ...
Selecting previously unselected package libcob1-dev.
Unpacking libcob1-dev (from .../libcob1-dev_1.1-1_armhf.deb) ...
Selecting previously unselected package libdb5.1-dev.
Unpacking libdb5.1-dev (from .../libdb5.1-dev_5.1.29-5_armhf.deb) ...
Selecting previously unselected package libdb-dev:armhf.
Unpacking libdb-dev:armhf (from .../libdb-dev_5.1.6_armhf.deb) ...
Selecting previously unselected package open-cobol.
Unpacking open-cobol (from .../open-cobol_1.1-1_armhf.deb) ...
Processing triggers for man-db ...
Processing triggers for install-info ...
Setting up libgmpxx4ldbl:armhf (2:5.0.5+dfsg-2) ...
Setting up autotools-dev (20120608.1) ...
Setting up libgmp-dev:armhf (2:5.0.5+dfsg-2) ...
Setting up libgmp3-dev (2:5.0.5+dfsg-2) ...
Setting up libltdl-dev:armhf (2.4.2-1.1) ...
Setting up libtinfo-dev:armhf (5.9-10) ...
Setting up libncurses5-dev (5.9-10) ...
Setting up libtool (2.4.2-1.1) ...
Setting up libcob1 (1.1-1) ...
Setting up libcob1-dev (1.1-1) ...
Setting up libdb5.1-dev (5.1.29-5) ...
Setting up libdb-dev:armhf (5.1.6) ...
Setting up open-cobol (1.1-1) ...
pi@dnsTorrent ~ $
So now I’ve gone and done it…
I’ve installed COBOL on my Raspberry Pi. It’s no longer just a DNS server / Torrent Server / Samba – M.S. File Server / FORTRAN Climate Codes port station… it’s now also a COBOL workstation. Sigh.
Guess what I’ll be telling the “Old College Roomie” tomorrow?
;-)
I guess next thing I need to do is learn how to remove packages.
;-)
cobol still gives me the creeps. That 60 char limit, the exact placement of text on a line or else. Almost made me drop programming all together.
I knew a Cobol programmer, he seemed weird, was it a coincidence? LOL
I cut my programming teeth on COBOL on ICL2900 mainframes and have good memories. I found that, mixed with a bit of the system programming language to do the bits that would have been cruel and unusual in COBOL. It proved to be surprisingly versatile. At one one time I wrote a finite state machine to parse a large query/display ‘natural’ language and the interpreter that executed the display language on returned query items. The interpreter ran to about 20-30k LOC, and the parser/lexer (didn’t write the lexer) another 10k or so. It proved to be remarkably easy, apart from the maintenance of the state table.
As for position dependent layout, plainly the world has learned nothing, because we have Python.
A long discussion on various ways and degrees of package removal:
So looks like a variety of choices. Maybe later I’ll have time to decide which one to use…
@Petrossa:
I’d forgotten the 60 char fixed format “issue”… (it’s been a while… and IIRC that limit was removed in the commercial version where I was doing minor maintenance).
@BobN:
IMHO, both COBOL and RPG (RPG II ?) programmers are a ‘different sort’. At one extreme are the C / FORTRAN folks. Very math oriented and lots of short terse symbols used (and compact code too). At the other extreme are the COBOL / database guys with a load of words to do anything of interest. More like reading a novel written by someone with a weird dialect problem… Then RPG is like ticking boxes on a check sheet with some modifiers…almost not a language at all, really. Non-procedural and limited.
(Don’t get me started on PL/I – where you can write FORTRAN, or COBOL, or any of a couple of other languages all inside the approved syntax…; or APL which has been called “write only programming”… and takes a special keyboard with obscure symbols on it…; and LISP for folks in love with parenthesis, and whatever happened to the straitjacket of Pascal and Ada?…)
My favorite was Algol 68. Sort of a half way house between Pascal and C. Not as terse and cryptic as C, but not a straitjacket like Pascal. I wonder if there’s any Algol for Debian… Hmmm…
@Steve Cook:
You have my admiration. I was never able to really master COBOL. “Sort of functional” was about all I could do.
I don’t really “speak Python” (though I’ve ported some things written in it). Is it strongly positional? (Last time I did anything with it was about 2009 and that was just ‘compile and install’ some C libraries for it in GIStemp and run the code as provided…) I kind of remember something about position dependent chars at the front of lines… Guess I need a ‘refresher’….
Oh Dear…
pi@dnsTorrent ~ $ sudo apt-get install algol68g
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following NEW packages will be installed:
algol68g
0 upgraded, 1 newly installed, 0 to remove and 250 not upgraded.
Need to get 422 kB of archives.
After this operation, 1,030 kB of additional disk space will be used.
Get:1 wheezy/main algol68g armhf 2.4.1-1 [422 kB]
Fetched 422 kB in 2s (150 kB/s)
Selecting previously unselected package algol68g.
(Reading database … 61758 files and directories currently installed.)
Unpacking algol68g (from …/algol68g_2.4.1-1_armhf.deb) …
Processing triggers for man-db …
Setting up algol68g (2.4.1-1) …
Well… now I’ve got Algol68 on the R.Pi too….
In a way, it’s kind of amazing what all is already built and ready to go. Stuff that was just sitting in Debian already and “just works” after a compile or with a dedicated interest group that did some ‘touch up’ for ARM at some point.
(And likely some bits that ‘need patching’ and will get attended after some damn fool, like me, actually tries using Algol or COBOL on a R.Pi and turns up some odd corner with ‘issues’… and that same ‘interest group’ finds out what we’ve done with their favored software on some platform they don’t have ;-)
While I’m on the subject, some resources for Algol:
Open source Algol:
Algol home page:
Has a pretty good list of links on it too.
Well. Now I’m in a pickle…. While I really liked Algol, do I really want to be writing code in a language from 1968? 45 years back? Well, maybe ;-)
Made my living writing/maintaining COBOL programs for 18 years. Pedestrian, adequate, stable, easy to learn. Remember its origin – military – Grace Hopper. It was an advancement for its time and place. Regard it like classical Latin or Greek.
@Gary:
Never could get motivated to learn Greek either ;-)
(Latin I can puzzle out some bits and I’m ‘working on it’…)
So I guess this means you can get a comfortable COBOL workstation for $35 that likely has more processing power than those mainframes from 18 years back…
In fairness, I’d had a fairly intense FORTRAN immersion, and then an Algol class (in the Engineering department) so was pretty much indoctrinated that THEY were the right way to do things. Then signed up for a single (small units / little time or instruction…) class in COBOL that was mostly “here’s the book, there’s the lab… come ask me if you have questions”… I suspect that with a better ‘presentation’ I’d have done better. It was in some ways frustrating as I had to “unlearn” a lot of the FORTRAN way I’d just learned…
Now, a dozen languages later, I’d likely be less biased toward it; especially in a decent class setting. It did seem “easier” when I had to do some maintenance than when I’d been taking the class… even though it was a dozen years later with no use in between.
@ E.M. Smith 25 May 2013 at 5:47 pm
IIRC Python requires indent of ‘if’ clauses rather than using braces. Seemed like a gimmick that is always going to blow up in your face sooner. Like deciding to only use C braces when forced to :-)
When I started programming in C, I had a hard time coming to grips with just how easy it was to shoot oneself in the foot. I’d been spoiled by COBOL. I was used to only having to deal with debugging logic errors and C introduced me to a whole new class of bug I could create just by a typographical error. Ahhh, those were the days.
I went on a Cobol training course and hated it. I concluded that Cobol was commissioned by a committee too dumb ever to be able to do any programming themselves.
I liked Algol, but PL1 was my favourite.
Steve Crook says: 25 May 2013 at 4:41 pm
“I cut my programming teeth on COBOL on ICL2900 mainframes…”
I worked on the 2903, 2910, 2920 and 2930 machines at West Gorton and Kidsgrove. I worked with Ed Mack, David Dace and Charlie Portman. In those days if you used anything other than COBOL you could get fired. At ICL it was hard to know who you worked for thanks to “Matrix Management” that caused confusion at all levels!
My final job at ICL was as project manager for the ME29 which sold quite well. Later while I was working at STC, Kenneth Corfield decided to acquire ICL. Later STC got gobbled up by Nortel and then Fujitsu. I wonder if anything is left other than the pension fund that coughs up a few dollars each year.
Gary,
You have to love the inimitable Grace Hopper. I use this video to help people remember that electrical signals travel roughly one foot in a nano-second:
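Her demo is easy to sanity-check with a few lines of code (a quick sketch; this is the vacuum-light upper bound, as signals in real wire propagate at roughly 60–90% of c):

```java
// Quick check of the "one foot per nanosecond" rule of thumb:
// how far light in vacuum travels in one nanosecond.
public class NanosecondFoot {
    public static void main(String[] args) {
        double c = 299_792_458.0;        // speed of light, m/s
        double meters = c * 1e-9;        // distance covered per nanosecond
        double inches = meters / 0.0254; // convert to inches
        System.out.printf("%.3f m = %.1f in%n", meters, inches);
    }
}
```

That works out to about 11.8 inches per nanosecond, which is the length of the wires Hopper famously handed out.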
Or, you could run it on cp/m and get sharp on ddt and pipping.
The art is of course writing the compilers, emulators, transpilers and VM’s in such a way that you lose as little power as possible when crossing each platform/language border.
“He was talking about learning Java, as a lot of Arduino projects are heading that way (and kids seem to learn Java easily) or some such.”
Java is on its way out IMHO. Javascript (a completely different language) is getting more interesting every day. Javascript is very easy to learn as well, has a semantic as powerful as Scheme or Python, and modern JIT compilers give it an edge over Python.
I found this helpful:
As the type system is based on Duck typing like Python, it is way easier to learn than the boxing/unboxing/container/generics mess of Java. I disliked Java from the start due to the lack of templates; which are necessary in statically typed OO languages. Only much much later did they bolt on their generics. As Javascript is not statically typed, it never had that problem but suffered instead from slowness. Which has been rectified since about 2008 by Google and Apple’s JIT compilers.
Perhaps it’s Java Script he was talking about for the kids. Frankly, I didn’t realize there was that much difference between them so have not kept them “tidy” in different buckets. Classing it all as “Java*” … So I guess I need to start keeping two buckets ;-)
Not long after the IBM System/360 appeared I worked at a small company that had an IBM 1401 installation and was upgrading to a 360. Management had the delusion, common at the time, that COBOL was the cure for all data processing problems. Most of the programmers thought COBOL was bad, evil, weak, inefficient, etc. I took a couple programs that had been written in 360 assembly language, and rewrote them in COBOL. It took me less time to debug them than the BAL programmers spent. That did not change any opinions. But my object programs, including all the linked in library routines, were smaller than the BAL programs, much smaller. That does not say anything about the quality of the COBOL compiler. The BAL programmers just did not understand the architecture of the 360. We could not shrink their code by tweaking it; the only way was to replace almost all of it. That made COBOL less unacceptable.

The 1401 treated a blank like a zero in many circumstances. New programs on the 360 using files produced on the 1401 sometimes failed because the 360 knew the difference when in native mode. I wrote an exception handler in COBOL to fix the data. That took away more of the resistance. COBOL is not a good choice for an exception handler, but it is a fine choice for removing doubt about its use.

Related to code size, years later there was a debate about implementation languages at DEC. One survey of developers found that of those that had not used PASCAL or BLISS, twice as many developers thought PASCAL was superior. I worked in the group that developed BLISS compilers. We wondered how good the code was, compared to assembly language. We finally found a module that one developer had written in BLISS for a component of VMS. It was replaced by the assembly language equivalent, written by an ace, one of the original VMS team. The BLISS code was 2% larger. Several years later we were asked to put together a presentation about the same topic. By then, the difference was 1%.
I have also observed many times that RPG is a superb language for the many commercial DP applications where it is widely used. This note is not to support a best language or complain about a bad one, but just to observe that many of our strongly held opinions should not be so strongly held.
@Chuck Bradley:
Well, about the only “strongly held” opinion I have on computer languages is that it’s better to use a language you know well, than the newest trendy language you don’t know well…
I’ve sometimes “taken rocks” for programming things in some old / odd language or another rather than the “trendy language du jour”… but I’ve also managed enough projects to know that a really good and experienced programmer can make just about any language do just about anything… and knowing where the potholes and deadfalls are in a given language is more important than some arcane feature you have never used in the new language…
The only exception being that I’ve seen many “Object Oriented” projects suffer massive code bloat / slowness problems. Can’t say if it is a structural thing to the languages, inexperience of the programmers, coincidence due to O.O. taking off just as everyone got lazy about memory (“hardware is cheap” ethic taking over) or what. I’ve also seen a lot of damn fast programs written in things like FORTRAN, ALGOL, C, etc….
Though frankly, the one thing that annoys me most is the tendency to a new language of preference every 5 years or so. I’ve managed to watch a half dozen come and go… Perl and Python, once trendy, now being shoved aside by the Java Javascript trend (that’s already a bit old itself…) I’ve mostly just “given up” on trying to stay “trendy” as it’s just not worth the workload. (I have done maintenance on Perl and Python without too much study, and since most of what I do is “port and fix”, getting proficient at “write from scratch” isn’t really needed.)
In the end, I find myself mostly using FORTRAN these days (as what I’ve been doing for the last 4 years was written in it) along with various scripting languages. I’ve written C, and could do so again, but it’s been a while. (Still read it fairly well though…) And while folks tout things like R and Python and Java – I’ve just not felt motivated enough to bother. (Though don’t mind at all if someone on one of my teams wanted to use the new trendy things… I’ve generally figured “let folks use the tool they want”… and it’s worked well.)
At the end of the day, I mostly like relatively straight forward languages.
I just re-read the “book” on Algol at that algol home site. I was reminded of some of the “quirks” that had bothered me in the past… and why I’d not kept up using it. Nothing wrong with it, and it was my favorite language once… But now, looking back on it after C and others, it is just a bit more “lumpy” than I’d remembered…
So maybe some of the “negatives” remembered about COBOL are from that same mold as the “positives” I was remembering about ALGOL. Both a bit “colored by time” ;-)
Ah, well. Someday the Perfect Programming Language will be developed ;-)
EM
C is pretty straightforward, isn’t it? I like it a lot for the freedom it gives me, and the ease of inline assembly. C++ has its advantages too, but admittedly a hello world application can easily be a megabyte if you’re not careful.
Python, Java, C#, .NET etc. I can’t stand, for exactly the reason why I like C. They are black box Lego systems. You can do what they allow you to do and that’s it.
But market forces…. nowadays you can hardly find a professional programmer that can use anything else but MS VS, which should be outlawed for creating the most unreadable code known to man.
E.M.Smith says:
26 May 2013 at 10:09 pm
“Perhaps it’s Java Script he was talking about for the kids. Frankly, I didn’t realize there was that much difference between them so have not kept them “tidy” in different buckets. Classing it all as “Java*” … So I guess I need to start keeping two buckets ;-)”
Netscape invented Javascript and chose the name to take a ride on the Java hype. Entirely different language.
Petrossa says:
27 May 2013 at 9:08 am
“Python, java,c#,.net etc. i can’t stand exactly for the reason why i like C. They are black box lego systems. You can do what they allow you to do and that’s it.”
Common industry practice these days is to write the engine in C/C++, and expose some functions and objects to an embedded Python or Javascript/ECMAscript/Ruby/Lua interpreter. This gives you the best of both worlds – ability to program on hardware level with C/C++ , and rapid prototyping / changing application logic with the scripting language.
As large scale C++ systems involving lots of libraries can still take on the order of 10 minutes upwards for a complete recompile, even using a monster PC, scriptability has its advantages.
To get a C++ / Python hybrid to work, check out SWIG; and google for “embedding and extending Python”.
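The way such embeddings usually expose C functions to the script layer is a registration table of name/function pairs (the same pattern Lua's `luaL_Reg` arrays and Python's method tables follow). A stripped-down sketch of just that mechanism, with made-up function names, might look like:

```c
#include <assert.h>
#include <string.h>

/* The "engine" side: plain C functions we want a script to reach.
 * Names and signatures here are illustrative, not from any real API. */
typedef double (*engine_fn)(double);

static double engine_square(double x) { return x * x; }
static double engine_negate(double x) { return -x; }

struct binding { const char *name; engine_fn fn; };

/* The export table an interpreter would be handed at startup. */
static const struct binding exports[] = {
    { "square", engine_square },
    { "negate", engine_negate },
};

/* What the script layer does when a script calls, say, square(7). */
double dispatch(const char *name, double arg)
{
    for (size_t i = 0; i < sizeof exports / sizeof exports[0]; i++)
        if (strcmp(exports[i].name, name) == 0)
            return exports[i].fn(arg);
    return 0.0; /* unknown name; a real binding layer would raise a script error */
}
```

A real embedding (SWIG-generated or hand-written) adds argument marshalling on top of this table, but the name-to-function lookup is the core of it.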
Tnx DirkH. Shows how much I am out of the loop already. Luckily I only program for fun; I retired at 45. I amuse myself with racking up my home automation system with the bells, whistles and the kitchen sink. Just installed a finicky warning system to count the number of ice creams my wife eats, which sets off a blue police-type revolving lamp at a certain number.
@Petrossa:
My only real complaint about C is that it presumes a style of I/O file layout (i.e. not fixed format), so using things that are fixed format (like FORTRAN files… or just fixed-column hand-typed files) can be annoying. Yes, it does let you do it; but a built-in “Fixed FORMAT” type statement would cut down the typing needed… (OTOH, doing variable format layout in FORTRAN is as much a PITA going from C to FORTRAN… the two just don’t have the same idea about how files ought to be laid out… Again, yes, you can do it; just how much “workaround” vs “built in easy” is the issue…)
As someone who tries to get as much done with as little typing as possible, C is very nice… even if it does take a few lines to do fixed format file reads… So it avoids things like typing out the word “MULTIPLY” as in COBOL, and you don’t need to do BEGIN and END when {} would do. Yet, by the time I’ve opened a file, declared all the structures and variables, fread data into a struct and then got set up to use it… Well, in FORTRAN you have enough implied variables and easy file open, and it’s just FORMAT and READ variable list, then some math and WRITE (often with the same FORMAT). Fast. Simple. Obvious. (Almost as easy in Algol, but then again, our Algol at school had a FORTRAN influenced I/O package…)
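The kind of fixed-column read being complained about can certainly be done in C, it just takes a helper per record layout instead of one FORMAT line. A sketch, with an entirely hypothetical layout (cols 1-4 station id, 5-8 year, 9-14 temperature x 100):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical fixed-format record, FORTRAN-card style:
 * cols 1-4 station id, cols 5-8 year, cols 9-14 temp * 100. */
struct record {
    char id[5];
    int  year;
    int  temp_centi;
};

/* Parse one fixed-width line; returns 0 on success, -1 on a short
 * or malformed line. The %4d / %6d widths stand in for FORMAT. */
int parse_record(const char *line, struct record *r)
{
    if (strlen(line) < 14)
        return -1;
    memcpy(r->id, line, 4);
    r->id[4] = '\0';
    if (sscanf(line + 4, "%4d%6d", &r->year, &r->temp_centi) != 2)
        return -1;
    return 0;
}
```

Which is exactly the point made above: it works fine, but every layout costs you a struct and a parser where FORTRAN costs you one FORMAT statement.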
For reasons beyond my ken, many language designers have had a chip on their shoulder about Fixed Format files and seem to have designed their languages to make them painful to use. OK, not much of an issue for ‘built new’ use; but if interacting with anything already made, it just becomes a PITA. (One of the most valuable languages / systems I ever used was a DBMS with built in dump / load functions that were easy to configure. You could use it to glue together files of just about any layout. It used a “dump to fixed / load from fixed” as a kind of lingua franca for many interface discontinuities. Made life so easy…)
Other than that, it doesn’t seem to get in my way, nor force me to type books, nor enforce some bizarre ideas about what is “proper” and ‘evil’… nor leave out key facilities due to some bias of the creators… nor…
Oh Well… Why we have so many computer languages, I guess. Just a large tool box so the whole world need not look like a nail… ;-)
I do find the “starting arrays at zero” a bit of a bother too. Algol lets you start and end arrays from any point, as you like it. Yes, the compiler has more work to do. So? The number of “off by one” errors I’ve seen from folks coping with “array item one is zero” or “array item 8 is number 7” just really wants a bit more flexibility there… but you get used to it… find ways to work around it. Eventually it gets built into your brain and you think it is “normal” and everything else is wrong and don’t notice that the ‘work around’ isn’t just how things ought to be done…
There aren’t that many times you want an array to run from ” -20 to +20 “, but when you need it, it can be fairly convenient and make for obvious clear code.
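For what it's worth, the Algol-style bounds can be faked in C with a biased pointer; a sketch using the -20 to +20 range mentioned above (the biased pointer always lands inside the real array, so indexing it stays well-defined):

```c
#include <assert.h>

/* Emulate an Algol-style array running -20..+20 in C:
 * allocate 41 slots, then keep a pointer biased by +20 so that
 * histogram[-20] aliases storage[0] and histogram[20] aliases storage[40]. */
enum { LO = -20, HI = 20 };

static int storage[HI - LO + 1];
static int *histogram = storage - LO;   /* i.e. storage + 20 */
```

You get used to 0-based arrays, as the comment says, but this idiom is handy when the problem domain really is symmetric about zero.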
Oh Well…
Golly… the things people do… and the things you can find…
From that Algol “home page” via a link or two, I ended up at a site that preserves old software (even Algol 60!) and the run environment, such as the ICL 1900
A mainframe of years gone by…
Yes, 6 bit characters and 24 bit word size… “Things were different then” ;-)
So why mention this? Well…. Say you wanted your own ICL running “George”…
Yes, in no time at all you can be running Algol 68 on an emulated ICL mainframe on your $35 Raspberry Pi….
Isn’t technology wonderful?….
Update
Here’s some more…
Making an IBM “Mainframe” on your R.Pi… running VM…
So, just how much do I want to be reminded of my VM / IBM days, or to have a Vax Cluster in my pocket?…
I remember Grace Hopper for her useful quote for dealing with BureauRats “It’s easier to ask forgiveness than it is to get permission.” (So very true)
And of course her ‘BUGS’ so even those of us who are computer illiterate know who Grace Hopper is.
@Gail:
I always liked Grace Hopper. And Ada Lovelace… When teaching “intro to computers” I’d bring them both into the discussion and the women in the class would suddenly wake up ;-) I’d also use knitting books as an example of a programming language (complete with subroutines!) and point out that punched cards came from their earlier use in looms and weaving. About then the guys would be giving me that Oh No look but the women in class were realizing that computing was not a “Guy Thing”… The early history of computing was strongly influenced by some key women, and Grace was one of them.
@All:
More ways to do odd things on your R.Pi… Want a Commodore Amiga?
I’ve seen another emulator under Linux that is near perfect, so only a matter of time for the R.Pi to be that good.
How about some other smaller nostalgia?
How about all those old DOS programs in your attic or garage?
Just amazing….
Not so much that it is being done, just that it is being done so fast. At this point, the history of computing from DOS and Amiga through BSD / Unix / Linux and even Vax Clusters and VM Mainframes, and more, all on a dinky little board…
God I’d love to be assigned a “History Of Computing” class to teach… I’d have a half dozen of these set up with terminals showing what they could do, and a picture next to each one showing the Real Deal (and how big they were ;-)
I’m starting to wonder if there’s anything folks won’t do with this card ;-)
I’m running out of systems I’ve used to emulate. A Cray X-MP-48 and some Macintosh machines are about it… Oh, and a Lisa once. There were a couple of misc mini-boxes, but they were mostly running some flavor of Unix (like SunOS, a BSD port, and a couple of others). Then there was that HP3000 with the HP operating system … But it’s just not that big a deal for them. (Though a “Toy” X-MP-48 would be fun.. “MP” is multiprocessor. 4 CPUs at 100 M-Flops each, and 8 Meg-words, or 64 MBytes of memory… ought to be able to do that with 4 x R.Pi boards with power left over… Maybe I could even make a little C shaped frame to hold it ;-) No idea where I’d get a copy of the OS… or applications…
I hit “post” too soon:
So add an Acorn Archimedes to the list..
I think I’m gonna need a bunch more SD cards ;-)
UPDATE:
And a virtual Mac:
it then goes into a ‘fix’ that ought to make it go…
and the Apple IIgs:
Then there is this Java based PC emulator; so one can likely run Windows on it…
So that’s about a years worth of projects just getting one of each image up, configured, and running some cool old applications…
Another UPDATE
Looks like there’s fpc, the Free Pascal Compiler, too, along with something called Lazarus that is called “Object Oriented Pascal”…
Not sure what an “Object Oriented Pascal” would be, but I presume someone who liked Pascal was feeling left out of the O.O. Fad and decided to glue it on…
This page talks about building and using it including a build from sources, and has a neat Mandelbrot program including sources:
though it looks like it pushes things close to the performance edge:
Mandelbrot code: (All of 128 kB…)
So far I’ve resisted putting Pascal, or anything O.O., on the Raspberry Pi… but I like Mandelbrots… though at 2 minutes to render one I think I can resist a bit longer…
Though further down…
It looks like there’s a lot of room for improvement and it’s not the R.Pi hardware that’s the issue:
Well… as a cross compiler it might be more interesting too… Gee, cross compilers… a whole other area I’ve not investigated yet… ;-)
“Not sure what an “Object Oriented Pascal” would be, but I presume someone who liked Pascal was feeling left out of the O.O. Fad and decided to glue it on…”
Don’t remember Turbo Pascal? Anders Hejlsberg and two other Danes developed it and it got marketed by Philippe Kahn in California under the company name Borland. Turbo Pascal developed OO capabilities, basically emulating the C++ work of Stroustrup but with Pascal as the base language. I bought Turbo Pascal 6.0, I think in 1991, and it was a very good OO dev environment on a DOS computer. It later developed into Delphi, adding the Visual Basic form-based GUI development; Hejlsberg later still went to Microsoft and developed the .NET family of languages, and lately has developed a superset of Javascript called TypeScript (it adds optional static typing to JavaScript).
Object Oriented Pascal is probably just Turbo Pascal without the company trademark…
E.M.Smith says (3rd Update) 28 May 2013 at 2:51 am :
Um, no. Seems to me that the “o.o. fad” was a mutation of modular coding, which has been practiced for decades by conscientious programmers. Even in FORTRAN, albeit supported more by personal discipline than by its characteristic separate compilation.

As best I’ve been able to figure it, object orientation boils down to a combination of modularity (including visibility and inheritance rules) plus an abstraction for dynamic-storage allocation (to support recursion, reentrancy, or threading). Or have I boiled away anything significant?

It was already available in more concrete form in PL/I (no later than F-level ver. 5, ca. 1970), by using pointer variables as surrogates for abstract “objects” that were dynamically allocated using the controlled storage class. But I digress, and you did insist that we “[d]on’t get [you] started on PL/I”.

Any Pascal described as “o.o.” is likely a rethinking of various modular-Pascal extensions or related language designs, notably Niklaus Wirth’s own Modula (1977). Alas, Wirth just couldn’t let go of some of his math-theoretically ‘sufficient’ (some might grumble “puritanically restrictive”) biases.

So a more practically useful inspiration would’ve been UCSD Pascal, which was one of the original 2 O.S./development environments released for the IBM PC. It was ported by U.C. San Diego from their original research project native to LSI-11 systems (late 1970s), and commercialized as the “p-System”, as marketed by SofTech Microsystems. Alas, the p-System Pascal produced some variation on Wirth’s p-Code, and ran only on an interpreter with a non-DOS file system, and cost $100s extra; PC-DOS was $free and its compilers produced native 8086 code. Guess which one a professional Pascal enthusiast chose? And which one The Free Market chose?

According to About Lazarus, the name properly applies only to “the class libraries for Free Pascal that emulate Delphi.”
Ooops.
Upon further reminiscing, the original PC-DOS (ver. 1) was not provided at no extra charge (see confirming source below), back when the PC’s only direct-access storage was a 5.25-in. floppy drive.
So among programming-language products for it, only the BASIC interpreter came free. Compared to Pascal, it seemed nearly useless to me, but after all, the language had been named the “Beginners’ All-purpose Symbolic Instruction Code” (1965). I have no idea how much resemblance the language from Dartmouth still bears to recent versions of MS Visual BASIC.
I recall the original Microsoft Pascal compiler being widely criticized. Perhaps its $300 cost created overly high expectations. Wikipedia confirms my vague recollection that the original Turbo Pascal cost $49.99. Early on, it was sometimes sold personally by Philippe Kahn himself, from a big box he’d carry around to computer clubs.
And to wrap up my corrections, Wikipedia pointed out that CP/M-86 was a 3rd environment offered for the original IBM PC, altho’ at an unappealing $240, compared to $40 for PC-DOS.
Sigh. I’ve had more than 30 years for these details to fade from memory. And it’s been a long time since I’ve seen my original IBM-logo sales receipt anywhere.
@compuGator:
Well, the O.O. folks I’ve talked with have vociferously asserted that it isn’t just modularity or fancy subroutines. (It looks like it to me…) So since it is their turf, I have to take them at their word on it. They seem to hang a lot on ‘inheritance’ that just looks like a wrapper subroutine call to me. But Lord knows I’m not qualified to judge.
I’ve managed a product development to production rollout that was written in C++ and participated in the code reviews; but never did catch the fever for it.
I’ve written some Pascal. Yeah, nice language. Pure. Straitjacket…
FWIW, I’ve also been employed writing a cost accounting system in H.P. Business Basic. Which I described by calling it “BASIC written by a frustrated Pascal compiler guy” ;-) (It has “BEGIN / END” pairs, variables with long names, functions and subroutines, and your choice of interpreting or compilation, among other “minor enhancements” ;-) So you can turn BASIC into a real language if you graft enough Pascal into it…
So original Pascal did not have inheritance (which seems the critical thing to O.O. folks in defining what makes it different – based only on the fact that THAT is the thing they all bring up and present first and with the most vigor). O.O. Pascal adds the “necessary traits”, but like I said, I can’t say what all they are. (To me it always just looks like regular old reusable modular subroutines wrapped up in fancy language and a painful syntax for how you apply them – oh, and nobody ever knows what all is in the library so ends up rewriting lots of the same Objects anyway…)
It’s on my list of “Someday Things” to actually learn an O.O. language and use it. But every time I’ve tried it’s just seemed way more trouble than it was worth… I’m sure I’m missing something, just not sure it’s worth trying to catch it ;-)
From the wiki:
Which, after a long list of largely self referential words that try to define it as different, ends with a sentence that basically says “you can do the same thing with subroutines and functions” as part of modular programming… but then says, in effect, “but that’s not object oriented programming as it is something new”…
I donno… it just looks like fancy function calls with an odd syntax wrapped up in self aggrandizing new jargon to me. Yeah, it works. But every project I’ve managed with it, or seen done in it, ends up with significant code bloat and efficiency issues compared to doing it with procedural orientation (though nobody cares as “hardware is cheap”…)
The wiki goes on to try ever more to distinguish O.O. from Modular:
Maybe my “block” is just that I’ve always seen data and process as strongly co-dependent. Just don’t see how you can isolate the data structure questions from the function / process questions. They are inherently bound to each other.
Interesting sidebar:
Looks like I worked for the folks who created Object Pascal when they were creating it!
I was at Apple then, knew those folks, and my group reported to Larry Tesler. Which means they may well have been using my equipment in their development work (as we supported the Engineering computer needs / shop…) I remember it as MacApp, and I remember the transition from Pascal to C++; but had not made the connection to that history as “Object Pascal”… I suspect they called it Clascal (that I vaguely recognize) or something else.
I also remember the “Think Pascal” connection.
Oddly, my group was instrumental in the move to the PowerPC chip. (It’s a long story… but some of our work ended up in the PowerPC chip as the IBM chip was morphed into it. We introduced the RS6000? workstation to the guys in Engineering who then blended the designs).
So this means I was, in some small way, part of both making Object Pascal, and then the move to C++ at Apple. Golly.. Yeah, it was the equivalent of “electronic janitor” mixed with “match maker”, but still… My group, my organization, we were all in staff meetings together… ATG, Advanced Technology Group. (Most of what my group did was run the Cray, but we also did all the infrastructure like networks and email and VAX Unix boxes and more. Lots of plastic simulation and a load of other stuff too. Pretty much all the data archives and backups too.) We were not part of corporate I.T. but a dedicated high response high tech group inside Engineering for the exclusive use of Engineers.
Well… guess I ought to learn to use Object Pascal ;-)
Why I find O.O. languages a bit of a pain. From that wiki page on Object Pascal. Here is “Hello World” in plain old Pascal, vs Object Pascal.
Regular Pascal
Pretty darned simple and clear, eh? Declare the program name, then do the deed. Not a lot of overhead or complexity.
Object Pascal
First off, there are FIVE different dialects of Object Pascal, with mutually incompatible syntax. So you must choose one. I’m going to take one from the middle that’s about average length, then also show the longest one so you can compare the two.
Delphi and Free Pascal
Oxygene Object Pascal
Maybe it gets better in really really large programs, where the overhead of all this can be amortized over something that “gives back” enough to make up for it. Maybe not…
But at first blush it looks like a lot of indirection getting in the way of a simple goal state.
EM
Embarcadero’s RADStudio XE has a delphi/pascal and a c++ compiler. I’m totally in love with it, despite it creating the hugest exe’s you’ll ever see. Very nice to mess about with. Highly recommended
…..”I’d also use knitting books as an example of a programming language (complete with subroutines!) and point out that punched cards came from their earlier use in looms and weaving….”
FWIW, the second company I worked for was a family owned business who started out in 1902 making ‘lace’ for carriages – think Fisher Body. (Lace = narrow fabric, sometimes like fake fur) The first hosiery and lace made on looms was in the mid 1700s – History of Machine-wrought Hosiery and Lace Manufactures
The best loom in the company was an old antique made of walnut. It was used to make the fake fur wound onto a tube for Xerox copy machines. The unique quality of this machine was it wove both sides of the lace so the lace was straight and true without any camber. Most machines, including those made today, knit one side and weave the other, so the lace wants to bend in a circle or spiral. We had engineers over from Switzerland trying to figure out how to reproduce the workings of that old loom. They failed! This loom made a variety of laces thanks to the punch cards used to program it.
If you own a car or house you have parts made by the company. All those fuzzy channels your car window rolls up into or the fuzzy weatherstripping on doors and windows.
The company now seems to be out of the lace making business. I sure hope they kept that loom or donated it to a museum. (The Carriage Museum of Stony Brook was aware of the loom since I talked to the curator in the late 1980s)
…
I’m not antithetical to it (one of my guys wanted to use C++ to make a product. It worked well and he was productive in it) I just “don’t get it”… Maybe my “needs” are all too small to reach the benefit point so I don’t see it and they all look like a giant “Hello World” to me ;-)
@Gail Combs:
I’d wager the modern engineers could have duplicated it, but were not willing to work in walnut…
Each wood has a particular character. One of them is surface friction. Some are self lubricating, others with just the amount of ‘drag’ wanted on a bit of thread. The degree of ‘flex’ (and that it is slightly different in different directions…) and more.
I’d guess they were trying to use modern materials and it “just wouldn’t work” since some fine point of timing or thread position was not right with a non-wood friction surface and flex. Lots of folks have lost touch with the “feel” of the surfaces of structural materials. It really does matter, especially in things like handling fabrics and threads.
That “feel” is what we sense when picking up fine points of the surface. From roughness to even Van der Waals forces. So take a gecko. They can climb glass due to Van der Waals forces from their pads. I’d wager that making fine lace and ‘fuzz’ gets into Van der Waals land too. How different is walnut from, say, aluminum or plastic on electrostatic and Van der Waals forces? Sure, with enough physics and math you can model it and ‘work it out’… Or just take a weaver with a lifetime of ‘feel’ for the materials and pair them with a craftsman with a lifetime of ‘feel’ for the woods and all… Perhaps in one person. A thousand and 1 little lessons learned over half a century. They make something that works, but has no analytical map for others to follow…
So we could duplicate it, but not replace it… if we were willing to just duplicate it exactly with attention to every little detail of the materials…
It’s “craftsmanship”, and it shows up in all sorts of places, and it matters. Much of our “modern” method of engineered things has lost touch with the craft aspects. We gain a lot of low cost and direct manufacture, like “stack” designs that are easy for robots to assemble. But we lose the subtle bits. One example. A western smith was in the tropics and marveling at their machete blades. He could make a fine blade, but his had a more sharp transition from the hardened edge to the annealed spine. He found a local smith to work with..
The “secret” was a particular gourd that grew in that area. Cut in half, it matched the outline of the machete blade. Red hot from the forge, blades were sunk into the gourd half with a chop, putting JUST the edge into the melon. The interaction of carbon and nitrogen from the melon, with cooling from the damp (but not bucket-of-water fast cooling…) with residual spine heat keeping the center of the edge slightly annealed … It all produced a very hard surface layer (carbides and nitrides and rapid quench) with a strong but non-brittle core to the edge area, with a supple spine.
Now, if you never saw the process of the melon ‘thunk’, how would you back engineer that? Does your typical western smith (or worse, Mechanical Engineer from a college) think in terms of “what melons and gourds do to metals”? Are melons even in the mind of a “modern” designer? Yeah, once explained it “makes sense”. And yeah, we can do the same things with other methods. But none as simple, cheap, and effective… (Urine can be used to nitride a surface too… think you find “urine treatment” in a college text book?… Maybe urea salt treatment, but it’s a lot easier to “find” urine than urea salts…)
So there’s a long list of such bits of ‘craft’ that only exist in the minds of the craftsmen. Trying to backfigure them without ‘living the life’ is very hard.
One other example: I was working in a hospital. The X-ray guy was ‘an old grey beard’ who had been doing it for a very long time, from before it was taught. I’d guess he started about 1930? (as this was the ’70s and he was about to retire). One of his proudest bits of “art”? A Doc had a patient with an ‘issue’ at the top of the spine. Wanted an X-ray, but knew the jaw was in the way (for some reason needed a ‘front to rear’ view). Had no idea how to do it, but asked the X-ray guy to do what he could and give them “something” even with the shadows of teeth and all to cope with. He smiles. He has an idea… He knows his craft. Produces an X-ray image from the front, head in normal position, clearly showing the spine in normal alignment with the base of the skull… and no jaw in the image at all.
Needless to say, the Doc was thrilled (though puzzled). How? The X-ray guy put the patient in a brace so they could not move their head, then did a very long exposure as they moved their jaw slowly from wide open to closed to open to closed… The jaw and teeth just became ‘background’ shading bias and with the right exposure, that ‘white fuzz’ washes out. The jaw was never in any one place long enough to show as an image.
Yes, now we would just chuck them in an MRI machine and that kind of ‘craft’ is lost. Does it really matter? Probably not so much, but some. Like how to properly put on a southern hoop skirt (all those gussets, stays, petticoats,…) and the best way to polish spats with plant products; they had a time, but most folks will never miss them. (Though real farm hams beat the commercial stuff… and are worth the effort to find someone who still has a smoke house for their own hams…)
So I’d wager that a good local carpenter and weaver could reproduce that loom. Just not in “modern” materials…
EM
O.O. is very handy. As an example take string handling. In C you have to use pointer addition, risky memcpys, etc. In C++ you have a string class which contains a multitude of well-behaved functions to handle your strings. And adding your own is as easy as pie.
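To make the comparison concrete, here is a minimal sketch of what such a string class has to encapsulate when done by hand in C: buffer, length, capacity and growth all managed in one place rather than ad-hoc pointer arithmetic at every call site (error handling on malloc/realloc omitted for brevity):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* A hand-rolled "string class": the invariant (buf is always
 * NUL-terminated, len <= cap) lives behind these two functions. */
struct str {
    char  *buf;
    size_t len, cap;
};

void str_init(struct str *s)
{
    s->cap = 16;
    s->len = 0;
    s->buf = malloc(s->cap);
    s->buf[0] = '\0';
}

void str_append(struct str *s, const char *text)
{
    size_t n = strlen(text);
    while (s->len + n + 1 > s->cap) {       /* grow geometrically */
        s->cap *= 2;
        s->buf = realloc(s->buf, s->cap);
    }
    memcpy(s->buf + s->len, text, n + 1);   /* copies the '\0' too */
    s->len += n;
}
```

Every call site in C++ gets this for free from `std::string`; in C you carry it around yourself, which is exactly the encapsulation argument being made.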
Multithreading…
(working code from one of my little apps)
Forward declaration:
class TDownloader : public TThread
{
protected:
    void __fastcall Execute();
public:
    __fastcall TDownloader(bool suspended);
};

__fastcall TDownloader::TDownloader(bool suspended) : TThread(suspended)
{
}
Declaration:
TDownloader *Getfile[MAX_CHANNELS];

Getfile[CurrentChannel] = new TDownloader(true);
if (Getfile[CurrentChannel])
{
    Abort->Visible = true;
    Getfile[CurrentChannel]->FreeOnTerminate = true;
    Getfile[CurrentChannel]->Priority = tpNormal;
    Getfile[CurrentChannel]->Start();
    Stats->HideMessage();
}
Now i can multithread my heart out.
Try writing that in pure C.
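It is writable in pure C with POSIX threads, for the record, but everything the TThread class bundles (startup, ownership, cleanup) has to be done by hand. A minimal sketch, with the actual download stubbed out (the struct and field names are made up for illustration):

```c
#include <assert.h>
#include <pthread.h>

/* Stand-in for the downloader's state; in the real app this
 * would hold the URL, the socket, progress, and so on. */
struct download {
    const char *url;
    int bytes;
};

/* The thread body -- the moral equivalent of TDownloader::Execute(). */
static void *download_thread(void *arg)
{
    struct download *d = arg;
    d->bytes = 42;          /* pretend we fetched something */
    return NULL;
}

/* Spawn one worker and wait for it; returns 0 on success. */
int run_download(struct download *d)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, download_thread, d) != 0)
        return -1;
    return pthread_join(tid, NULL);
}
```

The per-channel array, FreeOnTerminate semantics and priority setting from the C++ version would each need more hand-written plumbing on top of this, which is rather the point being made.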
E.M.Smith says:
31 May 2013 at 3:00 pm
“… ”
In my opinion the key advantage is information encapsulation. I once downloaded a huge, capable 3D editing package; the authors had decided to declare it Open Source. It had been a commercial success 10 years before that and was written in C.
Had they written it in an OO language, they would have naturally created classes for points, vectors, matrices and so on, and declared the implementation details “private” so that code outside the classes’ methods would not be able to directly access the coordinates but would have to use the methods that the classes offer.
like so:
class Vector3D {
private:
    double x; double y; double z;
public:
    void AddVector(const Vector3D& other) { …
Why is this an advantage? Well, in the package I downloaded it was difficult to impossible to find the place where a certain typical 3D operation is encoded; the operations were spread all over the place, specialized, duplicated etc.
Structuring an application into domain specific classes leads naturally to the implementation of methods in the class that they manipulate. You can still violate this but you would have to have a good reason for that.
So you would expect to find matrix multiplication implemented in the matrix class and, ideally, nowhere else.
I gave up on attempts at refactoring the 3D package into a maintainable form, as I had no commercial interest… the reason they gave it into the Open Source domain was probably the same – they had given up on maintaining it; applications with a more modern design had long overtaken them.
At many times in history and for various, mostly technical reasons, C programmers have emulated C++’s object orientation with the following technique:
// pseudo class
typedef struct Vector2D{
double x;
double y;
} Vector2D;
// pseudo method
void Vector2D_AddVector2D(Vector2D* this,Vector2D* other){
this->x += other->x;
this->y += other->y;
}
The trick is to explicitly give the “this” pointer argument as first argument in the “methods” of our “class”. In C++ and other OO languages this parameter is passed implicitly.
C programs where this technique is used are easy to transform into C++ syntax, and are often well structured, as the C programmer already had the ordering principles of OO in mind.
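As an aside, Python bakes this very convention into the language: every method receives the object as an explicit first parameter, conventionally named self, so the C "pseudo method" above maps almost one-to-one. A quick sketch (Python is used purely for illustration here):

```python
class Vector2D:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def add_vector(self, other):   # the receiver is the explicit first argument
        self.x += other.x
        self.y += other.y

a = Vector2D(1.0, 2.0)
b = Vector2D(3.0, 4.0)
a.add_vector(b)                    # sugar for Vector2D.add_vector(a, b)
print(a.x, a.y)                    # 4.0 6.0
```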
Petrossa says:
31 May 2013 at 10:26 am
” Embarcadero’s RADStudio XE has a delphi/pascal and a c++ compiler. ”
These are descendants of Anders Hejlsberg’s Turbo Pascal and Turbo C++ from Borland. Borland sold them to Embarcadero.
E.M.Smith says:
31 May 2013 at 7:00 am
“Why I find O.O. languages a bit of a pain. From that wiki page on Object Pascal. Here is “Hello World” in plain old Pascal, vs Object Pascal. ”
ChiefIO, those are artificial Hello World examples that are intentionally constructed to show off the OO syntax. In all OO languages I use it is still possible to write the simple imperative Hello World version known from C. That includes the object oriented Turbo Pascal. When all you need is stdout, just use a simple printf or writeln or print or whatever it’s called.
DirkH, I know. I was a Borland user since Turbo Basic and watched MS running after Borland all the time to catch up and never succeeding. Then it became CodeGear and now RADStudio.
EM
Encapsulation is sometimes actually a pain in the butt. I had to write a program around an old C library which resulted in this mess:
Petrossa, I looked at your code. What you did there looks like a simple wrapper: wrapping an object oriented calling interface around a non object oriented one. Such wrapping tasks are always repetitive and tedious (whether OO is in play or not), often get automated with custom source code generators, and when you're in a project where it is allowed, you can often compress the effort effectively using the X Macro technique.
I like X macros because once you know the trick you end up with less code to read and to understand. Often I work in projects where they prohibit me from using the C preprocessor because they fear chaos. In those cases I often end up generating the wrapper code with custom Python scripts and give them what I generated.
DirkH The point was because the call to the existing C library needed a static pointer. Since in O.O you can’t get a static pointer to a function outside of the class this whole mess was necessary. i.e. Encapsulation can be a pain in the butt.
Petrossa says:
3 June 2013 at 9:26 am
“DirkH The point was because the call to the existing C library needed a static pointer. Since in O.O you can’t get a static pointer to a function outside of the class this whole mess was necessary.”
Why can’t you get a pointer to a function outside the class?
int Function(int a);
typedef int (*funcptr)(int);
class MyClass{
funcptr m_funcptr;
public:
MyClass(funcptr fp){ m_funcptr = fp;};
int Execute(int a){return (*m_funcptr)(a);};
};
…
MyClass* obj = new MyClass(Function);
int b = obj->Execute(arg);
Maybe I misunderstand you.
Or maybe you wanted to publish the static function pointer to somebody outside the class. But in that case you can just declare the function as static and make it accessible using “public:”.
Or make it accessible via returning a function pointer.
A class can expose its innards when it wants to.
Tnx for thinking with me DirkH. But having tried all that in 2002 and failing, I came up with this. It was some particularity with the library which by now I have long forgotten.
What is Socket?
What is a socket? Let's start from the word's everyday meaning: a socket, as in an electrical outlet.
A Socket is like a telephone socket: it connects the phones at both ends for point-to-point communication so that they can talk. The port is like a hole in the socket; a port cannot be occupied by another process at the same time. Establishing a connection is like inserting a plug into this socket. After creating a Socket instance and starting to listen, this telephone socket is always listening for incoming calls: whoever dials my "IP address and port" gets connected to me.
In fact, Socket is an abstraction layer between the application layer and the transport layer. It abstracts the complex operations of the TCP/IP stack into a few simple interfaces that the application layer calls to implement inter-process communication over the network. Socket originated in UNIX, where, under the idea that everything is a file, inter-process communication happens through a file descriptor. Socket is an implementation of the "open-read/write-close" pattern: the server and the client each maintain a "file". After the connection is established, each side can write content to its file for the other party to read, or read the other party's content, and the file is closed when the communication ends.
In addition, the location of the socket that we often talk about is as follows:
Socket communication process
Socket guarantees communication between different computers, that is, network communication. For a website, the communication model is the communication between the server and the client. A Socket object is established at both ends, and then data is transmitted through the Socket object. Usually the server is in an infinite loop, waiting for a client connection.
A picture is worth a thousand words, here is a connection-oriented TCP timing diagram :
Client process:
The client process is relatively simple: create a socket and connect it to a remote host (note: only TCP has the concept of a "connection"; some sockets, such as UDP, ICMP and ARP, do not), send data, read the response data until the exchange is complete, then close the connection and end the TCP session.
import socket
import sys

if __name__ == '__main__':
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # create the Socket (TCP)
    sock.connect(('127.0.0.1', 8001))  # connect to the server
    while True:
        data = input('Please input data:')
        if not data:
            break
        try:
            sock.sendall(data.encode())  # sockets transmit bytes, so encode the string
        except socket.error as e:
            print('Send Failed...', e)
            sys.exit(0)
        print('Send Successfully')
        res = sock.recv(4096)  # get the server's reply; recvfrom(), recv_into() etc. also work
        print(res)
    sock.close()
sock.sendall(data)
The send() method can also be used here. The difference is that sendall() tries to send all the data before returning and returns None on success, while send() returns the number of bytes actually sent and raises an exception on failure.
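The difference is easy to see with a pair of already-connected sockets. A small sketch (socket.socketpair() avoids needing a real server; it is available on POSIX, and on Windows since Python 3.5):

```python
import socket

a, b = socket.socketpair()      # two connected sockets, handy for local demos
n = a.send(b'hello')            # returns the number of bytes actually sent
print(n)                        # 5 here; in general it may be less than len(data)
result = a.sendall(b'world')    # sends everything or raises; returns None
print(result)                   # None
a.close()
b.close()
```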
Server-side process:
Now let's talk about the server's process. The server first initializes its Socket, creates a streaming socket, binds it to the local address and port, notifies TCP that it is ready to receive connections, calls accept(), and waits for a client to connect. When a client establishes a connection, it sends a data request; the server receives and processes the request, then sends the response data back, and the client reads the data until the exchange is complete. Finally the connection is closed and the interaction ends.
import socket
import sys

if __name__ == '__main__':
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # create the Socket (TCP)
    print('Socket Created')
    try:
        sock.bind(('127.0.0.1', 8001))  # configure the Socket: bind the IP address and port
    except socket.error as e:
        print('Bind Failed...', e)
        sys.exit(0)
    sock.listen(5)  # maximum number of pending connections; clients are served FIFO
    while True:  # poll the Socket state in a loop, waiting for a client
        conn, addr = sock.accept()
        try:
            conn.settimeout(10)  # abort the operation if a request takes more than 10 seconds
            # To handle several connections at once, the block below should run in its own thread
            while True:  # got a connection; loop over the messages it sends
                data = conn.recv(1024)
                print('Get value ' + data.decode(), end='\n\n')
                if not data:
                    print('Exit Server', end='\n\n')
                    break
                conn.sendall(b'OK')  # send the reply
        except socket.timeout:
            # after the connection is made, a timeout is raised if no data arrives in time
            print('Time out')
        conn.close()  # once the per-connection loop exits, the connection can be closed
    sock.close()
conn, addr = sock.accept()
When accept() is called, the Socket enters a waiting state. When a client requests a connection, the method establishes the connection and returns it to the server. accept() returns a tuple (conn, addr) with two elements. The first element, conn, is a new Socket object through which the server must communicate with this client; the second element, addr, is the client's IP address and port.
data = conn.recv(1024)
Next is the processing phase, where the server and client communicate (transmit data) via send() and recv().
The server calls send() to send information to the client as a string; send() returns the number of characters sent.
The server calls recv() to receive information from the client. When calling recv(), the server must specify an integer giving the maximum amount of data the call may receive. recv() blocks while waiting for data and finally returns a string representing the data received. If the amount of data sent exceeds what recv() allows, the data is truncated; the excess is buffered at the receiving end, and future recv() calls continue reading the remaining bytes, removing them from the buffer as they are read (along with any other data the client may have sent since the last recv() call). When the transfer ends, the server calls close() to close the connection.
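The truncation-and-buffering behaviour described above is easy to demonstrate with a local socket pair (a sketch; the exact byte counts hold because all the data is already sitting in the receive buffer):

```python
import socket

a, b = socket.socketpair()
a.sendall(b'abcdefgh')      # 8 bytes now sit in the receive buffer
first = b.recv(4)           # read at most 4 bytes; the excess stays buffered
second = b.recv(4)          # a later call returns the remaining bytes
print(first, second)        # b'abcd' b'efgh'
a.close()
b.close()
```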
Looking at the Socket process from the perspective of a TCP connection:
Socket process of TCP three-way handshake:
- After the server calls socket(), bind() and listen() to complete initialization, it calls accept() and blocks, waiting;
- The client Socket object calls connect() to send a SYN to the server and blocks;
- The server completes the first handshake by sending back SYN and ACK;
- After receiving the server's response, the client returns from connect() and sends an ACK to the server;
- The server Socket object receives the client's third-handshake ACK; at this point the server returns from accept() and the connection is established.
The next step is to send and receive data between the two connected objects.
Socket process of the TCP four-way handshake (connection teardown):
- One end's application process calls close() to actively close the connection and sends a FIN;
- The other end performs a passive close after receiving the FIN, and sends an ACK confirmation;
- Afterwards, the passively closed end's application process also calls close() to close its socket and sends a FIN of its own;
- After receiving this FIN, the actively closing end sends a final ACK back to confirm.
To sum up:
The above code simply demonstrates the use of Socket's basic functions. In fact, no matter how complex a network program is, it will use these same basic functions. The server code above only processes the next client request after it has finished with the current one; such a server has weak processing capability, and in practice a server needs to handle requests concurrently. To achieve concurrency, the server needs to fork a new process or thread to handle each request.
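A minimal sketch of that idea: the same accept() loop as above, but handing each accepted connection to its own thread (a simple echo server, using port 0 so the OS picks a free port):

```python
import socket
import threading

def handle(conn):
    # serve one client; runs in its own thread so accept() can keep going
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)   # echo the request back

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))    # port 0: let the OS pick a free port
server.listen(5)

def serve():
    while True:
        conn, _addr = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

threading.Thread(target=serve, daemon=True).start()

# quick self-test: each client is served by its own handler thread
replies = []
for msg in (b'hello', b'world'):
    with socket.create_connection(server.getsockname()) as c:
        c.sendall(msg)
        replies.append(c.recv(1024))
print(replies)                   # [b'hello', b'world']
```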
It's the ideas that are important to learn.
The Object-Oriented Programming vs Functional Programming debate, in a beginner-friendly nutshell.
You may be thinking, “What the function are these coding paradigms you speak of?” Well, before we dive in, let’s get a brief overview on what OO and FP actually are.
Object-oriented programming
According to Wikipedia, object-oriented programming is a programming paradigm based on the concept of "objects", which contain data in the form of fields (attributes) and code in the form of procedures (methods), and whose methods can access and modify the data fields of the object they belong to, which essentially means altering the 'state' of the object.
Furthermore, in most OOP languages, objects are instances of a class, and are seen as individual entities which interact with each other. These objects mimic the real world (to a certain degree).
Here’s an example in Ruby:
class Dog
attr_accessor :name, :favorite_treat
def initialize(name, favorite_treat)
@name = name
@favorite_treat = favorite_treat
end
def change_favorite_treat(treat)
@favorite_treat = treat
end
end

charlie = Dog.new("Charlie", "bacon")
We are creating a class (almost like a template, if you will) called Dog. We can assume all dogs have a name and a favorite treat, so we ‘initialize’ with a name and favorite treat parameter. We now make a new instance of the dog class, named charlie. Let’s say Charlie is a fickle dog, and his favorite treat has changed from bacon to t-bones.
charlie.change_favorite_treat('t-bone')
charlie.favorite_treat #=> 't-bone'
By using the change_favorite_treat method, we have successfully changed the “state” of Charlie, and now when we access his ‘favorite_treat’ attribute, we get ‘t-bone’.
Functional programming
Put very simply, functional programming is a language that focuses on the computation of pure functions. The keyword there is ‘pure’ — everything revolves around keeping everything ‘pure’. What exactly do we mean by pure?
- There is a complete separation between the data of a program, and the behaviors of a program
- All objects created in functional programming are immutable (once something is created, it can’t be changed)
- Shared state is avoided (objects do not share scope with other objects)
- Adherence to pure functions (explained below)
Pure functions
A pure function is a function where:
- The return value only depends on the input (if you input the same value, you will always return the same value)
- There are no side effects (for example: no network or database calls which could affect the return value)
- They do not alter the data that was passed into them. We only want to describe how the input will be changed (think destructive vs non-destructive)
Here are some examples of an impure function:
function number(num){
num * Math.random()
}function hello(greeting){
console.log(greeting)
}var totalPeople = 10
function totalVotes(votes){
return votes * totalPeople
}
The return value of number is not determined by its input alone, since it multiplies num by a random number. Not pure! With pure functions, we ONLY care about return values. So the function hello isn't pure, because it creates a 'side effect', namely the console logging. The function totalVotes depends on a variable outside of its scope (shared state!), which is a no-no!
Here’s an example of a pure function:
function plusTwo(num){
return num + 2
}
The outcome of plusTwo depends only on the input, num.
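One practical payoff of purity is that results can be cached transparently, since the same input always produces the same output. A sketch in Python (chosen so the caching decorator stays a one-liner; the calls list exists only to show how often the body actually runs):

```python
from functools import lru_cache

calls = []

@lru_cache(maxsize=None)
def plus_two(num):
    calls.append(num)   # instrumentation only; a truly pure version would omit this
    return num + 2

print(plus_two(3))      # 5 -- computed
print(plus_two(3))      # 5 -- served from the cache; the body never reruns
print(calls)            # [3]
```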
OOP vs FP
To highlight how different the approaches are in OOP and FP, I’ve borrowed this example below:
You run a company and you just decided to give all your employees a $10,000.00 raise. How would you tackle this situation programatically?
In OOP:
- Create Employee class which initializes with name and salary, has a change salary instance method
- Create instances of employees
- Use the each method to change salary attribute of employees by +10,000
In FP:
- Create employees array, which is an array of arrays with name and corresponding salary
- Create a change_salary function which returns a copy of a single employee with the salary field updated
- Create a change_salaries function which maps through the employee array and delegates the calculation of the new salary to change_salary
- Use both methods to create a new dataset, named ‘happier employees’
We can see that the FP approach uses pure functions and adheres to immutability (note the use of map instead of each, where map returns a new altered dataset while each alters the attributes/state of the objects). With OOP, we cannot easily identify if the object has had the function called on it unless we start from the beginning and track if this has happened, whereas in FP, the object itself is now a new object with a different name, which makes it considerably easier to know what changes have been made.
(Example and explanation taken from:)
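The FP recipe above can be sketched in a few lines of Python (names and salaries are made up; tuples stand in for immutable employee records):

```python
employees = [('Alice', 50_000), ('Bob', 60_000)]

def change_salary(employee, raise_amount=10_000):
    name, salary = employee
    return (name, salary + raise_amount)        # a new record; the input is untouched

def change_salaries(staff):
    return [change_salary(e) for e in staff]    # map over the data, never mutate it

happier_employees = change_salaries(employees)
print(happier_employees)   # [('Alice', 60000), ('Bob', 70000)]
print(employees)           # [('Alice', 50000), ('Bob', 60000)] -- unchanged
```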
So, what’s the debate about?
Quite obviously, those on team OOP argue that OOP is the better approach to creating programs, while those on team FP argue that FP is better. How so? Team OOP argues that the concepts of inheritance (new objects taking on the attributes/methods of existing objects, letting us reuse more code) and encapsulation (the data and methods related to an object being bound together into independent, protected entities) make it easier to manage and manipulate data. Team FP argues that the separation of data and methods, together with the high level of abstraction, leaves less room for errors.
It seems the general consensus is that OOP and FP are better depending on the situation, so we won’t be hearing about the end of this debate anytime soon.
*Please keep in mind that this was a brief overview of the debate which may omit details, so for a more in-depth look, be sure to look at the debates raging on stack overflow and other coding websites.
Controlling Drawdown
I read the paper "Optimal Portfolio Strategy to Control Maximum Drawdown" and implemented the ideas presented in it in my trading strategy to control the maximum drawdown.
The paper Optimal Portfolio Strategy to Control Maximum Drawdown describes a formula one can use to limit the drawdown of a trading strategy to a desired maximum percentage. The formula not only allows one to control the drawdown but also controls the maximum leverage factor to use while still maintaining the maximum drawdown.
The meanings of the individual variables are described in the paper in detail. The result xt will tell us the leverage factor to use.
I implemented this in my C# code in the following way:
// Reference: Optimal Portfolio Strategy to Control Maximum Drawdown
public class DrawdownController
{
    /// <summary>
    /// Gets or sets the expected future kelly fraction (= Sharpe / Volatility)
    /// </summary>
    public double KellyFraction { get; set; } = 1.0;

    /// <summary>
    /// Gets or sets the maximum allowed drawdown. Range [0, 1].
    /// </summary>
    public double MaxDrawdown { get; set; } = 0.20;

    /// <summary>
    /// Calculates the leverage factor to use to make sure drawdown stays under MaxDrawdown.
    /// </summary>
    /// <param name="currentDrawdown">The current portfolio drawdown.</param>
    /// <returns>The leverage factor to use.</returns>
    public double Calculate(double currentDrawdown)
    {
        var timing = (MaxDrawdown - currentDrawdown) / (1.0 - currentDrawdown);
        var leverage = (KellyFraction /* + 0.5 */) / (1.0 - MaxDrawdown * MaxDrawdown);
        return Math.Max(0.0, leverage * timing);
    }
}
I left out the +0.5 in the formula since I did not completely understand why it was there and I wanted KellyFraction to be the maximum value that could ever come out of the formula.
The KellyFraction is a parameter that needs to be estimated and is just the expected future Sharpe Ratio divided by the expected volatility.
The currentDrawdown parameter is the REDD in the formula and has to be computed from the equity curve in every step.
maxEquity = Math.Max(maxEquity, Portfolio.Equity);
...
var currentDrawdown = 1.0 - Portfolio.Equity / maxEquity;
var leverage = drawdownController.Calculate(currentDrawdown);
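For quick experimentation, here is the same calculation transcribed to Python (a sketch mirroring the C# Calculate method, with the +0.5 term likewise omitted):

```python
def drawdown_leverage(kelly_fraction, max_dd, current_dd):
    # leverage factor from "Optimal Portfolio Strategy to Control Maximum Drawdown"
    timing = (max_dd - current_dd) / (1.0 - current_dd)
    leverage = kelly_fraction / (1.0 - max_dd * max_dd)
    return max(0.0, leverage * timing)

print(drawdown_leverage(1.0, 0.20, 0.00))   # ~0.2083: full risk budget at an equity high
print(drawdown_leverage(1.0, 0.20, 0.10))   # exposure shrinks as drawdown grows
print(drawdown_leverage(1.0, 0.20, 0.20))   # 0.0: trading stops at the drawdown limit
```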
In some cases the drawdown reaches its limit and then it is possible to get stuck there, since then the leverage is very low or even zero. In order to solve this issue i modify the maxEquity variable in my code, so that with time the computed drawdown will be lower and trading starts again.
maxEquity = Portfolio.Equity + (maxEquity - Portfolio.Equity) * 0.9975;
The equity curve of a SPY/TLT minimum variance portfolio with a 15% drawdown limit looks like this:
During the financial crisis the drawdown limit is reached. After 2010 it recovers and then outperforms the S&P 500, since it uses leverage.
Code snippets/cn
This page contains examples and code snippets drawn from user experience and forum discussions. Read through them, then start writing your own scripts...
Contents
Typical InitGui.py file
Typical module file example
 def GetResources(self):
 return {'Pixmap' : 'path_to_an_icon/myicon.png', 'MenuText': 'Short text', 'ToolTip': 'More detailed text'}
FreeCADGui.addCommand('Script_Cmd', ScriptCmd())
Importing a new file format
Adding a line
Adding a polygon
n = []
v = App.Vector(0,0,0)
n.append(v)
v = App.Vector(10,0,0)
n.append(v)
#... repeat for all nodes
# Create a polygon object and set its nodes
p = doc.addObject("Part::Polygon","Polygon")
p.Nodes = n
doc.recompute()
Adding/removing objects in a group
Adding a mesh
Adding an arc or circle
import Part
doc = App.activeDocument()
c = Part.Circle()
c.Radius = 10.0
f = doc.addObject("Part::Feature", "Circle")  # create a document with a circle feature
f.Shape = c.toShape()  # Assign the circle shape to the shape property
doc.recompute()
Accessing and changing the display of an object
Observing mouse events in the 3D viewer via Python
Manipulating the scene graph with Python
Adding and removing objects in the scene graph
Adding custom widgets to the interface
Adding a tab to the combo view
Opening a web page
import WebGui
WebGui.openBrowser("")
Getting the content of a web page
from PyQt4 import QtGui, QtWebKit
a = QtGui.qApp
mw = a.activeWindow()
v = mw.findChild(QtWebKit.QWebFrame)
html = unicode(v.toHtml())
print html
Hello Bash developers!

Configuration Information:
uname output: Linux nashi 3.19.0-61-generic #69~14.04.1-Ubuntu SMP Thu Jun 9 09:09:13 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Machine Type: x86_64-unknown-linux-gnu

Repeat-By:
$ uname
Linux
$ fc -e true uname
Linux
$ echo $?
1
$ # Would have expected 0 here: successful re-invocation

Fix:
This is my first dive into Bash's sources; patch follows and is also attached to this mail. Please let me know if you need anything else. As a longtime Bash user, I'm very happy to contribute to my favorite shell!

--
regards, ingo

From 1cf392a401c67c2f8437f2da459dfcf0f675dc55 Mon Sep 17 00:00:00 2001
From: Ingo Karkat <address@hidden>
Date: Thu, 30 Jun 2016 12:30:59 +0200
Subject: [PATCH 1/1] Exit status of fc -e is the wrong way around

fc_execute_file() delegates to _evalfile(), which only returns the result
of the file's execution if FEVAL_BUILTIN is set (exemplified by
source_file()). If unset, an indication of whether the file exists is
returned instead (exemplified by maybe_execute_file(), which is used for
the .bash_profile, .bash_login, .profile optional init chain).

According to the manual (and common sense), fc -e editor should return
the recalled command's success. For that, the FEVAL_BUILTIN flag needs
to be set.
---
 builtins/evalfile.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/builtins/evalfile.c b/builtins/evalfile.c
index 058d99d..e5c118b 100644
--- a/builtins/evalfile.c
+++ b/builtins/evalfile.c
@@ -331,7 +331,7 @@ fc_execute_file (filename)
 
   /* We want these commands to show up in the history list if
      remember_on_history is set. */
-  flags = FEVAL_ENOENTOK|FEVAL_HISTORY|FEVAL_REGFILE;
+  flags = FEVAL_ENOENTOK|FEVAL_BUILTIN|FEVAL_HISTORY|FEVAL_REGFILE;
   return (_evalfile (filename, flags));
 }
 #endif /* HISTORY */
--
1.9.1

--
-- Ingo Karkat --
0001-Exit-status-of-fc-e-is-the-wrong-way-around.patch
Description: Text Data
signature.asc
Description: OpenPGP digital signature
Haskaya Super Patience Pips in GBP.
This system can be used on the 60-minute time frame.
This system is for long-term trading and is profitable.
Signals are shown as arrows. Optionally, you can set a voice alarm or an email notification.
Below you can find the indicator inputs.
BandPeriod=273; // Bollinger Band Period - 1
BandDeviations=3; // Bollinger Band Deviations - 1
BandPeriod1=800; // Bollinger Band Period - 2
BandDeviations1=2; // Bollinger Band Deviations - 2
InpSARStep1=0.003; // 2. Parabolic SAR Step
InpSARMaximum1=0.2; // 2. Parabolic SAR Maximum
InpSARStep3=0.0009; // 3. Parabolic SAR Step
InpSARMaximum3=0.2; // 3. Parabolic SAR Maximum
MA1 = 79;
note2 = "0=sma, 1=ema, 2=smma, 3=lwma";
int MA1Shift=371;
MA1Mode = 0; // 0=sma, 1=ema, 2=smma, 3=lwma
note4 = "Second Moving Average";
MA2 = 289;
note5 = "0=sma, 1=ema, 2=smma, 3=lwma";
int MA2Shift=-371;
LIMITED TIME OFFER: 30% OFF!
This indicator produces accurate signals based on the distances between the crossing point of the 571-period moving average with the 632-period Bollinger band, and the 17-period and 171-period moving averages. You can also use this signal from an expert advisor with the code below:

double v1 = 0;
v1 = GlobalVariableGet(Symbol() + string(Period()) + "HSKPASHAVO1");
if (v1 == 0) return(0); // No signals
if (v1 == 1)
{
   // Send BUY order...
   GlobalVariableSet(Symb
#include <FaultIndicator.h>
A FaultIndicator is typically only an indicator (which may or may not be remotely monitored), and not a piece of equipment that actually initiates a protection event. It is used for FLISR (Fault Location, Isolation and Restoration) purposes, assisting with the dispatch of crews to the "most likely" part of the network (i.e. it assists with determining the circuit section where the fault most likely happened).
Hello, I am wondering how to combine "Vector3[] thingA" and "Vector3[] thingB" into one Vector3 array. Would anyone know how to do this?
List<Vector3> l = new List<Vector3>(arrayA);
l.AddRange(arrayB);
Vector3[] arrayC = l.ToArray();
Answer by JoshuaMcKenzie
·
Sep 11, 2016 at 08:42 AM
I'm assuming that you mean merging one array of Vector3 with another array of Vector3.
The System.Linq namespace not only has a bunch of methods for working with various types of collections (arrays, lists, dictionaries, etc.), including converting between said types and, of course, the concatenation of multiple collections...
It also uses a lot of syntactic sugar and chained commands, so you don't have to write as much code to do so.
using System.Linq;

Vector3[] thingA;
Vector3[] thingB;
Vector3[] thingC = thingA.Concat(thingB).ToArray();
In order to get OpenGL support these libraries often use libEGL, so I implemented that first.
This also allows us to use platform-independent code for creating an OpenGL context.
Here is an example program that uses libEGL. It works on both KOS and Linux.
Code: Select all
#include <stdio.h>
#include <string.h>

#include <EGL/egl.h>
#include <GL/gl.h>

#define CLEANUP(n) { ret = (n); goto cleanup; }

static EGLint const gl_config_attribs[] = {
    EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
    EGL_RED_SIZE, 1,
    EGL_GREEN_SIZE, 1,
    EGL_BLUE_SIZE, 1,
    EGL_ALPHA_SIZE, 0,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
    EGL_NONE
};

static int init_egl(EGLDisplay * dest_egl_display, EGLSurface * dest_egl_surface,
                    EGLNativeDisplayType display, EGLNativeWindowType surface)
{
    int ret = 0;
    EGLConfig egl_config;
    EGLContext egl_context;
    EGLDisplay egl_display;
    EGLSurface egl_surface;
    EGLint n;

    egl_display = eglGetDisplay(display);
    if(egl_display == EGL_NO_DISPLAY) {
        CLEANUP(1);
    }

    if(!eglInitialize(egl_display, 0, 0)) {
        CLEANUP(2);
    }

    /* Advertised on Dreamcast, may also be used on Linux without X11/Wayland
    if(!strstr(eglQueryString(egl_display, EGL_EXTENSIONS), "EGL_KHR_surfaceless_opengl")) {
        CLEANUP(3);
    }
    */

    printf("EGL Version \"%s\"\n", eglQueryString(egl_display, EGL_VERSION));
    printf("EGL Vendor \"%s\"\n", eglQueryString(egl_display, EGL_VENDOR));
    printf("EGL Extensions \"%s\"\n", eglQueryString(egl_display, EGL_EXTENSIONS));

    if(!eglBindAPI(EGL_OPENGL_API)) {
        CLEANUP(4);
    }

    if(!eglChooseConfig(egl_display, gl_config_attribs, &egl_config, 1, &n) || n != 1) {
        CLEANUP(5);
    }

    egl_context = eglCreateContext(egl_display, egl_config, EGL_NO_CONTEXT, 0);
    if(!egl_context) {
        CLEANUP(6);
    }

    egl_surface = eglCreateWindowSurface(egl_display, egl_config, surface, 0);
    if(egl_surface == EGL_NO_SURFACE) {
        CLEANUP(7);
    }

    eglMakeCurrent(egl_display, egl_surface, egl_surface, egl_context);
    eglSwapBuffers(egl_display, egl_surface);

    *dest_egl_display = egl_display;
    *dest_egl_surface = egl_surface;

cleanup:
    return ret;
}

int main(int argc, char ** argv)
{
    EGLDisplay egl_display;
    EGLSurface egl_surface;

    int ret = init_egl(&egl_display, &egl_surface, EGL_DEFAULT_DISPLAY, 0);
    if(ret) {
        printf("Cannot init EGL, error %u, egl error %0x\n", ret, (unsigned)eglGetError());
        return 1;
    }

    for(;;) {
        glClearColor(1.f, 1.f, 0.f, 1.f);
        glClear(GL_COLOR_BUFFER_BIT);
        eglSwapBuffers(egl_display, egl_surface);
    }

    return 0;
}
But OpenGL needs to swap buffers somehow, we don't have a graphics driver that supports a framebuffer etc., and I don't want to complicate things, so I suggest we add a non-standard function glSwapBuffersKOS instead (just a rename of glutSwapBuffers in libGL). Then libEGL can call libGL's glSwapBuffersKOS.
The current example code with glutSwapBuffers is non-standard glut: it doesn't even have a main loop, doesn't use glut for input processing, etc., so I think for porting glut code to Dreamcast it's kind of useless and should be rewritten to use glSwapBuffersKOS instead, or ultimately use proper glut code when the glut library is done (I've made good progress on this; it's easy with libEGL in place).
The only worry I have is that users use glutSwapBuffers right now and they would be surprised about their program not compiling anymore (not link, the glut.h header would be removed from libGL). All they'd need to do is rename the function, though.
Alternatively we could add the function to gl.h with __attribute__ ((deprecated)) and make it call glSwapBuffersKOS properly. Then I think libglut could #undef glutSwapBuffers and provide one of its own.
There's also glutCopyBufferToTexture which is non-standard, but it doesn't name clash so I'm not too worried about it. That said, it has been replaced by standards-conformant frame buffer objects in OpenGL, so I think it's not needed anymore.
Thoughts? | https://dcemulation.org/phpBB/viewtopic.php?p=1053055 | CC-MAIN-2020-16 | en | refinedweb |
Using Robot Automation for Touch Hardware Certification Tests
During Windows 7, multiple partners built robots to automate the touch hardware certification tests. The involved lines and points were laid out using predefined locations so the process was fairly straightforward. In Windows 8, many of the locations are randomized so an API is required to help the robots to find these points and lines. This topic covers the API that was designed to provide that information to the robot and let it command the test in a meaningful way.
You can find an existing solution that implements these APIs at OptoFidelity Robotic Automated Testing System.
Glossary
How to use these APIs
Callbacks
Callbacks are asynchronous, will not always occur on the same thread, and will originate from a multithreaded apartment.
Control Flow Overview
Each touch hardware certification run consists of a series of tests that is configurable by the Hardware Certification Kit (HCK) controller. Each test is designed to test a specific aspect of a touch digitizer, such as a tap, double tap, or drag test. Every test is composed of an interaction the robot is expected to perform and repeat a set number of times. The goal of the robot is to complete all of the tests by iterating through each one sequentially and performing each interaction.
Note
While the basic idea of the interaction remains the same throughout the test, details such as the start and end points, are changed randomly.
Connecting to the Touch Certification Tool
The touch certification tool must be started from the HCK controller, or manually from the command line. The robot should not start the touch certification tool remotely. An attempt to create the ILogoAutomation interface while the touch certification tool is not running will fail.
Note
It is not possible to automatically start the robot when the touch certification tool starts. The robot must be started manually for every test run, or the robot controller must retry creating the ILogoAutomation interface until it succeeds.
Selecting a test
Each test that the robot performs is identified by a unique name that remains the same between touch certification test runs. The initial screen in the touch certification tool is the test selection page and is treated as a test with a constant name of Table of Contents. To get the list of tests, use QueryAvailableTests. To start a test, use StartTest.
Running a test
Each test includes a set of interactions. The touch certification tool sends a notification when an interaction starts and finishes. The robot should call ILogoAutomation::QueryInteraction between the notifications and then perform an interaction. When all of the required interactions are complete, the touch certification tool will stop the test and send a test completed notification by using ILogoEventHandler::TestCompleted. At this point, the robot can move to the next test.
Error handling
Some of the error cases are as follows:
The robot performs an interaction but does not receive an ILogoEventHandler::InteractionCompleted callback. The robot controller should have a timeout to detect this. After the timeout, the robot can determine the reason by calling ILogoAutomation::QueryCurrentState.
If the digitizer could not recognize a contact, the touch certification tool will wait indefinitely for input on most tests. If the current state has 0 contacts down and ILogoAutomation::QueryCurrentInteraction returns the same interaction ID as before, the robot should fail the test because the digitizer could not recognize a contact.
If the digitizer reports that contact has not departed, no interactions will pass until the contact has departed.
The robot gets more ILogoEventHandler::InteractionCompleted callbacks than the number of interactions it has performed.
- This is likely due to device ghosting. The robot should resynchronize its state with the touch certification tool by using ILogoAutomation::QueryCurrentState and ILogoAutomation::QueryCurrentInteraction.
Automated Logo Flow Diagram
API Definitions
There are three interfaces that you can use:
Interface: ILogoAutomation
Interface: IInteractionObject
Interface: ILogoEventHandler
Interface: ILogoAutomation
The ILogoAutomation interface allows you to do the following:
Query information about the state of the touch certification tool by using QueryCurrentVersion, QueryCurrentState, QueryAvailableTests.
Control the flow of the touch certification tool similar to the buttons a human would use by using StartTest, ExitTest, Exit, HideStatusUI, ShowStatusUI.
Register touch certification tool events, allowing the robot and the touch certification tool states to be more easily synchronized by using RegisterEventSink and UnregisterEventSink.
Overview
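The interface listing for this overview was lost in conversion. The sketch below is reconstructed from the method names above and the signatures used by the code sample later in this topic; treat the exact IDL annotations as an approximation (and note that the error-handling notes refer to a QueryCurrentInteraction method, which may correspond to QueryInteraction here):

```idl
interface ILogoAutomation : IDispatch {
    HRESULT QueryCurrentVersion([out] ULONG* version);     // exact signature not confirmed
    HRESULT QueryCurrentState([out] BSTR* testName, [out] ULONG* nContacts);
    HRESULT QueryAvailableTests([out] VARIANT* testNames); // SAFEARRAY of BSTR
    HRESULT QueryInteraction([out] IInteractionObject** interaction);
    HRESULT StartTest([in] BSTR testName);
    HRESULT ExitTest();
    HRESULT Exit();
    HRESULT HideStatusUI();
    HRESULT ShowStatusUI();
    HRESULT RegisterEventSink([in] ILogoEventHandler* handler, [out] ULONG* cookie);
    HRESULT UnregisterEventSink([in] ULONG cookie);
};
```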
Methods
When to implement
You should not implement the interface. It is implemented by the touch certification tool itself.
Error codes
The following table summarizes the custom error codes defined for the ILogoAutomation interface:
Interface: IInteractionObject
This object defines the interaction that a test expects in order for it to pass. A robot looks at the interaction name and properties to determine the required movements.
IInteractionObject provides access to a property bag that holds the information required to complete an interaction. Not all parameters are used for all gesture types. The parameter names are case sensitive.
Overview
interface IInteractionObject : IDispatch { HRESULT SetParameter( [in] BSTR name, [in] VARIANT value); HRESULT GetParameter( [in] BSTR name, [out] VARIANT* value); }
Methods
Parameters
The following are a list of parameters for the IInteractionObject interface:
Note
The type given is a subtype within Variant. You can get the values for startTimes by using VariantToDoubleArrayAlloc.
Interaction Types
More than one interaction type can cover an interaction that the robot could perform. For example, a tap on the screen could be performed with a line interaction with the same start and stop points. In these cases, a more specific type is given to hint at the intent of the test. The interaction type of IInteractionObject can be retrieved by calling the GetParameter method with the name parameter as interactionType. The returned out-parameter is of the BSTR type indicating what interaction type it is.
The following table lists the interaction types:
Note
Even parameters marked as unused may be defined. The API consumer should be robust against extra information stored in the InteractionObject.
Rotate Interaction Type and Parameters
The following figure (Figure 1) shows how the parameters of a rotation interaction are used to perform a rotate interaction.
Acceleration Profiles
The following table lists the four acceleration profiles:
The following figure (Figure 2) shows the robot contact movement for the acceleration profiles. Start contact point and end contact point refer to the circles in the diagram. The diagram shows the robot contact movement viewed from the side of the device screen surface.
When to implement
You should not implement the interface. You will be passed a pointer to this interface in response to a call to ILogoAutomation::QueryInteraction.
Error codes
The following table summarizes the interface custom error codes defined for the IInteractionObject interface:
Interface: ILogoEventHandler
This interface specifies how the touch certification tool will notify the robot controller about what is happening in the test run. The robot controller should implement this interface to get callbacks from the touch certification tool for status updates.
Implementation of this interface should not place time consuming work or any ILogoAutomation method calls inside of these methods since it will block the touch certification tool. Unless these methods return, the touch certification tool will not respond to inputs or any ILogoAutomation method calls.
Caution
Failing to follow this rule may cause the touch certification tool to stop responding.
Overview
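The interface listing for this overview was lost in conversion. The sketch below is reconstructed from the sample event-handler implementation in the Code Sample section of this topic; treat the exact IDL annotations as an approximation:

```idl
interface ILogoEventHandler : IDispatch {
    HRESULT TestStarted([in] BSTR testName);
    HRESULT TestCompleted([in] BSTR testName, [in] BOOL testSucceeded);
    HRESULT InteractionStarted([in] BSTR testName, [in] ULONG interactionId);
    HRESULT InteractionCompleted([in] BSTR testName, [in] ULONG interactionId,
                                 [in] BOOL interactionSucceeded,
                                 [in] BSTR failureReason);
    HRESULT LogoExit();
};
```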
Methods
Generate Files to Integrate Robot to HCK
To integrate the robot to the HCK, the files logo3.tlb, logo3.h and logo3_i.c are used. These define the interfaces described above.
Define logo3.idl library
Define the logo3.idl file. For more information about Interface Definition Language (IDL) files, see the MIDL documentation.
Refer to the following sample:
import "oaidl.idl"; import "ocidl.idl"; import "Propidl.idl"; [ object, dual, uuid(25d2b24d-679d-4131-89a7-eeaf10f4cb85), pointer_default(unique) ]); }; [ object, uuid(2F900384-8ED7-417D-87EA-F2C9F419F356), dual, nonextensible, pointer_default(unique) ] interface IInteractionObject : IDispatch{ HRESULT SetParameter( [in] BSTR name, [in] VARIANT value); HRESULT GetParameter( [in] BSTR name, [out] VARIANT* value); }; [ object, uuid(453D0AC0-AF7C-465B-AA61-2641C5F24366), dual, pointer_default(unique) ]); }; [ uuid(58147694-B012-4922-8A48-48E3ABCC1B57), version(1.0), ] library logo3Lib { importlib("stdole2.tlb"); interface ILogoEventHandler; [ uuid(D371B1A6-30AF-4F43-8202-E13A3C93DA8C) ] coclass LogoAutomation { [default] interface ILogoAutomation; }; [ uuid(8F34541E-0742-470D-9AA9-100E7984EA22) ] coclass InteractionObject { [default] interface IInteractionObject; }; };
Generate logo3.tlb, logo3.h and logo3_i.c
Once the interfaces and the library have been defined, use the MIDL (Microsoft Interface Definition Language) compiler to create logo3.tlb, logo3.h and logo3_i.c. Use the following process:
Install Visual Studio and the SDK.
Define the logo3.idl file as shown above.
Use the Visual Studio command prompt to run midl.exe. If you use a different command prompt, ensure that the path to cl.exe is accessible.
Run midl.exe. For example,
midl.exe /out c:\Folder c:\Folder\logo3.idl.
Create a Visual Studio project
Add the logo3.idl file to the resources of the project.
Copy logo3_i.c and logo3.h to the project’s folder.
In Visual Studio, open the project Properties and add Propsys.lib under Linker > Input > Additional Dependencies.
Build the project.
When to implement
Developers should implement this interface and register it with the touch certification tool by calling ILogoAutomation::RegisterEventSink. This allows the robot controller to respond to touch certification tool events, instead of polling the state of the tool.
After registration, the interface will get callbacks for any events that happen.
Note
Make sure to call this method to unregister the event sink before the robot controller is finished. Failing to do so may cause the status UI to freeze for several minutes before DCOM times out. The touch certification tool may try to call the event sink to deliver status updates and will be blocked by an event sink that is not responding. Eventually, the RPC call will timeout and the touch certification tool will recover and automatically unregister the event sink that is not responding.
If the robot controller crashes, or the network is not available, the call to UnregisterEventSink cannot be guaranteed so the status UI may stop responding for a while. In such cases, if you want to preserve the previous touch certification tool test results, you should wait at least 6 minutes to allow the touch certification tool to recover from an RPC call timeout. If you do not need to preserve the previous test results, you can stop the touch certification tool process.
Code Sample
This is a partial example of how a robot controller could use the robot APIs.
Note
This example does not cover error handling. For more information about error handling, see Error handling.
#include <stdio.h> #include <tchar.h> #include <Windows.h> #include <Propvarutil.h> #include <string> #include <vector> #include <assert.h> // Make sure you have set the correct header file path to the SDK. #include "logo3.h" #include "logo3_i.c" #define CHECK_HR(EXPR) \ do \ { \ hr = (EXPR); \ if (FAILED(hr)) \ { \ printf("File = %s, Line# = %u, %s failed with 0x%x\n", __FILE__, __LINE__, #EXPR, hr); \ goto Exit; \ } \ } while (FALSE, FALSE); \ class AutoBSTR { public: AutoBSTR(PCWSTR str) { m_str = SysAllocString(str); } ~AutoBSTR() { SysFreeString(m_str); } operator BSTR() const throw() { return m_str; } private: BSTR m_str; }; class LogoEventHandler : public ILogoEventHandler { public: LogoEventHandler(ILogoAutomation* plogo) : m_refCount(1), m_eventTestStarted(NULL), m_eventInteractionCompleted(NULL), m_currentInteractionId(0) { } ~LogoEventHandler() { if (m_eventTestStarted != NULL) { CloseHandle(m_eventTestStarted); } if (m_eventInteractionCompleted != NULL) { CloseHandle(m_eventInteractionCompleted); } } HRESULT Initialize() { m_eventTestStarted = CreateEvent(NULL, FALSE, FALSE, NULL); if (m_eventTestStarted == NULL) { return HRESULT_FROM_WIN32(GetLastError()); } m_eventInteractionCompleted = CreateEvent(NULL, FALSE, FALSE, NULL); if (m_eventInteractionCompleted == NULL) { return HRESULT_FROM_WIN32(GetLastError()); } return S_OK; } public: // ILogoEventHandler Methods STDMETHOD(TestStarted)(BSTR testName) { wprintf(L"[Logo] Test started. Name = %s\n", testName); SetEvent(m_eventTestStarted); return S_OK; } STDMETHOD(TestCompleted)(BSTR testName, BOOL testSucceeded) { wprintf(L"[Logo] Test Completed. Name = %s, Result = %s\n", testName, testSucceeded ? L"Passed" : L"Failed"); return S_OK; } STDMETHOD(InteractionStarted)(BSTR testName, ULONG interactionId) { wprintf(L"[Logo] Interaction started. 
id = %u\n", interactionId); return S_OK; } STDMETHOD(InteractionCompleted)(BSTR testName, ULONG interactionId, BOOL interactionSucceeded, BSTR failureReason) { wprintf(L"[Logo] Interaction completed. id = %u, %s\n", interactionId, interactionSucceeded ? L"succeeded" : L"failed"); if (!interactionSucceeded) { wprintf(L"[Logo] Failure reason: %s\n", failureReason); } SysFreeString(failureReason); m_currentInteractionId = interactionId; SetEvent(m_eventInteractionCompleted); return S_OK; } STDMETHOD(LogoExit)() { wprintf(L"[Logo] Logo exited.\n"); return S_OK; } // end of ILogoEventHandler Methods // IUnknown methods STDMETHODIMP_(ULONG) AddRef() { InterlockedIncrement(&m_refCount); return m_refCount; } STDMETHODIMP_(ULONG) Release() { InterlockedDecrement(&m_refCount); if (m_refCount == 0) { delete this; } return m_refCount; } STDMETHODIMP QueryInterface( __in REFIID riid, __out void **ppvObject) { if (ppvObject == NULL) { return E_POINTER; } if (riid == IID_ILogoEventHandler || riid == IID_IUnknown) { AddRef(); *ppvObject = this; return S_OK; } *ppvObject = NULL; return E_NOINTERFACE; } // End of IUnknown methods // IDispatch methods STDMETHODIMP GetTypeInfoCount( __RPC__out UINT *pctinfo) { return E_NOTIMPL; } STDMETHODIMP GetTypeInfo( __in UINT iTInfo, __in LCID lcid, __RPC__deref_out_opt ITypeInfo **ppTInfo) { return E_NOTIMPL; } STDMETHODIMP GetIDsOfNames( __RPC__in REFIID riid, __RPC__in_ecount_full(cNames) LPOLESTR *rgszNames, __RPC__in_range(0,16384) UINT cNames, LCID lcid, __RPC__out_ecount_full(cNames) DISPID *rgDispId) { return E_NOTIMPL; } STDMETHODIMP Invoke( /* [in] */ DISPID dispIdMember, /* [in] */ REFIID riid, /* [in] */ LCID lcid, /* [in] */ WORD wFlags, /* [out][in] */ DISPPARAMS *pDispParams, /* [out] */ VARIANT *pVarResult, /* [out] */ EXCEPINFO *pExcepInfo, /* [out] */ UINT *puArgErr) { return E_NOTIMPL; } // End of IDispatch methods public: HRESULT WaitForTestStarted() { DWORD index = 0; HRESULT hr = CoWaitForMultipleHandles( 0, //flags 10 * 
1000, //timeout 1, //nHandles &m_eventTestStarted, //handles &index); return hr; } HRESULT WaitForInteractionCompleted( __out ULONG* interactionId) { DWORD index = 0; HRESULT hr = CoWaitForMultipleHandles( 0, //flags 10 * 1000, //timeout 1, //nHandles &m_eventInteractionCompleted, //handles &index); *interactionId = m_currentInteractionId; return hr; } private: volatile ULONG m_refCount; HANDLE m_eventTestStarted; HANDLE m_eventInteractionCompleted; ULONG m_currentInteractionId; }; class RobotController { public: RobotController() : m_logo(NULL), m_logoEventHandler(NULL), m_registrationCookie(0), m_testNames(NULL), m_nTests(0) { } ~RobotController() { if (m_logo != NULL) { // Make sure we unregister the event sink before exit. m_logo->UnregisterEventSink(m_registrationCookie); m_logo->Release(); } if (m_logoEventHandler != NULL) { m_logoEventHandler->Release(); } VariantClear(&m_testNamesVar); } HRESULT ConnectToLogo() { HRESULT hr = S_OK; printf("[Robot Controller] Connecting to logo\n"); // Logo should be running before robot controller trying to connect to it. Otherwise, this call will fail. CHECK_HR(CoCreateInstance( CLSID_LogoAutomation, NULL, CLSCTX_LOCAL_SERVER, // | CLSCTX_REMOTE_SERVER for DCOM IID_ILogoAutomation, (LPVOID*)&m_logo)); printf("[Robot Controller] Connected to logo\n"); m_logoEventHandler = new LogoEventHandler(m_logo); if (m_logoEventHandler == NULL) { printf("Cannot allocate memory for LogoEventHandler.\n"); hr = E_OUTOFMEMORY; goto Exit; } CHECK_HR(m_logoEventHandler->Initialize()); // Register event sink to logo so we can get status updates from logo. CHECK_HR(m_logo->RegisterEventSink(m_logoEventHandler, &m_registrationCookie)); CHECK_HR(QueryTestNames()); Exit: return hr; } HRESULT RunTest(UINT index) { HRESULT hr = S_OK; if (index >= m_nTests) { return E_INVALIDARG; } // First we need to make sure logo is on the "Table of Contents" page. Calling ExitTest() will // cause logo to display the "Table of Contents" page. 
CHECK_HR(m_logo->ExitTest()); // Start the target test - logo will go to the test page. CHECK_HR(m_logo->StartTest(m_testNames[index])); CHECK_HR(m_logoEventHandler->WaitForTestStarted()); CHECK_HR(RunInteractions()); Exit: return hr; } HRESULT RunTests() { HRESULT hr = S_OK; // Test 0 is the "Table of Contents" page. Real tests start from index 1. for (UINT i = 1; i < m_nTests; i++) { CHECK_HR(RunTest(i)); } // Shows "Table of Contents" page for status of all tests. CHECK_HR(m_logo->ExitTest()); Exit: return hr; } private: HRESULT QueryTestNames() { HRESULT hr = S_OK; CHECK_HR(m_logo->QueryAvailableTests(&m_testNamesVar)); m_nTests = m_testNamesVar.parray->rgsabound->cElements; m_testNames = (BSTR*)(m_testNamesVar.parray->pvData); printf("Test names---- (%u)\n", m_nTests); for (UINT i = 0; i < m_nTests; i++) { wprintf(L"test %u: %s\n", i, m_testNames[i]); } Exit: return hr; } HRESULT QueryAndPerformInteraction() { HRESULT hr = S_OK; IInteractionObject* interaction = NULL; { ULONG nContacts = 0; BSTR currentTestName = NULL; // Check current state to make sure no contacts down before a new interaction. CHECK_HR(m_logo->QueryCurrentState(¤tTestName, &nContacts)); if (nContacts != 0) { printf("[Robot Controller] There are contacts down on the screen surface. Cannot perform a new interaction."); // Real robot controller can instruct robot to remove any contacts from the screen surface instead // of failing here. 
hr = E_UNEXPECTED; goto Exit; } hr = m_logo->QueryInteraction(&interaction); if (hr == 0x80040001) { printf("[Robot Controller] No more interactions - test has completed.\n"); goto Exit; } else if (hr == 0x80040003) { printf("[Robot Controller] This test does not have any interaction object and requires human input.\n"); goto Exit; } CHECK_HR(hr); VARIANT var = {}; ULONG id; CHECK_HR(interaction->GetParameter(AutoBSTR(L"id"), &var)); CHECK_HR(VariantToUInt32(var, &id)); VariantClear(&var); CHECK_HR(interaction->GetParameter(AutoBSTR(L"interactionType"), &var)); std::wstring interactionType(var.bstrVal); VariantClear(&var); wprintf(L"[Robot Controller] Interaction id = %u, type = %s\n", id, interactionType.c_str()); bool interactionPerformed = false; // Get interaction parameters based on the type of interaction and used the parameters to // instruct the robot to perform the interaction. if (interactionType == L"Line") { CHECK_HR(GetParamAndPerformLineInteraction(interaction, &interactionPerformed)); } else if (interactionType == L"Non-Interaction") { CHECK_HR(GetParamAndPerformNonInteraction(interaction, &interactionPerformed)); } // else ... // The sample code only demonstrates how to get interaction parameters for Line and Non-Interaction // interactions. For other interaction types, please refer to the documentation for what parameters // to retrieve from the IInteractionObject. They are similar to how it is done in this sample. if (interactionPerformed) { // If interaction was performed, we should wait for the interaction to complete. A callback // from logo to the m_logoEventHandler tells that. ULONG completedInteractionId = 0; CHECK_HR(m_logoEventHandler->WaitForInteractionCompleted(&completedInteractionId)); if (id != completedInteractionId) { // Interaction id mismatch - it might be device issue or robot performed multiple interactions. hr = E_UNEXPECTED; goto Exit; } } else { // We couldn't perform the interaction. 
Set hr to 0x80040001 (Test has already been completed) // to allow the sample code to move to the next test. hr = 0x80040001; goto Exit; } } Exit: if (interaction != NULL) { interaction->Release(); } return hr; } HRESULT GetParamAndPerformNonInteraction( __in IInteractionObject* interaction, __out bool* interactionPerformed) { HRESULT hr = S_OK; DOUBLE* endTimes = NULL; ULONG nEndTimes = 0; CHECK_HR(GetDoubles(interaction, L"endTimes", &endTimes, &nEndTimes)); // It is expected for Non-interaction, there is only 1 element in endTimes array. assert(nEndTimes == 1); DWORD sleepTime = (DWORD)(endTimes[0]); printf("[Robot Controller] Perform non-interaction - no contact of the screen for %ums\n", sleepTime); Sleep(sleepTime); *interactionPerformed = true; Exit: if (endTimes != NULL) { CoTaskMemFree(endTimes); } return hr; } HRESULT GetParamAndPerformLineInteraction( __in IInteractionObject* interaction, __out bool* interactionPerformed) { HRESULT hr = S_OK; DOUBLE* xFrom = NULL; DOUBLE* yFrom = NULL; DOUBLE* xTo = NULL; DOUBLE* yTo = NULL; DOUBLE* startTimes = NULL; DOUBLE* endTimes = NULL; ULONG count = 0; ULONG n = 0; // Get all the interaction parameters used by a Line interaction. CHECK_HR(GetDoubles(interaction, L"xFrom", &xFrom, &n)); count = n; CHECK_HR(GetDoubles(interaction, L"yFrom", &yFrom, &n)); assert(count == n); CHECK_HR(GetDoubles(interaction, L"xTo", &xTo, &n)); assert(count == n); CHECK_HR(GetDoubles(interaction, L"yTo", &yTo, &n)); assert(count == n); CHECK_HR(GetDoubles(interaction, L"startTimes", &startTimes, &n)); assert(count == n); CHECK_HR(GetDoubles(interaction, L"endTimes", &endTimes, &n)); assert(count == n); printf("Interaction parameters:\n"); PrintVector(L"xFrom", xFrom, count); PrintVector(L"yFrom", yFrom, count); PrintVector(L"xTo", xTo, count); PrintVector(L"yTo", yTo, count); PrintVector(L"startTimes", startTimes, count); PrintVector(L"endTimes", endTimes, count); // Instruct robot to perform interaction based on the parameters. 
// This sample does not show how to control the robot as it is very specific to // the robot designer. This sample does not perform this interaction. *interactionPerformed = false; Exit: if (xFrom != NULL) { CoTaskMemFree(xFrom); } if (yFrom != NULL) { CoTaskMemFree(yFrom); } if (xTo != NULL) { CoTaskMemFree(xTo); } if (yTo != NULL) { CoTaskMemFree(yTo); } if (startTimes != NULL) { CoTaskMemFree(startTimes); } if (endTimes != NULL) { CoTaskMemFree(endTimes); } return hr; } HRESULT RunInteractions() { HRESULT hr = S_OK; // Hide the status UI before doing interaction. CHECK_HR(m_logo->HideStatusUI()); bool hasMoreIteration = true; while (hasMoreIteration) { // Usually a test needs multiple iterations of interactions. hr = QueryAndPerformInteraction(); if (hr == 0x80040001) { // The test has been completed. hasMoreIteration = false; hr = S_OK; } CHECK_HR(hr); } Exit: // Re-display the status UI for human to review the test status. m_logo->ShowStatusUI(); return hr; } HRESULT GetDoubles( __in IInteractionObject *interaction, __in PCWSTR paramName, __out_ecount(*length) DOUBLE** doubleArray, __out ULONG* length) { VARIANT var = {}; HRESULT hr = interaction->GetParameter(AutoBSTR(paramName), &var); if (SUCCEEDED(hr)) { hr = VariantToDoubleArrayAlloc(var, doubleArray, length); } VariantClear(&var); return hr; } void PrintVector( __in PCWSTR name, __in_ecount(count) const DOUBLE* vector, __in ULONG count) { wprintf(L"%s\n", name); for (ULONG i = 0; i < count; i++) { printf("%.2f ", vector[i]); } printf("\n"); } private: ILogoAutomation* m_logo; LogoEventHandler* m_logoEventHandler; ULONG m_registrationCookie; BSTR* m_testNames; ULONG m_nTests; VARIANT m_testNamesVar; }; int _tmain(int argc, _TCHAR* argv[]) { HRESULT hr = S_OK; CHECK_HR(CoInitializeEx(NULL, COINIT_APARTMENTTHREADED)); // Extra scope to make sure COM objects are released before calling CoUninitialize. 
{ RobotController robotController; CHECK_HR(robotController.ConnectToLogo()); CHECK_HR(robotController.RunTests()); } Exit: CoUninitialize(); printf("\n[Robot Controller] Shutting down\n"); return hr; }
Robot API Configuration
Use the procedures in this section to configure the computers that are used with the robot API.
Prerequisites
To use the robot API, the following prerequisites must be met:
A server running the touch certification tool and a client computer to be used as the robot controller
A user account that will be used on both the server and the robot controller
The user account used on the server and the robot controller must be a member of the Distributed COM Users security group on both computers. You can add the user account to the group by using the Computer Management console or by using the command line.
Configure the Robot API on domain-joined Windows 8 computers
Use the following procedures to configure the Robot API to work on Windows 8 computers that are joined to the same domain.
Open TCP port 135 by using Windows Firewall
On the Start screen, type Windows Firewall, click Settings, and then click Windows Firewall in the search results.
In the Windows Firewall console, click Advanced Settings.
Right-click Inbound Rules, and then click New Rule.
On the Rule Type page of the New Inbound Rule Wizard, click Port, and then click Next.
On the Protocol and Ports page, click TCP, click Specific local ports, type 135, and then click Next.
On the Action page, click Allow the connection, and then click Next.
On the Profile page, select the Domain, Private, and Public check boxes, and then click Next.
On the Name page, in the Name box, specify a name for this rule, such as Robot API, and then click Finish.
Repeat these steps to create an outbound rule.
These steps should be completed on both the server and the robot controller.
You can also open the ports by using the command line by running the following commands:
netsh advfirewall firewall add rule name="Robot API" dir=in action=allow protocol=TCP localport=135 profile=any
netsh advfirewall firewall add rule name="Robot API" dir=out action=allow protocol=TCP localport=135 profile=any
Register the touch certification tool
Log on to the server.
From an elevated command prompt, type logo3.exe /RegServer and then press Enter.
Note
The RegServer parameter is case sensitive.
Repeat these steps on the robot controller.
Make the touch certification tool interactive by using dcomcnfg.exe
Log on to the server.
On the Start screen, type dcomcnfg.exe and then click dcomcnfg.exe from the search results.
Expand Component Services, expand Computers, expand My Computer, and then click DCOM Config.
Right-click Logo3, and then click Properties.
Click the Identity tab, and then click The interactive user.
Click OK.
Co-create the ILogoAutomation interface
Log on to the robot controller.
On the Start screen, type dcomcnfg.exe and then click dcomcnfg.exe from the search results.
Expand Component Services, expand Computers, expand My Computer, and then click DCOM Config.
Right-click Logo3, and then click Properties.
Click the Location tab, select the Run application on the following computer check box, and then type the name of the server.
Click OK.
Alternatively, you can call CoCreateInstanceEx() and specify the server name. If TCP port 135 is open and the touch certification tool is registered, no further action is required.
Once you have completed all of the procedures in this section, the touch certification tool should be working and the robot controller should be able to connect.
Important
The touch certification tool does not support COM activation (launch), so it must already be running on the server in order for the robot controller to connect.
Configure the Robot API on Windows 8 computers that are not joined to a domain
To configure the Robot API on Windows 8 computers that are not joined to a domain, you must modify the Access Permissions and Launch and Activation Permissions to allow the user account access from the server.
Note
The client and server devices must be either in the same domain or in the same workgroup.
To modify the Access Permissions and Launch and Activation Permissions
Log on to the server.
On the Start screen, type dcomcnfg.exe and then click dcomcnfg.exe from the search results.
Expand Component Services, and then expand Computers.
Right-click My Computer, and then click Properties.
Click the COM Security tab.
Under the Access Permissions heading, click Edit Limits.
Add the user account, select the Allow check boxes for both Local Access and Remote Access, and then click OK.
Under the Launch and Activation Permissions heading, click Edit Limits.
Add the user account, select the Allow check boxes for Local Launch, Remote Launch, Local Activation, and Remote Activation, and then click OK.
Repeat these steps on the robot controller.
ARM-based devices
ARM-based devices are not designed to join a domain, so follow the procedures in the section named “Configure the Robot API on Windows 8 computers that are not joined to a domain”.
Additionally, you should ensure that the Enable Distributed COM on this computer check box is selected in the Default Properties tab. It is selected by default.
Configure Windows 7, Windows XP, or Windows Server robot controllers
It is possible that some robot controllers can only run on an earlier version of the Windows operating system. Because the touch certification tool only runs on Windows 8, you cannot register the ILogoAutomation interface in the normal way. You must import the type library directly into the Windows registry.
To import the library directly to the registry
On the robot controller, create a temporary folder, such as C:\Logo.
Copy the logo3.tlb file into the temporary folder.
Save the contents of the Touch Certification Tool section into a registry file in the temporary folder.
Note
If you are using a folder other than C:\Logo as your temporary folder, you must update the registry file with the proper path.
From an elevated command prompt, run the registry file to import it into the registry.
WOW not supported for /RegServer
Using an x86-version of the touch certification tool on an x64-based computer to register the ILogoAutomation interface is not supported and will not work. Always use the touch certification tool that matches your processor architecture.
Touch certification tool registry file
Here’s the registry file:
Windows Registry Editor Version 5.00 [HKEY_CLASSES_ROOT\AppID\{BF7FBA21-4E96-403E-8223-6E57C0EBFA82}] @="Logo3" [HKEY_CLASSES_ROOT\AppID\logo3.exe] "AppID"="{BF7FBA21-4E96-403E-8223-6E57C0EBFA82}" [HKEY_CLASSES_ROOT\Interface\{2F900384-8ED7-417D-87EA-F2C9F419F356}] @="IInteractionObject" [HKEY_CLASSES_ROOT\Interface\{2F900384-8ED7-417D-87EA-F2C9F419F356}\ProxyStubClsid] @="{00020424-0000-0000-C000-000000000046}" [HKEY_CLASSES_ROOT\Interface\{2F900384-8ED7-417D-87EA-F2C9F419F356}\ProxyStubClsid32] @="{00020424-0000-0000-C000-000000000046}" [HKEY_CLASSES_ROOT\Interface\{2F900384-8ED7-417D-87EA-F2C9F419F356}\TypeLib] @="{58147694-B012-4922-8A48-48E3ABCC1B57}" "Version"="1.0" [HKEY_CLASSES_ROOT\Interface\{453D0AC0-AF7C-465B-AA61-2641C5F24366}] @="ILogoAutomation" [HKEY_CLASSES_ROOT\Interface\{453D0AC0-AF7C-465B-AA61-2641C5F24366}\ProxyStubClsid] @="{00020424-0000-0000-C000-000000000046}" [HKEY_CLASSES_ROOT\Interface\{453D0AC0-AF7C-465B-AA61-2641C5F24366}\ProxyStubClsid32] @="{00020424-0000-0000-C000-000000000046}" [HKEY_CLASSES_ROOT\Interface\{453D0AC0-AF7C-465B-AA61-2641C5F24366}\TypeLib] @="{58147694-B012-4922-8A48-48E3ABCC1B57}" "Version"="1.0" [HKEY_CLASSES_ROOT\CLSID\{8F34541E-0742-470D-9AA9-100E7984EA22}] @="InteractionObject Class" "AppID"="{BF7FBA21-4E96-403E-8223-6E57C0EBFA82}" [HKEY_CLASSES_ROOT\CLSID\{8F34541E-0742-470D-9AA9-100E7984EA22}\Programmable] [HKEY_CLASSES_ROOT\CLSID\{8F34541E-0742-470D-9AA9-100E7984EA22}\TypeLib] @="{58147694-B012-4922-8A48-48E3ABCC1B57}" [HKEY_CLASSES_ROOT\CLSID\{8F34541E-0742-470D-9AA9-100E7984EA22}\Version] @="1.0" [HKEY_CLASSES_ROOT\CLSID\{D371B1A6-30AF-4F43-8202-E13A3C93DA8C}] @="LogoAutomation Class" "AppID"="{BF7FBA21-4E96-403E-8223-6E57C0EBFA82}" [HKEY_CLASSES_ROOT\CLSID\{D371B1A6-30AF-4F43-8202-E13A3C93DA8C}\Programmable] [HKEY_CLASSES_ROOT\CLSID\{D371B1A6-30AF-4F43-8202-E13A3C93DA8C}\TypeLib] @="{58147694-B012-4922-8A48-48E3ABCC1B57}" [HKEY_CLASSES_ROOT\CLSID\{D371B1A6-30AF-4F43-8202-E13A3C93DA8C}\Version] @="1.0" 
[HKEY_CLASSES_ROOT\Interface\{25D2B24D-679D-4131-89A7-EEAF10F4CB85}] @="ILogoEventHandler" [HKEY_CLASSES_ROOT\Interface\{25D2B24D-679D-4131-89A7-EEAF10F4CB85}\ProxyStubClsid] @="{00020424-0000-0000-C000-000000000046}" [HKEY_CLASSES_ROOT\Interface\{25D2B24D-679D-4131-89A7-EEAF10F4CB85}\ProxyStubClsid32] @="{00020424-0000-0000-C000-000000000046}" [HKEY_CLASSES_ROOT\Interface\{25D2B24D-679D-4131-89A7-EEAF10F4CB85}\TypeLib] @="{58147694-B012-4922-8A48-48E3ABCC1B57}" "Version"="1.0" [HKEY_CLASSES_ROOT\TypeLib\{58147694-B012-4922-8A48-48E3ABCC1B57}] [HKEY_CLASSES_ROOT\TypeLib\{58147694-B012-4922-8A48-48E3ABCC1B57}\1.0] @="logo3Lib" [HKEY_CLASSES_ROOT\TypeLib\{58147694-B012-4922-8A48-48E3ABCC1B57}\1.0\0] [HKEY_CLASSES_ROOT\TypeLib\{58147694-B012-4922-8A48-48E3ABCC1B57}\1.0\0\win32] @="C:\\logo\\logo3.tlb" [HKEY_CLASSES_ROOT\TypeLib\{58147694-B012-4922-8A48-48E3ABCC1B57}\1.0\FLAGS] @="0" [HKEY_CLASSES_ROOT\TypeLib\{58147694-B012-4922-8A48-48E3ABCC1B57}\1.0\HELPDIR] @="C:\\logo"
DCOM debugging
If you are experiencing issues with DCOM, you can enable DCOM debugging to assist you in troubleshooting.
To enable DCOM debugging
On the Start screen, type regedit.exe, and then click regedit.exe from the search results.
Navigate to HKEY_LOCAL_MACHINE\Software\Microsoft\Ole
Create a new DWORD value named CallFailureLoggingLevel and set it to 1
Once DCOM debugging is enabled, more troubleshooting information will be written to the event log on the server.
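The steps above amount to setting a single registry value; for reference, the same change can be expressed as a .reg file that you import with Registry Editor (key path and value name as given in this section):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\Ole]
"CallFailureLoggingLevel"=dword:00000001
```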
Latency Testing
At this time, latency testing is not supported with the robot API. You must use specific hardware latency-testing tools to perform latency testing.
Flow Control and Data Validation
The robot or the robot controller is responsible for general flow control of the tests. This allows those building the robot to deal with all errors from the robot’s capabilities. The test itself will not be looking for any errors or return information from the robot. Similarly, the API will not validate that the robot is capable of performing actions. Any validation should be done by the robot or the robot controller.
Send comments about this topic to Microsoft
TL;DR – The Python random module contains a set of functions for generating random numbers. The module allows you to control the types of random numbers that you create.
Why use the Python random module?
The random module can perform a few different tasks: you can use it to generate random numbers, randomize lists, or choose elements from a list at random.
This module is perfect for generating passwords or producing test datasets. It can also be integrated into Python for loops or if statements to change the outcome of a function at random.
Generating Python random integers
The most basic and common use of the random module is to generate Python random integers. To accomplish this, you will need to import the random module and then use the randint() Python function:
import random
random.randint(0, 10)
This will output a random number between 0 and 10, including end-points.
Alternatively, if you want a step size other than 1, you can use the randrange() function. In this case, the syntax is:
random.randrange(start, stop[, step])
The random.randrange() function uses a step value of 1 by default. If you specify a step, the range of potential outputs is determined using the Python range() function.
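For instance, with a step of 10 the function draws only multiples of 10 from the given window; a small sketch:

```python
import random

# randrange(start, stop, step) never returns stop itself, so with
# start=0, stop=101, step=10 the possible results are 0, 10, ..., 100.
value = random.randrange(0, 101, 10)
print(value)
```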
Generating random floating values
If you want to generate a random floating value rather than an integer, use the Python random.random() function:
import random
random.random()
This will tell Python to generate a random number between 0 and 1, excluding 1.
If you want a random float number between specific start and end values, use the random.uniform() function:

import random
random.uniform(0, 100)

This will tell Python to generate a random float number between 0 and 100; depending on floating-point rounding, the end-point 100 may or may not be included.
Random functions for lists and sequences
If you have a list of numbers, values, or other elements, you can use the Python random module to randomly select one or more elements. To choose a single element at random, use the random.choice() function:
import random
myList = ["bmw", "volvo", "toyota", "chrysler"]
print(random.choice(myList))
If you want to pick more than one element from a list or sequence, use the random.sample() function:
import random
myList = ["bmw", "volvo", "toyota", "chrysler"]
print(random.sample(myList, 3))
In the case that you have a list or sequence and you want Python to randomize the order of elements in the list for you, use the random.shuffle() function:
import random
myList = ["bmw", "toyota", "volvo", "chrysler"]
random.shuffle(myList)
print(myList)
Creating arrays of random numbers
If you need to create a test dataset, you can accomplish this using the randn() Python function from the Numpy library. randn() creates arrays filled with random numbers sampled from the standard normal (Gaussian) distribution, with mean 0 and variance 1.
The dimensions of the array created by the randn() Python function depend on the number of inputs given. To create a 1-D array (that is, a list), enter only the length of the array desired. For example:
import numpy
array = numpy.random.randn(3)
print(array)
Similarly, for 2-D and 3-D arrays, enter the length of each dimension of the desired array:
import numpy
array = numpy.random.randn(3, 5, 2)
print(array)
It is possible to multiply and shift the array generated by randn() to get values with a different spread and center than the standard normal defaults. Alternatively, you can use the uniform() function to set upper and lower bounds on the random numbers generated:
import numpy
array = numpy.random.uniform(0, 100, size = (3, 5))
print(array)
Python random: useful tips
- When using random.random() to generate a random float number, you can multiply the result to generate a number outside the 0 to 1 range.
- If you want to be able to generate the same random number in the future, save the internal state of the random number generator using random.getstate(). You can reset the generator to the same state using random.setstate().
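The second tip can be demonstrated directly; capturing the state with getstate() lets you replay the exact same draw later:

```python
import random

state = random.getstate()   # snapshot of the generator's internal state
first = random.random()     # this draw advances the generator
random.setstate(state)      # rewind to the snapshot
replayed = random.random()  # produces the very same value again
print(first == replayed)    # -> True
```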
NAME
SP, acs_map, boolcodes, boolfnames, boolnames, cur_term, numcodes, numfnames, numnames, strcodes, strfnames, strnames, ttytype - curses terminfo global variables
SYNOPSIS
#include <curses.h>
#include <term.h>
chtype acs_map[];
SCREEN * SP;
Django pattern library
A module for Django that helps you to build pattern libraries and follow the Atomic design methodology.
Objective
At the moment, the main focus is to allow developers and designers
use exactly the same Django templates in a design pattern library
and in production code.
There are a lot of alternative solutions for building
pattern libraries already. Have a look at Pattern Lab and
Astrum, for example.
But at Torchbox we mainly use Python and Django, and we find it hard to maintain layout on big projects in several places: in a project's pattern library and in actual production code. This is our attempt to solve this issue and reduce the amount of copy-pasted code.
Documentation
Documentation is located here.
How to install
Add pattern_library into your INSTALLED_APPS:

INSTALLED_APPS = [
    # ...
    'pattern_library',
    # ...
]
Add pattern_library.loader_tags into the TEMPLATES setting. For example:

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                # ...
            ],
            'builtins': ['pattern_library.loader_tags'],
        },
    },
]
Note that this module only supports the Django template backend out of the box.
Set the PATTERN_LIBRARY_TEMPLATE_DIR setting to point to a template directory with your patterns:
PATTERN_LIBRARY_TEMPLATE_DIR = os.path.join(BASE_DIR, 'project_styleguide', 'templates')
Note that PATTERN_LIBRARY_TEMPLATE_DIR must be available for template loaders.
Include pattern_library.urls into your urlpatterns. Here's an example urls.py:

from django.apps import apps
from django.conf.urls import url, include

urlpatterns = [
    # ... Your URLs
]

if apps.is_installed('pattern_library'):
    urlpatterns += [
        url(r'^pattern-library/', include('pattern_library.urls')),
    ]
Helper class to use files in APIs other than openmsx::File. More...
#include <LocalFileReference.hh>
Helper class to use files in APIs other than openmsx::File.
The openMSX File class has support for (g)zipped files (or maybe in the future files over http, ftp, ...). Sometimes you need to pass a filename to an API that doesn't support this (for example SDL_LoadWav()). This class allows to create a temporary local uncompressed version of such files. Use it like this:
LocalFileReference file(filename); // can be any filename supported
                                   // by openmsx::File
my_function(file.getFilename());   // my_function() can now work on
                                   // a regular local file
Note: In the past this functionality was available in the openmsx::File class. The current implementation of that class always keeps an open file reference to the corresponding file. This gave problems on (some versions of) Windows if the external function tries to open the file in read-write mode (for example IMG_Load() does this). The implementation of this class does not keep a reference to the file.
Definition at line 30 of file LocalFileReference.hh.
Definition at line 25 of file LocalFileReference.cc.
Definition at line 19 of file LocalFileReference.cc.
References openmsx::filename.
Definition at line 14 of file LocalFileReference.cc.
Definition at line 64 of file LocalFileReference.cc.
References openmsx::FileOperations::rmdir(), and openmsx::FileOperations::unlink().
Returns path to a local uncompressed version of this file.
This path only remains valid as long as this object is in scope.
Definition at line 74 of file LocalFileReference.cc.
Referenced by openmsx::GlobalCommandController::source().
PROBLEM: An array consists of only 0,1 and 2 as its elements. You have to sort it.
Input: [0, 1, 1, 0, 1, 2, 1, 2, 0, 0, 0, 1]
Output: [0,0,0,0,0,1,1,1,1,1,2,2]
Algorithm
Pseudo Code:
- We will calculate the number of 0s, 1s and 2s in the entire array.
- Thus, we get the count of each individual token.
- Now, we'll start from the beginning and place 0s in the first count_of_zeroes indices.
- We then place 1s in the next count_of_ones indices.
- At last, we place 2s in the remaining count_of_twos indices.
Code:
#include <bits/stdc++.h>
using namespace std;

void sortArray(int arr[], int size)
{
    int iterator, count_of_zeroes = 0, count_of_ones = 0, count_of_twos = 0;

    for (iterator = 0; iterator < size; iterator++) {
        switch (arr[iterator]) {
        case 0:
            count_of_zeroes++;
            break;
        case 1:
            count_of_ones++;
            break;
        case 2:
            count_of_twos++;
            break;
        }
    }

    iterator = 0;
    while (count_of_zeroes > 0) {
        arr[iterator++] = 0;
        count_of_zeroes--;
    }
    while (count_of_ones > 0) {
        arr[iterator++] = 1;
        count_of_ones--;
    }
    while (count_of_twos > 0) {
        arr[iterator++] = 2;
        count_of_twos--;
    }

    for (int iterator = 0; iterator < size; iterator++)
        cout << arr[iterator] << " ";
}

int main()
{
    int arr[] = { 0, 1, 1, 0, 1, 2, 1, 2, 0, 0, 0, 1 };
    cout << "Sorted array is { ";
    sortArray(arr, 12);
    cout << "} " << endl;
    return 0;
}
Output:
Sorted array is { 0 0 0 0 0 1 1 1 1 1 2 2 }
Time Complexity: O(N)
Space Complexity: O(1)
This is how we sort an array of 0s, 1s and 2s. This is identical to the Dutch National Flag problem, just using numbers instead of colors.
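The single-pass Dutch National Flag variant avoids the counting pass entirely by maintaining three pointers; a sketch in Python (the article's code is C++, this is just for compactness):

```python
def dutch_flag_sort(arr):
    """Sort a list containing only 0s, 1s and 2s in one pass, in place."""
    low, mid, high = 0, 0, len(arr) - 1
    while mid <= high:
        if arr[mid] == 0:                      # move 0s to the front
            arr[low], arr[mid] = arr[mid], arr[low]
            low += 1
            mid += 1
        elif arr[mid] == 1:                    # 1s stay in the middle
            mid += 1
        else:                                  # move 2s to the back
            arr[mid], arr[high] = arr[high], arr[mid]
            high -= 1
    return arr

print(dutch_flag_sort([0, 1, 1, 0, 1, 2, 1, 2, 0, 0, 0, 1]))
# -> [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2]
```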
Getting Started with Programming the Intel Edison
After my brief introduction to the Intel Edison it’s time to get more familar with the platform’s software aspects.
I’m going to show how you can start to develop and deploy your ideas, how you can read/write from sensors/actuators and how you can communicate with the Cloud. Giving you what you need to start tinkering and hacking IoT devices.
Installing and configuring the SDK
The first thing is to choose your preferred language for the project. To accommodate the needs of more developers, Intel made it easy to use many different programming languages and have provided several SDKs.
You can read about all the options in this article.
Intel Edison Board Installer
The latest version of the Intel Edison SDK is available through a unified installer that you can get here.
Make sure to have a recent version of Java JDK/JRE and continue the installation process.
This will install the appropriate driver for the board, update the Yocto Linux image on the Edison, and let you choose your preferred IDE. The installer is available for Windows and Mac OS; Linux users need to install the preferred IDE separately.
Getting ready to develop
Assemble the development board, setup a serial terminal and connect the Edison to WiFi.
Make a note of the board's IP address; Edison should expose itself via Zeroconf, but we all know that tech doesn't always work.
Now we can configure our IDE.
Eclipse
If you are going to develop in C++, open Eclipse and select the IoT DevKit -> Create Target connection item.
You should see your board listed; otherwise, just enter a name and the IP address noted before.
Intel XDK
Start XDK and look at the bottom panel of the screen.
Click the IoT Device drop-down menu and select Rescan for device, or enter the board IP address as shown below.
You should see a success message in the console.
Shell access
SSH is enabled on the board, so you can skip all the IDE fuss and do everything from the shell if you are more comfortable there.
Hello Edison
It’s time to say hello.
C++
In Eclipse, click IoT DevKit -> Create C++ project and select a blank template.
And choose the already defined target.
Add the following code:
#include <iostream>
using namespace std;

int main()
{
    std::cout << "Hello, Edison!\n";
    return 0;
}
Run the code by clicking the green play button. Eclipse will build the project, deploy to the board and run it. On this first run, Eclipse will ask for the board password.
You can follow progress and the application output in the console at the bottom of the screen.
Javascript/Node JS
Open XDK, click on the Projects tab, and start a new project choosing the blank IoT template.
Add the following code:
console.log("Hello, Edison!")
Use the run button on the bottom toolbar. XDK will ask if you want to upload the updated project, click yes and check the output in the bottom console.
Python
In your favorite text editor, write the following code:
print "Hello, Edison!"
save as hello.py and run it with:
python hello.py
Summary
One of the great aspects of using the Edison is that there's nothing new to learn. You can code in your current preferred language, use libraries of your choice, and do whatever you normally do on a Linux system.
The main difference is that you can run your project on a tiny device, ready to make wearable or internet things.
But we are interested in making something more interesting, taking advantage of the platform's I/O ability to make things smart.
Dealing with Sensors and Actuators
One of my favorite aspects of Edison is that even a software guy like me can deal with the hardware. Intel provides two useful libraries for this purpose, libmraa and libupm.
The first provides an abstraction of the board, so that ports and other hardware features can be accessed through abstract classes without needing to know exact model numbers and data sheet details.
It’s time to make something exciting… blink a led! (OK, not that exciting).
Thanks to libmraa it's simple:
C++
#include <iostream>
#include <unistd.h>
#include <signal.h>
#include "mraa.hpp"

static int iopin = 13;
int running = 0;

void sig_handler(int signo)
{
    if (signo == SIGINT) {
        printf("closing IO%d nicely\n", iopin);
        running = -1;
    }
}

int main(int argc, char** argv)
{
    mraa::Gpio* gpio = new mraa::Gpio(iopin); // Select the pin where the led is connected
    if (gpio == NULL) {                       // Check for errors
        return MRAA_ERROR_UNSPECIFIED;
    }

    mraa_result_t response = gpio->dir(mraa::DIR_OUT); // Set "direction" of our operation, we use it as output here
    if (response != MRAA_SUCCESS) {
        mraa::printError(response);
        return 1;
    }

    while (running == 0) {         // infinite loop just to test
        response = gpio->write(1); // set the output pin to "high" value, this will cause the led to turn on
        sleep(1);
        response = gpio->write(0); // set the output pin to "low" value, this will cause the led to turn off
        sleep(1);
    }

    delete gpio; // cleanup
    return 0;
}
Javascript
var m = require('mraa');

var gpio = new m.Gpio(13); // Select the pin where the led is connected
gpio.dir(m.DIR_OUT);       // Set "direction" of our operation, we use it as output here

var ledState = true;       // Led state

function blinkblink()      // we define a function to call periodically
{
    gpio.write(ledState ? 1 : 0); // if ledState is true then write a '1' (high, led on) otherwise write a '0' (low, led off)
    ledState = !ledState;         // invert the ledState
}

setInterval(blinkblink, 1000);    // call our blink function every second; scheduling it once, outside the function, avoids piling up timers
Python
import mraa
import time

gpio = mraa.Gpio(13)    # Select the pin where the led is connected
gpio.dir(mraa.DIR_OUT)  # Set "direction" of our operation, we use it as output here

while True:
    gpio.write(1)       # set the output pin to "high" value, this will cause the led to turn on
    time.sleep(0.2)
    gpio.write(0)       # set the output pin to "low" value, this will cause the led to turn off
    time.sleep(0.2)
Simple, isn’t it?
Now let’s see how we read values from a sensor. In this example I’ll use a temperature sensor attached to the pin Aio 0.
Usually, to retrieve the temperature value from a sensor, you read raw values and then check the sensor data sheet, understand the meaning of the raw value and process the value before using it.
Here Lib UPM comes to the rescue and we can use the class provided by the library to abstract all the low level details. I'll use JavaScript, but as you have seen before, the same can be achieved in any language.
var groveSensor = require('jsupm_grove');

var tempSensor = null;
var currentTemperature = null;
var celsius = 0;

function init()
{
    setup();
    readRoomTemperature();
}

function setup()
{
    // Create the temperature sensor object using AIO pin 0
    tempSensor = new groveSensor.GroveTemp(0);
}

function readRoomTemperature()
{
    celsius = tempSensor.value();
    console.log("Temperature: " + celsius + " degrees Celsius");
}

init();
Now we can combine the above examples and turn on a led only when a predefined temperature is reached.
var m = require('mraa');
var MAX_TEMP = 30;
var groveSensor = require('jsupm_grove');

var tempSensor = null;
var currentTemperature = null;
var gpio = null;

function init()
{
    setup();
    setInterval(checkTemperature, 1000);
}

function setup()
{
    // Create the temperature sensor object using AIO pin 0
    tempSensor = new groveSensor.GroveTemp(0);
    gpio = new m.Gpio(13); // Select the pin where the led is connected
    gpio.dir(m.DIR_OUT);   // Set "direction" of our operation, we use it as output here
}

function readRoomTemperature()
{
    var celsius = tempSensor.value();
    console.log("Temperature: " + celsius + " degrees Celsius");
    return celsius;
}

function checkTemperature()
{
    var temp = readRoomTemperature();
    if (temp > MAX_TEMP)
        gpio.write(1);
    else
        gpio.write(0);
}

init();
We can show a message on the LCD display with just a few more lines of code, using classes provided by Lib UPM.
// LibUpm requires
var groveSensor = require('jsupm_grove');
var LCD = require('jsupm_i2clcd');

var myLcd;
var tempSensor = null;
var currentTemperature = null;

function init()
{
    setup();
    setInterval(checkTemperature, 1000);
}

function setup()
{
    // Create the temperature sensor object using AIO pin 0
    tempSensor = new groveSensor.GroveTemp(0);
    myLcd = new LCD.Jhd1313m1(6, 0x3E, 0x62); // setting up the grove lcd connected with i2c
}

function readRoomTemperature()
{
    var celsius = tempSensor.value();
    console.log("Temperature: " + celsius + " degrees Celsius");
    return celsius;
}

function checkTemperature()
{
    var temp = readRoomTemperature();
    var lcdMessage = "Room temp:" + temp + " C";
    myLcd.setCursor(1, 1);
    myLcd.write(lcdMessage);
}

init();
Browse the lib UPM docs to get an idea of supported sensors and actuators and you’ll understand how many things you can use in the same, simple, way.
But IoT is about the Internet, so let’s get connected.
One of the advantages of the full Linux stack on Edison is that you can use any existing standard library to access the web; all the tools needed to consume REST APIs, XML, JSON etc. are available in a project.
Web services
In JavaScript we can use the built-in http module to make API calls. I'm going to use this to query the OpenWeatherMap API and show the current weather on the LCD.
var myLcd;
var LCD = require('jsupm_i2clcd');
var http = require('http');

// openweathermap api uri
var owmUrl = "";
// prepare the query
var owmPath = "/data/2.5/weather?unit=metric&q=";
// My lovely city name
var yourCity = "Brescia,it";

function init()
{
    setup();
    setInterval(checkWeather, 60000);
}

function setup()
{
    myLcd = new LCD.Jhd1313m1(6, 0x3E, 0x62); // setting up the grove lcd connected with i2c, the address is in the doc
}

function checkWeather()
{
    // url building
    var url = owmUrl + owmPath + yourCity;
    try {
        // api docs :
        // build the http request
        http.get(url, function(res) {
            var body = '';
            // read the response of the query
            res.on('data', function(chunk) {
                body += chunk;
            });
            res.on('end', function() {
                // now parse the json feed
                var weather = JSON.parse(body);
                // var id = weather.weather[0].id; // get the current weather code
                // show the message on the display
                var lcdMessage = weather.weather[0].description;
                myLcd.setCursor(0, 0);
                myLcd.write(lcdMessage);
            });
        }).on('error', function(e) {
            // check for errors and eventually show a message
            var lcdMessage = "Weather: ERROR";
            myLcd.setCursor(0, 0);
            myLcd.write(lcdMessage);
        });
    } catch (e) {
        var lcdMessage = "Weather: ERROR";
        myLcd.setCursor(0, 0);
        myLcd.write(lcdMessage);
    }
}

init();
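The JSON handling in the script above does not depend on the hardware, so it can be tested on its own. A minimal Python sketch of the same parsing step (the field names follow the OpenWeatherMap response used above; the payload here is made up):

```python
import json

def lcd_message(body):
    """Extract the short weather description shown on the LCD."""
    weather = json.loads(body)
    return weather["weather"][0]["description"]

# Hand-made sample payload mimicking the API response shape.
sample = json.dumps({"weather": [{"id": 800, "description": "clear sky"}]})
print(lcd_message(sample))  # -> clear sky
```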
Conclusion
These brief examples could serve as a foundation for more complex applications integrating sensors, actuators and the internet.
In the next article we are going to build a complete project that shows the possibilities enabled by this platform without a lot of code, giving anyone the ability to join the IoT hype and have fun in the process.
The concepts of attribute types and attribute syntax were mentioned briefly in the previous chapter. Attribute types and the associated syntax rules are similar to variable and data type declarations found in many programming languages. The comparison is not that big of a stretch. Attributes are used to hold values. Variables in programs perform a similar task: they store information.
When a variable is declared in a program, it is defined to be of a certain data type. This data type specifies what type of information can be stored in the variable, along with certain other rules, such as how to compare the variable's value to the data stored in another variable of the same type. For example, declaring a 16-bit integer variable in a program and then assigning it a value of 1,000,000 would make no sense (the maximum value represented by a signed 16-bit integer is 32,767). The data type of a 16-bit integer determines what data can be stored. The data type also determines how values of like type can be compared. Is 3 < 5? Yes, of course it is. How do you know? Because there exists a set of rules for comparing integers with other integers. The syntax of LDAP attribute types performs a similar function as the data type in these examples.
Unlike variables, however, LDAP attributes can be multivalued. Most procedural programming languages today enforce "store and replace" semantics of variable assignment, and so my analogy falls apart. That is, when you assign a new value to a variable, its old value is replaced. As you'll see, this isn't true for LDAP; assigning a new value to an attribute adds the value to the list of values the attribute already has. Here's the LDIF listing for the ou=devices,dc=plainjoe,dc=org entry from Figure 2-1; it demonstrates the purpose of multivalued attributes:
#
The LDIF file lists two values for the telephoneNumber attribute. In real life, it's common for an entity to be reachable via two or more phone numbers. Be aware that some attributes can contain only a single value at any given time. Whether an attribute is single- or multivalued depends on the attribute's definition in the server's schema. Examples of single-valued attributes include an entry's country (c), displayable name (displayName), or a user's Unix numeric ID (uidNumber).
An attribute type's definition lays the groundwork for answers to questions such as, "What type of values can be stored in this attribute?", "Can these two values be compared?", and, if so, "How should the comparison take place?"
Continuing with our telephoneNumber example, suppose you search the directory for the person who owns the phone number 555-5446. This may seem easy when you first think about it. However, RFC 2252 explains that a telephone number can contain characters other than digits (0-9) and a hyphen (-). A telephone number can include:
a-z
A-Z
0-9
Various punctuation characters such as commas, periods, parentheses, hyphens, colons, question marks, and spaces
555.5446 or 555 5446 are also correct matches to 555-5446. What about the area code? Should we also use it in a comparison of phone numbers?
Attribute type definitions include matching rules that tell an LDAP server how to make comparisons, which, as we've seen, isn't as easy as it seems. In Figure 2-3, taken from RFC 2256, the telephoneNumber attribute has two associated matching rules. The telephoneNumberMatch rule is used for equality comparisons. While RFC 2252 defines telephoneNumberMatch as a whitespace-insensitive comparison only, this rule is often implemented to be case-insensitive as well. The telephoneNumberSubstringsMatch rule is used for partial telephone number matches; for example, when the search criteria include wildcards, such as "555*5446".
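A rough Python model of such an equality rule: strip the characters the rule treats as insignificant, fold case, then compare. This is a simplification for illustration only (the real rule is defined by the standard; the period is included here just to mirror the example above):

```python
def telephone_number_match(a, b, insignificant=" -."):
    """Simplified telephoneNumberMatch: ignore listed chars, ignore case."""
    def normalize(value):
        return "".join(ch for ch in value.casefold() if ch not in insignificant)
    return normalize(a) == normalize(b)

print(telephone_number_match("555-5446", "555.5446"))  # -> True
print(telephone_number_match("555 5446", "555-5447"))  # -> False
```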
The SYNTAX keyword specifies the object identifier (OID) of the encoding rules used for storing and transmitting values of the attribute type. The number enclosed by curly braces ({ }) specifies the minimum recommended maximum length of the attribute's value that a server should support.
All entries in an LDAP directory must have an objectClass attribute, and this attribute must have at least one value. Multiple values for the objectClass attribute are both possible and common given certain requirements, as you shall soon see. Each objectClass value acts as a template for the data to be stored in an entry. It defines a set of attributes that must be present in the entry and a set of optional attributes that may or may not be present.
Let's go back and reexamine the LDIF representation of the ou=devices,dc=plainjoe,dc=org entry:
#
In this case, the entry's objectClass is an organizationalUnit. (The schema definition for this is illustrated by two different representations in Figure 2-5.) The listing on the right shows the actual definition of the objectClass from RFC 2256; the box on the left summarizes the required and optional attributes.
Here's how to understand an objectClass definition:
An objectClass possesses an OID, just like attribute types, encoding syntaxes, and matching rules.
The keyword MUST denotes a set of attributes that must be present in any instance of this object. In this case, "present" means "possesses at least one value."
The keyword MAY defines a set of attributes whose presence is optional in an instance of the object.
The keyword SUP specifies the parent object from which this object was derived. A derived object possesses all the attribute type requirements of its parent. Attributes can be derived from other attributes as well, inheriting the syntax of their parent as well as its matching rules, although the latter can be locally overridden by the new attribute. LDAP objects do not support multiple inheritance; they have a single parent object, like Java objects.
It is possible for two object classes to have common attribute members. Because the attribute type namespace is flat for an entire schema, the telephoneNumber attribute belonging to an organizationalUnit is the same attribute type as the telephoneNumber belonging to some other object class, such as a person (which is covered later in this chapter).
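The MUST/MAY/SUP semantics can be modelled in a few lines: an entry satisfies a class when it carries every required attribute of that class and of each ancestor up the SUP chain (a toy schema for illustration, not the real RFC 2256 definitions):

```python
# Toy schema: class -> (SUP parent or None, MUST attributes, MAY attributes)
SCHEMA = {
    "top":                (None,  {"objectClass"}, set()),
    "organizationalUnit": ("top", {"ou"},          {"telephoneNumber", "description"}),
}

def required_attributes(object_class):
    """Collect MUST attributes of a class and all of its SUP ancestors."""
    required = set()
    while object_class is not None:
        parent, must, _may = SCHEMA[object_class]
        required |= must
        object_class = parent
    return required

entry = {
    "objectClass": ["organizationalUnit"],
    "ou": ["devices"],
    "telephoneNumber": ["555-5446", "555-5447"],  # multivalued, as in the text
}
missing = required_attributes("organizationalUnit") - set(entry)
print(missing)  # -> set()
```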
NAME
Tk_SetGrid, Tk_UnsetGrid - control the grid for interactive resizing
SYNOPSIS
#include <tk.h>
Tk_SetGrid(tkwin, reqWidth, reqHeight, widthInc, heightInc)
Tk_UnsetGrid(tkwin)
ARGUMENTS
Tk_UnsetGrid cancels gridded geometry management for tkwin's toplevel window.
For each toplevel window there can be at most one internal window with gridding enabled. If Tk_SetGrid or Tk_UnsetGrid is invoked when some other window is already controlling gridding for tkwin's toplevel, the calls for the new window have no effect.
See the wm documentation for additional information on gridded geometry management.
KEYWORDS
grid, window, window manager
Yet Another Kyoto Cabinet Binding
Project Description
License
Yet Another Kyoto Cabinet Python Binding
Copyright (C) 2013 Yasunobu OKAMURA
Kyoto Cabinet Copyright (C) 2009-2011 FAL Labs
python setup.py build
python setup.py install
Basic use
import yakc
d = yakc.KyotoDB('test.kch')
d['a'] = '123'
HY-566 Semantic Web
Ontology Learning
Μπαλάφα Κασσιανή
Πλασταρά Κατερίνα
Table of contents
1. Introduction
2. Data sources for ontology learning
3. Ontology Learning Process
4. Architecture
5. Methods for learning ontologies
6. Ontology learning tools
7. Uses/applications of ontology learning
8. Conclusion
9. References
1. Introduction
1.1 Ontologies
Ontologies serve as a means for establishing a conceptually concise basis for communicating knowledge for many purposes. In recent years, we have seen a surge of interest that deals with the discovery and automatic creation of complex, multirelational knowledge structures.
Unlike knowledge bases, ontologies have “all in one”:
formal or machine readable representation
full and explicitly described vocabulary
full model of some domain
consensus knowledge: common understanding of a domain
easy to share and reuse
1.2 Ontology learning
General
The main task of ontology learning is to automatically learn complicated domain ontologies; this task is usually solved only by humans. It explores techniques for applying knowledge discovery techniques to different data sources (HTML, documents, dictionaries, free text, legacy ontologies etc.) in order to support the task of engineering and maintaining ontologies. In other words, it is the machine learning of ontologies.
Technical description
The manual building of ontologies is a tedious task, which can easily result in a knowledge acquisition bottleneck. In addition, human expert modeling by hand is biased, error prone and expensive. Fully automatic machine knowledge acquisition remains in the distant future. Most systems are semi-automatic and require human (expert) intervention and balanced cooperative modeling for constructing ontologies.
Semantic Integration
The conceptual structures that define an underlying ontology provide the key to machine-processable data on the Semantic Web. Ontologies serve as metadata schemas, providing a controlled vocabulary of concepts, each with explicitly defined and machine-processable semantics. Hence, the Semantic Web’s success and proliferation depends on quickly and cheaply constructing domain-specific ontologies. Although ontology-engineering tools have matured over the last decade, manual ontology acquisition remains a tedious, cumbersome task that can easily result in a knowledge acquisition bottleneck. Intelligent support tools for an ontology engineer take on a different meaning than the integration architectures for more conventional knowledge acquisition.
In the figures below we can see how ontology learning is involved in semantic integration.
Semantic Information Integration
Ontology Alignment and Transformations
Ontology Engineering
2. Data sources for ontology learning
2.1 Natural languages
Natural language texts exhibit morphological, syntactic, semantic, pragmatic and conceptual constraints that interact in order to convey a particular meaning to the reader. Thus, the text transports information to the reader, and the reader embeds this information into his background knowledge. Through the understanding of the text, data is associated with conceptual structures, and new conceptual structures are learned from the interacting constraints given through language. Tools that learn ontologies from natural language exploit the interacting constraints on the various language levels (from morphology to pragmatics and background knowledge) in order to discover new concepts and stipulate relationships between concepts.
2.1.2 Example
An example of extracting semantic information from natural text in the form of an ontology is a methodology developed at the University of Leipzig in Germany. This approach focuses on applying statistical analysis of large corpora to the problem of extracting semantic relations from unstructured text. It is a viable method for generating input for the construction of ontologies, as ontologies use well-defined semantic relations as building blocks. The method's purpose is to create classes of terms (collocation sets) and to postprocess these statistically generated collocation sets in order to extract named relations. In addition, for different types of relations, like cohyponyms or instance-of relations, different extraction methods as well as additional sources of information can be applied to the basic collocation sets in order to verify the existence of a specific type of semantic relation for a given set of terms.
The first step of this approach is to collect large amounts of unstructured text, which will be processed in the following steps. The next step is to create the collocation sets, i.e. the classes of similar terms. The occurrence of two or more words within a well-defined unit of information (sentence, document) is called a collocation. For the selection of meaningful and significant collocations, an adequate collocation measure is defined based on probabilistic similarity metrics. To calculate the collocation measure for all reasonable pairs of terms, the joint occurrences of each pair are counted. This problem is complex both in time and storage. Nevertheless, the collocation measure is calculated for any pair whose components each have a total frequency of at least 3. This approach is based on extensible ternary search trees, where a count can be associated with a pair of word numbers. The memory overhead of the original implementation could be reduced by allocating space in chunks of 100,000 nodes at once. Even when using this technique on a large-memory computer, more than one run through the corpus may be necessary, taking care that every pair is only counted once. The resulting word pairs above a threshold of significance are put into a database where they can be accessed and grouped in many different ways.
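The text leaves the exact collocation measure open; the sketch below uses pointwise mutual information over sentence-level co-occurrence as one plausible probabilistic measure, together with the frequency threshold of 3 mentioned above. The corpus is toy data.

```python
from collections import Counter
from itertools import combinations
from math import log2

def collocation_sets(sentences, min_freq=3, threshold=0.0):
    """Count joint occurrences per sentence and keep significant pairs.

    Pointwise mutual information (PMI) stands in for the unspecified
    probabilistic collocation measure; pairs are kept only if each
    component has total frequency >= min_freq, as in the text.
    """
    word_freq = Counter()
    pair_freq = Counter()
    for sent in sentences:
        words = set(sent)                 # count each word once per unit
        word_freq.update(words)
        pair_freq.update(combinations(sorted(words), 2))

    n = len(sentences)
    significant = {}
    for (a, b), joint in pair_freq.items():
        if word_freq[a] < min_freq or word_freq[b] < min_freq:
            continue                      # frequency threshold
        pmi = log2((joint / n) / ((word_freq[a] / n) * (word_freq[b] / n)))
        if pmi >= threshold:
            significant[(a, b)] = pmi
    return significant

sentences = [
    ["memory", "address", "disk"],
    ["memory", "address", "space"],
    ["address", "space", "memory"],
    ["real", "estate", "space"],
    ["memory", "address", "bus"],
]
pairs = collocation_sets(sentences)
```

On this toy corpus only the frequent pair (address, memory) survives both the frequency and significance thresholds; pairs such as (memory, space) co-occur too rarely relative to their individual frequencies.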
Furthermore, besides the textual output of collocation sets, visualizing them as graphs is an additional type of representation. The procedure followed is: a word is chosen and its collocates are arranged in the plane so that collocations between collocates are taken into account. This results in graphs that show homogeneity where words are interconnected, and separation where collocates have little in common. Polysemy is made visible (see figure below). Line thickness represents the significance of the collocation. All words in the graph are linked to the central word; the rest of the picture is automatically computed, but represents semantic connectedness as well.
The relations between the words are only presented, not yet named. The figure shows the collocation graph for space. Three different meaning contexts can be recognized in the graph:
o real estate,
o computer hardware, and
o astronautics.
The connection between address and memory results from the fact that address is another polysemous concept.
Collocation graph for space
The final step is to identify the relations between terms or collocation sets. The collocation sets are searched, and some semantic relations appear more often than others. The following basic types of relations can be identified:
o cohyponymy,
o top-level syntactic relations, which translate to semantic 'actor-verb' relations and often-used properties of a noun,
o instance-of,
o special relations given by multiwords (A prep/det/conj B), and
o unstructured sets of words describing some subject area.
These types of relations may be classified according to the properties of symmetry, anti-symmetry, and transitivity. Additional relations between collocation sets can be identified with the user's contribution, such as:
o Pattern-based extraction (user defined), e.g. (profession) ? (last name) implies that ? is in fact a
o Compound nouns. A semantic relation between the parts of a compound word can be found in most cases.
Term properties are derived in similar ways.
A combination of the results of each of the steps described above forms the ontology of the terms included in the original text. The output of this approach may be used for the automatic generation of semantic relations between terms in order to fill and expand ontology hierarchies.
2.2 Ontology Learning from Semi-structured Data
With the success of new standards for document publishing on the web, there will be a proliferation of semi-structured data and formal descriptions of semi-structured data freely and widely available. HTML data, XML data, XML Document Type Definitions (DTDs), XML Schemata, and their likes add more or less expressive semantic information to documents. A number of approaches understand ontologies as a common generalizing level that may communicate between the various data types and data descriptions. Ontologies play a major role in allowing semantic access to these vast resources of semi-structured data. Though only few approaches exist yet, we believe that learning ontologies from these data and data descriptions may considerably leverage the application of ontologies and, thus, facilitate access to these data.
2.2.1 Example
An example of learning ontologies from both unstructured and semi-structured text is the DODDLE system. This approach, implemented at Shizuoka University in Japan, describes how to construct domain ontologies with taxonomic and non-taxonomic conceptual relationships by exploiting a machine readable dictionary and domain-specific texts. The taxonomic relationships come from WordNet (an online lexical database for the English language) in interaction with a domain expert, using the following two strategies: match result analysis and trimmed result analysis. The non-taxonomic relationships come from domain-specific texts with the analysis of lexical co-occurrence statistics.
The DODDLE (Domain Ontology Rapid Development Environment) system consists of two components: the taxonomic relationship acquisition module using WordNet and the non-taxonomic relationship learning module using domain-specific texts. An overview of the system and its components is depicted in Figure 1.
Figure 1: DODDLE overview
Taxonomic relationship acquisition module:
The taxonomic relationship acquisition module performs spell matching between the input domain terms and WordNet. The spell match links these terms to WordNet. Thus the initial model resulting from the spell match is a hierarchically structured set of all the nodes on the paths from these terms to the root of WordNet. However, the initial model contains unnecessary internal terms (nodes) that do not contribute to keeping the topological relationships among matched nodes, such as parent-child and sibling relationships. These unnecessary internal nodes can therefore be trimmed from the initial model to produce a trimmed model, as shown in the Figure 2 process.
Figure 2: Trimming process
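DODDLE's concrete data structures are not given; assuming a simple child-list representation of the initial model, the trimming idea (keep matched nodes and the branching nodes that preserve their topology, splice out other internal nodes) can be sketched as:

```python
def trim(tree, root, matched):
    """Collapse an initial model into a trimmed model: keep matched nodes
    and the branching nodes needed to preserve parent-child and sibling
    topology; splice out unmatched internal nodes on single-child chains."""
    trimmed = {}

    def walk(node):
        kept = []
        for child in tree.get(node, []):
            kept.extend(walk(child))
        if node in matched or len(kept) > 1 or node == root:
            trimmed[node] = kept      # this node survives with its children
            return [node]
        return kept                   # unnecessary internal node: splice out

    walk(root)
    return trimmed

# A hypothetical WordNet-like initial model; the leaves are the
# best spell-matched input terms.
tree = {
    "entity": ["abstraction", "object"],
    "abstraction": ["attribute"],
    "attribute": ["shape"],
    "shape": ["triangle"],
    "object": ["animal", "plant"],
}
model = trim(tree, "entity", {"triangle", "animal", "plant"})
```

Here the chain abstraction, attribute, shape collapses away because none of its nodes are matched and none branch, while object survives as the parent keeping animal and plant siblings.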
In order to refine the trimmed model, two strategies are applied in interaction with a user: match result analysis and trimmed result analysis.
o Match result analysis:
Looking at the trimmed model, it turns out that it is divided into PABs (a PAth including only Best spell-matched nodes) and STMs (a Sub-Tree that includes best spell-matched nodes and other nodes and so should be Moved), based on the distribution of best-matched nodes. On one hand, a PAB is a path that includes only best-matched nodes, which have sense for a given domain specificity. Because all nodes in a PAB have already been adjusted to the domain, PABs can stay where they are in the trimmed model. On the other hand, an STM is a sub-tree whose root is an internal node and whose subordinates are only best-matched nodes. Because internal nodes have not been confirmed to have sense for the given domain, an STM can be moved within the trimmed model. Thus DODDLE identifies PABs and STMs in the trimmed model automatically and then supports the user in constructing a conceptual hierarchy by moving STMs. Figure 3 illustrates the above-mentioned match result analysis.
Figure 3: Match Result Analysis
o Trimmed result analysis:
In order to refine the trimmed model, DODDLE uses trimmed result analysis as well as match result analysis. Taking some sibling nodes with the same parent node, there may be large differences in the number of trimmed nodes between them and the parent node. When such a big difference comes up in a sub-tree of the trimmed model, it may be better to change the structure of that sub-tree. The system asks the user whether the sub-tree should be reconstructed or not. Figure 4 illustrates the above-mentioned trimmed result analysis.
Figure 4: Trimmed Result Analysis
Finally, DODDLE II completes the taxonomic relationships of the input domain terms with additional hand-made modifications from the user.
Non-taxonomic relationship learning module
Non-taxonomic relationship learning largely builds on WordSpace, which derives lexical co-occurrence information from a large text corpus and is a multi-dimensional vector space (a set of vectors). The inner product between two word vectors works as the measure of their semantic relatedness. When two words' inner product is beyond some upper bound, they are candidates for some non-taxonomic relationship between them. WordSpace is constructed as shown in Figure 5.
Figure 5: Construction Flow of WordSpace
The main steps of the WordSpace construction process are: extraction of high-frequency 4-grams, construction of the collocation matrix, construction of context vectors, construction of word vectors, and construction of vector representations of all concepts.
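A minimal sketch of the WordSpace idea, substituting a plain co-occurrence window for the 4-gram collocation matrix; the corpus and window size are illustrative assumptions, and a normalized inner product serves as the relatedness measure described above:

```python
from collections import Counter
from math import sqrt

def word_vectors(sentences, window=2):
    """Build co-occurrence word vectors, a simplified stand-in for the
    4-gram collocation matrix described in the text."""
    vecs = {}
    for sent in sentences:
        for i, w in enumerate(sent):
            # Neighboring words within the window form the context.
            ctx = sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]
            vecs.setdefault(w, Counter()).update(ctx)
    return vecs

def relatedness(vecs, a, b):
    """Normalized inner product between two word vectors."""
    va, vb = vecs[a], vecs[b]
    dot = sum(va[k] * vb[k] for k in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy corpus: related words share contexts, unrelated ones do not.
sentences = [
    ["doctor", "treats", "patient"],
    ["nurse", "treats", "patient"],
    ["car", "has", "wheels"],
]
vecs = word_vectors(sentences)
```

Word pairs whose relatedness exceeds a chosen upper bound, such as doctor and nurse here, become candidates for a non-taxonomic relationship.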
After these two main, parallel modules have concluded, all the resulting concepts are compared for similarity. The user defines a certain threshold for this similarity, and a concept pair with similarity beyond it is extracted as a similar concept pair. A set of the similar concept pairs becomes a concept specification template. Both kinds of concept pairs, those whose meaning is similar (with a taxonomic relation) and those which have something relevant to each other (with a non-taxonomic relation), are extracted together as concept pairs with context similarity. However, by using taxonomic information from the TRA module together with co-occurrence information, DODDLE distinguishes the concept pairs which are hierarchically closer to each other than the other pairs as TAXONOMY. A user constructs a domain ontology by considering the relation of each concept pair in the concept specification templates and by deleting unnecessary concept pairs. Figure 6 shows the ontology editor (left window) and the concept graph editor (right window).
Figure 6: The ontology editor
In order to evaluate how DODDLE performs in practical fields, case studies have been done on a particular law, the Contracts for the International Sale of Goods (CISG). Although this case study was small scale, the results were encouraging.
2.3 Ontology Learning from Structured Data
Ontologies have been firmly established as a means for mediating between different databases. Nevertheless, the manual creation of a mediating ontology is again a tedious, often extremely difficult task that may be facilitated through learning methods. The negotiation of a common ontology from a set of data and the evolution of ontologies through the observation of data is a hot topic these days. The same applies to the learning of ontologies from metadata, such as database schemata, in order to derive a common high-level abstraction of underlying data descriptions, an important precondition for data warehousing or intelligent information agents.
3. Ontology Learning Process
A general framework of the ontology learning process is shown in the figure below.
The ontology learning process
The basic steps in the engineering cycle are:
o Merging existing structures or defining mapping rules between these structures allows importing and reusing existing ontologies. (For instance, Cyc's ontological structures have been used to construct a domain-specific ontology.)
o Ontology extraction models major parts of the target ontology, with learning support fed from Web documents.
o The target ontology's rough outline, which results from import, reuse, and extraction, is pruned to better fit the ontology to its primary purpose.
o Ontology refinement profits from the pruned ontology but completes the ontology at a fine granularity (in contrast to extraction).
o The target application serves as a measure for validating the resulting ontology.
Finally, the ontology engineer can begin this cycle again, for example, to include new domains in the constructed ontology or to maintain and update its scope.
3.1 Ontology learning process example
A variation of the ontology learning process described in the previous section was implemented in a user-centered system for ontology construction called Adaptiva, developed at the University of Sheffield (UK). In this approach, the user selects a corpus of texts and sketches a preliminary ontology (or selects an existing one) for a domain, with a preliminary vocabulary associated to the elements in the ontology (lexicalisations). Examples of sentences involving such lexicalisations (e.g. the ISA relation) in the corpus are automatically retrieved by the system. Retrieved examples are then validated by the user and used by an adaptive Information Extraction system to generate patterns that discover other lexicalisations of the same objects in the ontology, possibly identifying new concepts or relations. New instances are added to the existing ontology or used to tune it. This process is repeated until a satisfactory ontology is obtained.
Each of the above mentioned stages consists of three steps: bootstrapping, pattern learning and user validation, and cleanup.
o Bootstrapping. The bootstrapping process involves the user specifying a corpus of texts and a seed ontology. The draft ontology must be associated with a small thesaurus of words, i.e. the user must indicate at least one term that lexicalises each concept in the hierarchy.
o Pattern Learning & User Validation. Words in the thesaurus are used by the system to retrieve a first set of examples of the lexicalisation of the relations among concepts in the corpus. These are then presented to the user for validation. The learner then uses the positive examples to induce generic patterns able to discriminate between them and the negative ones. Patterns are generalised in order to find new (positive) examples of the same relation in the corpus. These are presented to the user for validation, and user feedback is used to refine the patterns or to derive additional ones. The process terminates when the user feels that the system has learned to spot the target relations correctly. The final patterns are then applied to the whole corpus and the ontology is presented to the user for cleanup.
o Cleanup. This step helps the user make the ontology developed by the system coherent. First, users can visualize the results and edit the ontologies directly. They may want to collapse nodes, establish that two nodes are not separate concepts but synonyms, split nodes, or change the hierarchical positioning of nodes with respect to each other. Also, the user may wish to 1) add further relations to a specific node; 2) ask the learner to find all relations between two given nodes; 3) refine/label relations discovered between given nodes. Corrections are returned to the IE system for retraining.
This methodology focuses the expensive user activity on sketching the initial ontology and validating textual examples and the final ontology, while the system performs the tedious activity of searching a large corpus for knowledge discovery. Moreover, the output of the process is not only an ontology, but also a system trained to rebuild and eventually retune the ontology, as the learner adapts by means of user feedback. This simplifies ontology maintenance, a major problem in ontology-based methodologies.
4. Architecture
The general architecture of the ontology learning process is shown in the following figure.
Ontology learning architecture for the Semantic Web
The ontology engineer interacts only via the graphical interfaces, which comprise two of the four components: the Ontology Engineering Workbench and the Management Component. Resource Processing and the Algorithm Library are the architecture's remaining components. These components are described below.
Ontology Engineering Workbench
This component is a sophisticated means for manual modeling and refining of the final ontology. The ontology engineer can browse the results of the ontology learning process and decide to follow, delete or modify the proposals as the task requires.
Management component graphical user interface
The ontology engineer uses the management component to select input data, that is, relevant resources such as HTML and XML documents, DTDs, databases, or existing ontologies that the discovery process can further exploit. Then, using the management component, the engineer chooses from a set of resource-processing methods available in the resource-processing component and from a set of algorithms available in the algorithm library. The management component also supports the engineer in discovering task-relevant legacy data; for example, an ontology-based crawler gathers HTML documents that are relevant to a given core ontology.
Resource processing
Depending on the available input data, the engineer can choose various strategies for resource processing:
o Index and reduce HTML documents to free text.
o Transform semistructured documents, such as dictionaries, into a predefined relational structure.
o Handle semistructured and structured schema data (such as DTDs, structured database schemata, and existing ontologies) by following different strategies for import, as described later in this article.
o Process free natural text.
After first preprocessing data according to one of these or similar strategies, the resource-processing module transforms the data into an algorithm-specific relational representation.
Algorithm library
An ontology can be described by a number of sets of concepts, relations, lexical entries, and links between these entities. An existing ontology definition can be acquired using various algorithms that work on this definition and the preprocessed input data. Although specific algorithms can vary greatly from one type of input to the next, a considerable overlap exists for underlying learning approaches such as association rules, formal concept analysis, or clustering. Hence, algorithms can be reused from the library for acquiring different parts of the ontology definition.
5. Methods for learning ontologies
Some methodologies used in the ontology learning process are described in
the following sections.
5.1 Association Rules
A basic method used in many ontology learning systems is the use of association rules for ontology extraction. Association-rule-learning algorithms are used in prototypical data mining applications to find associations that occur between items, in order to construct ontologies (extraction stage). 'Classes' are expressed by the expert as a free-text conclusion to a rule. Relations between these 'classes' may be discovered from existing knowledge bases, and a model of the classes (an ontology) is constructed based on user-selected patterns in the class relations. This approach is useful for solving classification problems by creating classification taxonomies (ontologies) from rules.
A classification knowledge-based system using this method, with experimental results based on medical data, was implemented at the University of New South Wales in Australia. In this approach, Ripple Down Rules (RDR) were used to describe classes and their attributes. The form of RDR rules is shown in the following figure, which represents some rules for the class Satisfactory lipid profile previous raised LDL noted. In the first rule there is a condition Max(LDL) > 3.4, and in the second rule there is a condition Max(LDL) is HIGH, where HIGH is a range between two real numbers.
An example of a class which is a disjunction of two rules
The conclusions of the rules form the classes of the classification ontology. The expert using this methodology is allowed to specify the correct conclusion and identify the attributes and values that justify this conclusion in case the system makes an error.
The method applied in this approach includes three basic steps:
o The first step is to discover class relations between rules. In this stage, three basic relations are taken into account:
1. Subsumption/intersection: a class A subsumes/intersects with a class B if class A always occurs when class B occurs, but not the other way around.
2. Mutual exclusivity: two classes are mutually exclusive if they never occur together.
3. Similarity: two classes are similar if they have similar conditions in the rules they come from.
Based on these relations, the first classes of rule conclusions are formed.
o The second step is to specify some compound relations which appear interesting, using the three basic relations. This step is performed in interaction with the expert.
o The final step is to extract instances of these compound relations or patterns and assemble them into a class model (ontology).
The key idea in this technique is that it seems reasonable to use heuristic quantitative measures to group classes and class relations. This then enables possible ontologies to be explored on a reasonable scale.
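Assuming each class is represented by the set of cases its rules fire on (a hypothetical representation; the system's internals are not given), the first step's basic relations can be sketched as:

```python
def class_relations(cases):
    """Discover the basic relations between classes from the sets of
    cases each class's rules fire on."""
    relations = []
    names = sorted(cases)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sa, sb = cases[a], cases[b]
            if not (sa & sb):
                # The classes never occur together.
                relations.append((a, "mutually-exclusive", b))
            elif sb < sa:
                # A occurs whenever B does, but not the other way around.
                relations.append((a, "subsumes", b))
            elif sa < sb:
                relations.append((b, "subsumes", a))
            else:
                relations.append((a, "intersects", b))
    return relations

# Toy data: classes mapped to the case IDs their rules fire on.
cases = {
    "raised_LDL": {1, 2, 3, 4},
    "satisfactory_profile": {5, 6},
    "very_high_LDL": {3, 4},
}
rels = class_relations(cases)
```

From these pairwise relations, the first groupings of rule conclusions can be formed before the expert inspects compound relations.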
5.2 Clustering
Learning semantic classes
In the context of learning semantic classes, learning from syntactic contexts exploits syntactic relations among words to derive semantic relations, following Harris' hypothesis. According to this hypothesis, the study of syntactic regularities within a specialized corpus permits identifying syntactic schemata made out of combinations of word classes reflecting specific domain knowledge. Using specialized corpora eases the learning task, given that we have to deal with a limited vocabulary with reduced polysemy and limited syntactic variability.
In syntactic approaches, learning results can be of different types, depending on the method employed. They can be distances that reflect the degree of similarity among terms, distance-based term classes elaborated with the help of nearest-neighbor methods, degrees of membership in term classes, class hierarchies formed by conceptual clustering, or predicative schemata that use concepts to constrain selection. The notion of distance is fundamental in all cases, as it allows calculating the degree of proximity between two objects (terms in this case) as a function of the degree of similarity between the syntactic contexts in which they appear. Classes built by aggregation of near terms can afterwards be used for different applications, such as syntactic disambiguation or document retrieval. Distances are however calculated using the same similarity notion in all cases, and our model relies on these studies regardless of the application task.
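A minimal sketch of such a syntactic-context distance, using Jaccard dissimilarity over (verb, syntactic-role) contexts as one possible instantiation of the similarity notion; both the data and the metric are illustrative assumptions:

```python
def context_distance(contexts_a, contexts_b):
    """Distance between two terms as the dissimilarity of their syntactic
    contexts (Jaccard distance; the text does not fix a specific metric)."""
    union = contexts_a | contexts_b
    if not union:
        return 1.0
    return 1.0 - len(contexts_a & contexts_b) / len(union)

# Each term is described by the (verb, syntactic-role) contexts in which
# it appears in a hypothetical specialized corpus.
contexts = {
    "wine":  {("drink", "object"), ("pour", "object"), ("age", "subject")},
    "beer":  {("drink", "object"), ("pour", "object"), ("brew", "object")},
    "table": {("set", "object"), ("polish", "object")},
}
d_wine_beer = context_distance(contexts["wine"], contexts["beer"])
d_wine_table = context_distance(contexts["wine"], contexts["table"])
```

Terms that share syntactic contexts, like wine and beer here, end up close and can be aggregated into a candidate semantic class, while terms with disjoint contexts stay apart.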
Conceptual clustering
Ontologies are organized as multiple hierarchies that form an acyclic graph where nodes are term categories described by intension, and links represent inclusion. Learning through hierarchical classification of a set of objects can be performed in two main ways: top-down, by incremental specialization of classes, and bottom-up, by incremental generalization. The bottom-up approach is preferable due to its smaller algorithmic complexity and its understandability to the user in view of an interactive validation task.
The Mo'K workbench
A workbench that supports the development of conceptual clustering methods for the (semi-)automatic construction of ontologies of a conceptual-hierarchy type from parsed corpora is the Mo'K workbench. The learning model it proposes takes parsed corpora as input. No additional (terminological or semantic) knowledge is used for labeling the input, guiding learning, or validating the learning results. Preliminary experiments showed that the quality of learning decreases with the generality of the corpus. This makes the use of general ontologies for guiding such learning somewhat unrealistic, as they seem too incomplete and polysemic to allow for efficient learning in specific domains.
5.3 Ontology Learning with Information Extraction Rules
The figure below illustrates the overall idea of building ontologies with learned information extraction rules. We start with:
1. An initial, hand-crafted seed ontology of reasonable quality which already contains the relevant types of relationships between ontology concepts in the given domain.
2. An initial set of documents which exemplarily represent (informally) substantial parts of the knowledge represented formally in the seed ontology.
We then take pairs of (ontological statement, one or more textual representations) as positive examples of the way specific ontological statements can be reflected in texts. There are two possibilities to extract such examples:
o Based on the seed ontology, the system looks up the signature of a certain relation R, searches for all occurrences of instances of the concept classes Disease and Cure, respectively, within a certain maximum distance, and regards these co-occurrences as positive examples for relationship R. This approach presupposes that the seed documents have some "definitional" character, like domain-specific lexica or textbooks.
o The user goes through the seed documents with a marker and manually highlights all interesting passages as instances of some relationship. This approach is more work-intensive, but promises faster learning and more precise results. We employed this approach already successfully in an industrial information extraction project.
We then employ a pattern learning algorithm to automatically construct information extraction rules which abstract from the specific examples, thus creating general statements about which text patterns are evidence for a certain ontological relationship. In order to learn such information extraction rules, we need some prerequisites:
(a) A sufficiently detailed representation of documents (in particular, including word positions, which is not usual in conventional, vector-based learning algorithms, WordNet synsets, and part-of-speech tagging).
(b) A sufficiently powerful representation formalism for extraction patterns.
(c) A learning algorithm which has direct access to background knowledge sources, like the already available seed ontology containing statements about known concept instances, or like the WordNet database of lexical knowledge linking words to their synonym sets, giving access to sub- and superclasses of synonym sets, etc.
Finally, we apply these learned information extraction rules to other, new text documents to discover new or not yet formalized instances of relationship R in the given application domain.
Compared to other ontology learning approaches, this technique is not restricted to learning taxonomy relationships, but can learn arbitrary relationships in an application domain.
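The first possibility above, mining co-occurrences of Disease and Cure instances within a maximum token distance as positive examples, might be sketched as follows (the instance sets, sample text, and distance are illustrative assumptions):

```python
import re

def seed_examples(text, diseases, cures, max_dist=4):
    """Regard co-occurrences of Disease and Cure instances within
    max_dist tokens as positive examples for the relation; the instance
    sets are assumed to come from the seed ontology."""
    tokens = re.findall(r"\w+", text.lower())
    examples = set()
    for i, tok in enumerate(tokens):
        if tok not in diseases:
            continue
        for j, other in enumerate(tokens):
            if other in cures and abs(i - j) <= max_dist:
                examples.add((tok, other))
    return examples

text = ("Aspirin relieves headache quickly and reliably in most adult "
        "patients. Penicillin is used against pneumonia.")
found = seed_examples(text, {"headache", "pneumonia"},
                      {"aspirin", "penicillin"})
```

The distance threshold is what makes the "definitional" assumption matter: in encyclopedic text the relevant disease and cure tend to appear close together, while in free text this heuristic would produce noisier examples.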
A project that uses this technique is the FRODO ("A Framework for Distributed Organizational Memories") project, which is about methods and tools for building and maintaining distributed Organizational Memories in a real-world enterprise environment. It is funded by the German National Ministry for Research and Education and started with five scientific researchers in January 2000.
6. Ontology learning tools
6.1 TEXT-TO-ONTO
TEXT-TO-ONTO performs semi-automatic ontology learning from text and tries to overcome the knowledge acquisition bottleneck. It is based on a general architecture for discovering conceptual structures and engineering ontologies from text.
Architecture
The process of semi-automatic ontology learning from text is embedded in an architecture that comprises several core components arranged as a kind of pipeline. The main components of the architecture are:
Text & Processing Management Component
The ontology engineer uses this component to select the domain texts exploited in the further discovery process. The engineer can choose among a set of text (pre-)processing methods available on the Text Processing Server and among a set of algorithms available in the Learning & Discovering component. The former module returns text annotated with XML, and this XML-tagged text is fed to the Learning & Discovering component.
Text Processing Server
It contains a shallow text processor based on the core system SMES (Saarbrücken Message Extraction System). SMES is a system that performs syntactic analysis on natural language documents. It is organized into modules, such as a tokenizer, morphological and lexical processing, and chunk parsing, which use lexical resources to produce mixed syntactic/semantic information. The results of text processing are stored as annotations in XML-tagged text.
Lexical DB & Domain Lexicon
SMES accesses a lexical database with more than 120,000 stem entries and more than 12,000 subcategorization frames that are used for lexical analysis and chunk parsing. The domain-specific part of the lexicon associates word stems with concepts available in the concept taxonomy and links syntactic information with semantic knowledge that may be further refined in the ontology.
Learning & Discovering component
It uses various discovery methods on the annotated texts, e.g. term extraction methods for concept acquisition.
Ontology Engineering Environment - ONTOEDIT
It supports the ontology engineer in semi-automatically adding newly discovered conceptual structures to the ontology. Internally, it stores modeled ontologies using an XML serialization.
6.2 ASIUM
ASIUM overview
Asium is an acronym for "Acquisition of Semantic knowledge Using Machine
learning method". The main aim of Asium is to help the expert in the acquisition
of semantic knowledge from texts and to generalize the knowledge of the corpus.
It also provides the expert with a user interface which includes tools and
functionality for exploring the texts and then learning knowledge which is not in
the texts.
During the learning step, Asium helps the expert to acquire semantic knowledge
from the texts, such as subcategorization frames and an ontology. The ontology
represents an acyclic graph of the concepts of the studied domain. The
subcategorization frames represent the use of the verbs in these texts. For
example, starting from cooking recipe texts, Asium should learn an ontology with
concepts such as "Recipients", "Vegetables" and "Meat". It can also learn, in
parallel, the subcategorization frame of the verb "to cook", which can be:
to cook:
Object: Vegetable or Meat
in: Recipients
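For illustration, such a frame can be represented as a mapping from argument slots to the sets of concepts that may fill them (the slot and concept names are taken from the example above; the representation itself is an assumption, not ASIUM's internal format):

```python
# Illustrative representation of the subcategorization frame of "to cook".
cook_frame = {
    "verb": "to cook",
    "Object": {"Vegetable", "Meat"},   # concepts allowed as direct object
    "in": {"Recipients"},              # concepts allowed after "in"
}

def slot_allows(frame, slot, concept):
    """Check whether a concept may fill a given slot of the frame."""
    return concept in frame.get(slot, set())
```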
Methodology
The overall methodology that is implemented by ASIUM is depicted in the
following figure. The input for Asium is syntactically parsed texts from a specific
domain. It then extracts these triplets: verb, preposition/function (if there is no
preposition), lemmatized head noun of the complement. Next, using factorization,
Asium will group together all the head nouns occurring with the same couple
verb, preposition/function. These lists of nouns are called basic clusters. They
are linked with the couples verb, preposition/function they are coming from.
Asium then computes the similarity among all the basic clusters. The nearest
ones will be aggregated and this aggregation is suggested to the expert for
creating a new concept. The expert defines a minimum threshold for gathering
classes into concepts. The distance computation alone is not enough to learn
the concepts of a domain. The help of the expert is necessary because any
learned concept can contain noise (mistakes in the parsing, for example), some
sub-concepts are not identified, or over-generalization occurs due to
aggregations. Similarity is computed between all basic clusters, and next the
expert validates the list of classes learned by Asium. After this, Asium will have
learned the first level of the ontology. Similarity is computed again, but among
all the clusters, both the old and the new ones, in order to learn the next level
of the ontology.
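The factorization and aggregation steps above can be sketched as follows. The triplets are toy data, and the Jaccard overlap stands in for ASIUM's actual distance measure, which is not reproduced here:

```python
from collections import defaultdict

# Triplets extracted from parsed text: (verb, preposition_or_function, head_noun).
triplets = [
    ("cook", "Object", "carrot"),
    ("cook", "Object", "bean"),
    ("cut", "Object", "carrot"),
    ("cut", "Object", "bean"),
    ("cut", "Object", "meat"),
]

# Factorization: group head nouns by their (verb, preposition/function) couple.
basic_clusters = defaultdict(set)
for verb, prep, noun in triplets:
    basic_clusters[(verb, prep)].add(noun)

def overlap(a, b):
    """Toy similarity: fraction of shared nouns (Jaccard index)."""
    return len(a & b) / len(a | b)

sim = overlap(basic_clusters[("cook", "Object")],
              basic_clusters[("cut", "Object")])
# If sim exceeds the expert-defined threshold, the two clusters are
# suggested for aggregation into a new concept.
```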
The advantages of this method are twofold:
First, the similarity measure identifies all concepts of the domain and the
expert can validate or split them. Next, the learning process is, in part,
based on these new concepts and suggests more relevant and more general
concepts.
Second, the similarity measure will offer the expert aggregations between
already validated concepts and new basic clusters in order to get more
knowledge from the corpus.
The cooperative process runs until there are no more possible aggregations.
The outputs of the process are the subcategorization frames and the ontology
schema.
The ASIUM methodology
SYLEX
The preprocessing of the free text is performed by Sylex. Sylex, the syntactic
parser of the French company Ingénia, is used in order to parse source texts in
French or English. This parser is a tool-box of about 700 functions which have
to be used in order to produce results.
In ASIUM, the attachments between head nouns of complements and verbs
and the bounds are retrieved from the full syntactic parsing performed by
Sylex.
The file format that Asium uses to understand the parsing is the following:
----
(Sentence of the original text)
Verbe:
(the verb)
kind of complement (Sujet (Subject), COD (Object), COI (Indirect object),
CC (position, manière, but, provenance, direction) (adjunct of position,
manner, aim, provenance, direction)):
(head noun of the complement)
Bornes_mot:
(bounds of the noun in the sentence)
Bornes_GN:
(bounds of the noun phrase in the sentence)
Prep: (optional)
(the preposition)
The resulting parsed text is then provided to ASIUM for further elaboration.
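A sketch of a reader for this format follows. It is simplified to extract only the verb and a direct-object head noun; the sample record is invented, and only the field labels follow the listing above:

```python
# Simplified parser for the Asium input format shown above.
# Only "Verbe:" and the "COD:" head noun are extracted here.
sample = """----
(Pour the sauce in the pan)
Verbe:
pour
COD:
sauce
Bornes_mot:
2 2
"""

def parse_record(text):
    """Return (verb, head_noun) from one record; missing fields are None."""
    lines = [l.strip() for l in text.splitlines()]
    verb = noun = None
    for i, line in enumerate(lines):
        if line == "Verbe:" and i + 1 < len(lines):
            verb = lines[i + 1]
        elif line == "COD:" and i + 1 < len(lines):
            noun = lines[i + 1]
    return (verb, noun)
```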
The user interface
The user interface of ASIUM allows the user to manipulate and view the
ontology in every stage of the learning process. The following figures show
some of the basic windows of the interface.
This window allows the expert to validate the concepts learned by Asium.
This window displays the list of all the examples covered by the learned
concept.
This window displays the ontology as it actually is in memory: i.e. learned
concepts and concepts to be proposed for this level. Each blue circle
represents a class. It can be labeled or not.
This window allows the expert to split a class into several sub-concepts.
The left list represents the list of nouns the expert wants to split into
sub-concepts. The right list contains all the nouns for one sub-concept.
Uses of ASIUM
The kind of semantic knowledge that is produced by ASIUM can be very useful
in a lot of applications. Some of them are mentioned below:
Information Retrieval:
Verb subcategorization frames can be used in order to tag texts. The
major part of the nouns occurring in the texts will then be tagged by their
concept. The search for the right text will be based on a query using
domain concepts instead of words. For example, if the user is interested in
movie stars, he would not search for the noun "star" but for the concept
"Movie_stars", which is really distinct from "Space_Stars".
Information Extraction:
Such subcategorization frames together with an ontology allow the expert
to write "semantic" extraction rules.
Text indexing:
After the learning of the ontology for one domain, the texts should be
enriched by the concepts. The ontology can then be used for indexing the
texts.
Text Filtering:
As with information extraction, filtering should use rules based on
concepts and on the verbs used in the texts. The filtering quality should be
improved by this semantic knowledge.
Abstracts of texts:
The use of subcategorization frames and ontology concepts will allow the
texts to be tagged, and then it will certainly be a precious help for
extracting abstracts from texts.
Automatic translation:
Creation in both languages of the ontologies and the subcategorization
frames, and then the use of a method to match the concepts of the verb
frames in both languages, should improve translators.
Syntactic parsing improvement:
Subcategorization frames and concepts of a domain should improve a
syntactic parser by letting it choose the right verb attachment regarding
the ontology and then by letting it avoid a lot of ambiguities.
7. Uses/applications of ontology learning
The ontology learning process and methods described in the previous section
can be used and applied in many domains concerning knowledge and
information extraction. Some uses and applications are described in this section.
7.1 Knowledge sharing in multi agent systems
Discovering related concepts in a multi-agent system among agents with
diverse ontologies is difficult using existing knowledge representation languages
and approaches. In this section an approach for identifying candidate relations
between expressive, diverse ontologies using concept cluster integration is
described. In order to facilitate knowledge sharing between a group of
interacting information agents (i.e. a multi-agent system), a common ontology
should be shared. However, agents do not always commit a priori to a common,
pre-defined global ontology. This research investigates approaches for agents
with diverse ontologies to share knowledge by automated learning methods and
agent communication strategies. The goal is that agents who do not know the
relationships of their concepts to each other need to be able to teach each other
these relationships. If the agents are able to discover these concept relations,
this will aid them as a group in sharing knowledge even though they have
diverse ontologies. Information agents acting on behalf of a diverse group of
users need a way of discovering relationships between the individualized
ontologies of users. These agents can use these discovered relationships to
help their users find information related to their topic, or concept, of interest.
In this approach, semantic concepts are represented in each agent as concept
vectors of terms. Supervised inductive learning is used by agents to learn their
individual ontologies. The output of this ontology learning is semantic concept
descriptions (SCD) in the form of interpretation rules. This concept
representation and learning is shown in the following figure.
Supervised inductive learning produces ontology rules
The process of knowledge sharing between two agents, the Q (querying) and
the R (responding) agent, begins when the Q agent sends a concept based
query. The R agent interprets this query and if related concepts are found a
response is sent to the Q agent. After that, the Q agent takes the following
steps to perform the concept cluster integration:
1. From the R agent response, determine the names of the concepts to
cluster.
2. Create a new compound concept using the above names.
3. Create a new ontology category by combining instances associated with
the compound concept.
4. Re-learn the ontology rules.
5. Re-interpret the concept based query using the new ontology rules,
including the new concept cluster description rules.
6. If the concept is verified, store the new concept relation rule.
In this way, an agent learns from the knowledge provided by another agent.
This methodology was implemented in the DOGGIE (Distributed Ontology
Gathering Group Integration Environment) system, which was developed at the
University of Iowa.
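The integration steps can be summarized in code. This is highly schematic: the ontology is reduced to a name-to-instances mapping, the rule re-learning of steps 4-5 is omitted, and all names and instances are invented for illustration:

```python
# Schematic concept cluster integration (steps 1-3 and 6 of the list above).
def integrate(q_ontology, response_concepts):
    """Merge concepts named in an R-agent response into the Q agent's ontology."""
    # 1. Determine the names of the concepts to cluster.
    names = sorted(response_concepts)
    # 2. Create a new compound concept name from those names.
    compound = "+".join(names)
    # 3. Combine the instances associated with the named concepts.
    instances = set()
    for name in names:
        instances |= q_ontology.get(name, set())
    # 4-5. Re-learning and re-interpreting the rules is omitted here.
    # 6. Store the new concept relation.
    q_ontology[compound] = instances
    return compound

ontology = {"dog": {"beagle", "collie"}, "canine": {"wolf", "beagle"}}
new_name = integrate(ontology, {"dog", "canine"})
```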
7.2 Ontology based Interest Matching
Designing a general algorithm for interest matching is a major challenge in
building online community and agent-based communication networks. This
section presents an information theoretic concept-matching approach to
measure degrees of similarity among users. A distance metric is used as a
measure of similarity on users represented by a concept hierarchy. Preliminary
sensitivity analysis shows that this distance metric has more interesting
properties and is more noise tolerant than keyword-overlap approaches. With
the emergence of online communities on the Internet, software-mediated social
interactions are becoming an important field of research. Within an online
community, the history of a user's online behavior can be analyzed and
matched against other users to provide collaborative sanctioning and
recommendation services to tailor and enhance the online experience. In this
approach the process of finding similar users based on data from logged
behavior is called interest matching.
Ontologies may take many forms. In the described method, an ontology is
expressed in a tree-hierarchy of concepts. In general, tree-representations of
ontologies are usually polytrees. However, for the purpose of simplicity, here
the tree representation is assumed to be singly connected and all child nodes of
a node are assumed to be mutually exclusive. Concepts in the hierarchy
represent the subject areas that the user is interested in. To facilitate ontology
exchange between agents, an ontology can be encoded in the DARPA Agent
Markup Language (DAML). The figure below illustrates a visualization of this
sample ontology.
An example of an ontology used
The root of the tree represents the interests of the user. Subsequent sub-trees
represent classifications of the interests of the user. Each parent node is
related to a set of children nodes. A directed edge from a parent node to a child
node represents a (possibly exclusive) sub-concept. For example, in the figure,
Seafood and Poultry are both subcategories of the more general concept of
Food. However, while in general every user is to adopt the standard ontology,
there must be a way to personalize the ontology to describe each user. For
each user, each node has a weight attribute to represent the importance of the
concept. In this ontology, given the context of Food, the user tends to be more
interested in Seafood rather than Poultry. The weights in the ontology are
determined by observing the behavior of the user. The history of the user's
online readings and explicit relevance feedback are excellent sources for
determining the values of the weights.
In this approach, a standard ontology is used to categorize the interests of
users. Using the standard ontology, the websites the user visits can be
classified and entered into the standard ontology to personalize it. A form of
weight for each category can then be derived: if a user frequents websites in
that category, or an instance of that class, it can be assumed that the user will
also be interested in other instances of the class. With the weights, the distance
metric can be used to perform comparisons between the interests of different
users and finally categorize them. The effectiveness of the ontology matching
algorithm is to be determined by deploying it in various instances of on-line
communities.
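A toy version of such a comparison follows. The concept names echo the Food example above, the weights are invented, and the metric is a plain sum of weight differences rather than the information-theoretic measure the paper proposes:

```python
# Toy interest comparison over weighted concept nodes.
# Each user is a mapping concept -> weight (importance of the concept).
user_a = {"Food": 1.0, "Seafood": 0.8, "Poultry": 0.2}
user_b = {"Food": 1.0, "Seafood": 0.7, "Poultry": 0.3}

def interest_distance(u, v):
    """Sum of absolute weight differences over the union of concepts."""
    concepts = set(u) | set(v)
    return sum(abs(u.get(c, 0.0) - v.get(c, 0.0)) for c in concepts)

d = interest_distance(user_a, user_b)  # small value => similar interests
```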
7.3 Ontology learning for Web Directory Classification
Ontologies and ontology learning can also be used to create information
extraction tools for collecting general information from the free text of web
pages and classifying them in categories. The goal is to collect indicator terms
from the web pages that may assist the classification process. These terms can
be derived from the directory headings of a web page as well as its content.
The indicator terms along with a collection of interpretation rules can result in a
hierarchy (ontology) of web pages. In this way, the Information Extraction and
Ontology Learning process can be applied to large web directories both for
information storage and knowledge mining.
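For illustration, indicator terms can be scored by how strongly they concentrate in one category. The categories and counts below are toy data, and the crude occurrence ratio stands in for the more robust statistics a real system would use:

```python
from collections import Counter

# Term counts per directory category (toy data).
category_terms = {
    "Sports": Counter({"match": 5, "team": 4, "price": 1}),
    "Shopping": Counter({"price": 6, "cart": 3, "team": 1}),
}

def indicator_score(term, category):
    """Fraction of the term's total occurrences that fall in the category."""
    total = sum(c[term] for c in category_terms.values())
    return category_terms[category][term] / total if total else 0.0
```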
7.4 E-mail classification
KMi Planet
"KMi Planet" is a web-based news server for communication of stories between
members of the Knowledge Media Institute. Its main goals are to classify an
incoming story, obtain the relevant objects within the story, deduce the
relationships between them, and populate the ontology with minimal help from
the user. It integrates a template-driven information extraction engine with an
ontology engine to supply the necessary semantic content. Two primary
components are the story library and the ontology library. The Story library
contains the text of the stories that have been provided to Planet by the
journalists. In the case of KMi Planet it contains stories which are relevant to
the institute. The Ontology Library contains several existing ontologies, in
particular the KMi ontology. PlanetOnto augmented the basic publish/find
scenario supported by KMi Planet, and supports the following activities:
1. Story submission. A journalist submits a story to KMi Planet using e-mail
text. Then the story is formatted and stored.
2. Story reading. A Planet reader browses through the latest stories using a
standard Web browser.
3. Story annotation. Either a journalist or a knowledge engineer manually
annotates the story using Knote (the Planet knowledge editor).
4. Provision of customized alerts. An agent called Newsboy builds user
profiles from patterns of access to PlanetOnto and then uses these profiles to
alert readers about relevant new stories.
5. Ontology editing. A tool called WebOnto provides Web-based visualisation,
browsing and editing support for the ontology. The "Operational Conceptual
Modelling Language" (OCML) is a language designed for knowledge modeling.
WebOnto uses OCML and allows the creation of classes and instances in the
ontology, along with easier development and maintenance of the knowledge
models. It is at this point that ontology learning comes in.
6. Story soliciting. An agent called Newshound periodically solicits stories
from the journalists.
7. Story retrieval and query answering. The Lois interface supports integrated
access to the story archive.
Two other tools have been integrated in the architecture:
MyPlanet: An extension to Newsboy that helps story readers to read only
the stories that are of interest instead of reading all stories in the archive.
It uses a manually predefined set of cue-phrases for each of the "research
areas" defined in the ontology. For example, for genetic algorithms one
cue-phrase is "evolutionary algorithms". Consider the example of someone
interested in the research area Genetic Algorithms. A search engine will return
all the stories that talk about that research area. In contrast, MyPlanet (by
using the ontological relations) will also find all Projects that have research
area Genetic Algorithms and then search for stories that talk about these
projects, thus returning them to the reader even if the story text itself does not
contain the phrase "genetic algorithms".
An IE tool: A tool which extracts information from e-mail text and connects
with WebOnto to prove theorems using the KMi Planet ontology.
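The MyPlanet retrieval idea (follow ontological relations before searching story text) can be sketched as below. The project names and stories are invented, and the matching is reduced to plain substring checks:

```python
# Schematic ontology-aided story lookup: stories that mention a project
# whose research area matches are returned even if the story never uses
# the research-area phrase itself.
projects = {"GA-Opt": "Genetic Algorithms", "NetSim": "Networks"}
stories = {
    "s1": "GA-Opt reaches a new milestone",
    "s2": "a survey of network simulators",
}

def stories_for_area(area):
    hits = set()
    for sid, text in stories.items():
        # Direct textual match on the research-area phrase.
        if area.lower() in text.lower():
            hits.add(sid)
        # Follow the ontological relation: project -> research area.
        for project, proj_area in projects.items():
            if proj_area == area and project in text:
                hits.add(sid)
    return hits
```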
8. Conclusion
Ontology learning could add significant leverage to the Semantic Web because
it propels the construction of domain ontologies, which the Semantic Web
needs to succeed. We have presented a collection of approaches and
methodologies for ontology learning that crosses the boundaries of single
disciplines, touching on a number of challenges. All these methods are still
experimental and awaiting further improvement, progress and analysis. So far,
the results are rather discouraging compared to the final goal that has to be
achieved: fully automated, intelligent, knowledge learning systems. The good
news is, however, that perfect or optimal support for cooperative ontology
modeling is not yet needed. Cheap methods in an integrated environment can
tremendously help the ontology engineer. While a number of problems remain
within individual disciplines, additional challenges arise that specifically pertain
to applying ontology learning to the Semantic Web. With the use of XML-based
namespace mechanisms, the notion of an ontology with well-defined
boundaries — for example, only definitions that are in one file — will disappear.
Rather, the Semantic Web might yield a primitive structure regarding ontology
boundaries because ontologies refer to and import each other. However, what
the semantics of these structures will look like is not yet known. In light of these
facts, the importance of methods such as ontology pruning and crawling will
drastically increase and further approaches are yet to come.
40. Re: Admob - Google Ads?CowboyBlue Jan 28, 2011 3:48 PM (in response to CowboyBlue)
41. Re: Admob - Google Ads?CowboyBlue Jan 28, 2011 5:18 PM (in response to CowboyBlue)
one minor issue.
I cannot open the market from an ad. I can open urls but not the market. Hmmmm.
42. Re: Admob - Google Ads?CowboyBlue Jan 28, 2011 5:42 PM (in response to CowboyBlue)
43. Re: Admob - Google Ads?Joe ... Ward Jan 28, 2011 6:54 PM (in response to CowboyBlue)).
44. Re: Admob - Google Ads?Luoruize001 Jan 30, 2011 4:25 AM (in response to funnyle)...
45. Re: Admob - Google Ads?Tatanium Jan 30, 2011 8:34 AM (in response to funnyle)
Does Adobe even track these forums?
Adobe needs to throw some weight over at Admob and get them to release an official API for A4A (Air 4 Android).
46. Re: Admob - Google Ads?NanoMInd Jan 30, 2011 2:30 PM (in response to Tatanium)
47. Re: Admob - Google Ads?swamp222 Jan 30, 2011 3:10 PM (in response to Luoruize001)
heya!
make sure there's ads available for your "location", I only got a banner once from my location (sweden)
I do more often get more ads when rendering my page @ (germany)
48. Re: Admob - Google Ads?NanoMInd Feb 4, 2011 7:00 AM (in response to funnyle)
This looks nice:
According to a post on the forum there will be support for the PlayBook soon; so this would work with AIR then.
But if Adobe could copy this service that would be even better
49. Re: Admob - Google Ads?llBuck$hotll Feb 4, 2011 7:12 AM (in response to swamp222).
50. Re: Admob - Google Ads?as4more Feb 15, 2011 5:07 PM (in response to funnyle)).
51. Re: Admob - Google Ads?as4more Feb 15, 2011 6:10 PM (in response to as4more).
52. Re: Admob - Google Ads?RobFacks Feb 23, 2011 6:09 PM (in response to funnyle)
Has anyone tried this?
53. Re: Admob - Google Ads?razorskyline Feb 24, 2011 12:52 AM (in response to RobFacks)
Yes, this example is great, I now have two applications in the market both with Admob.
54. Re: Admob - Google Ads?dwightepp Mar 16, 2011 9:47 AM (in response to razorskyline)
Is anyone else having difficulty getting the ads served? The page loads, admob says I
requested an ad, but they're not filling. Thoughts?
55. Re: Admob - Google Ads?p-wing47 Mar 22, 2011 3:05 PM (in response to dwightepp)).
56. Re: Admob - Google Ads?AdmobAndroid.com May 18, 2011 9:32 PM (in response to p-wing47).
57. Re: Admob - Google Ads?12345abcdefghi Jul 10, 2011 12:41 PM (in response to AdmobAndroid.com).
58. Re: Admob - Google Ads?boat5 Jul 10, 2011 2:25 PM (in response to 12345abcdefghi)
it still works for me. i see both admob and smato ads in his app. Sometimes admob does not have an ad to serve to you, is it possible this is what you're seeing? Or did you hear there was a click fraud issue?
59. Re: Admob - Google Ads?12345abcdefghi Jul 10, 2011 2:49 PM (in response to boat5)
60. Re: Admob - Google Ads?boat5 Jul 10, 2011 5:55 PM (in response to 12345abcdefghi)...
61. Re: Admob - Google Ads?12345abcdefghi Jul 10, 2011 6:45 PM (in response to boat5)
Thanks again for your info. I wrote to carr. I hope he writes back.
62. Re: Admob - Google Ads?mola2alex Aug 9, 2011 1:45 PM (in response to funnyle)
Adobe, this is a must if you want AS3 to be successful in mobile, especially android.
63. Re: Admob - Google Ads?anthoang Aug 12, 2011 4:27 PM (in response to 12345abcdefghi).
64. Re: Admob - Google Ads?12345abcdefghi Aug 12, 2011 5:05 PM (in response to anthoang).
65. Re: Admob - Google Ads?vgjhav Aug 22, 2011 9:13 PM (in response to funnyle)
After a lot of trouble (account canned on ADMOB) and research, I have gotten Ads to work in all my Android Apps. This will work on a lot of AD networks, but most will ban you for click fraud. Only one network allows this method and they provide support for it too.
I have over 100 games apps. with this method implemented and working. Here is a link to one of them for you to see how it will look in game. I am using multiple ads in this to force the user to click and make me some money:
Does LeadBolt offer HTML integration for banner ads?
LeadBolt does allow banner ads to be integrated into your app using HTML, rather than using our SDK. To create a HTML banner ad after adding an app to the LeadBolt portal, simply click “Add Ad” and select “App Banner (HTML)” from the drop down box. The HTML snippet can then be added directly into your app’s HTML framework.
So far my eCPM is $6.15
I have created this guide to show my appreciation:
Publisher Code:
STEP I:
Get an Account:
STEP II:
Click on the "APPS" tab and "Create New APP" to create an AD. Remember to change the content unlocker to HTML Banner while in the process.
STEP III:
Get the HTML AD Code and keep it safe. That is all we need from the site. How simple was that?
AD HTML FILE:
Action Script Code:
STEP I:
Credit: I found this on another site and would like to give credit to the author of pixelpaton.com
The only change you need to make is to enter your website html url where you have placed the AD HTML FILE in the space where I have put: "****ENTER COMPLETE HTML URL HERE****". Wherever you want the AD, place the following code:
// imports
import flash.events.Event;
import flash.events.LocationChangeEvent;
import flash.geom.Rectangle;
import flash.media.StageWebView;
import flash.net.navigateToURL;
import flash.net.URLRequest;
import flash.events.MouseEvent;
// setup variables
var _stageWebView:StageWebView;
var myAdvertURL:String = "****ENTER COMPLETE HTML URL HERE****";
// function header reconstructed below; the original function name was lost in extraction
function showAd(event:MouseEvent):void
{
// check that _stageWebView doesn't exist
if (! _stageWebView) {
_stageWebView = new StageWebView();
// set the size of the html 'window'
_stageWebView.viewPort = new Rectangle(0, 0, 800, 100);
// add a listener for when the content of the StageWebView changes
_stageWebView.addEventListener(LocationChangeEvent.LOCATION_CHANGE, onLocationChange);
// start loading the URL
_stageWebView.loadURL(myAdvertURL);
}
// show the ad by setting its stage property
_stageWebView.stage = stage;
}
function toggleAd(event:MouseEvent):void {
trace("toggling advert",_stageWebView);
// check that StageWebView instance exists
if (_stageWebView) {
trace("_stageWebView.stage:"+_stageWebView.stage);
if (_stageWebView.stage == null) {
//show the ad by setting the stage parameter
_stageWebView.stage = stage;
} else {
// hide the ad by nulling the stage parameter
_stageWebView.stage = null;
}
} else {
// ad StageWebView doesn't exist - show create it
}
}
function destroyAd(event:MouseEvent):void {
// check that the instance of StageWebView exists
if (_stageWebView) {
trace("removing advert");
// destroys the ad
_stageWebView.stage = null;
_stageWebView = null;
}
}
function onLocationChange(event:LocationChangeEvent):void {
// check that it's not our ad URL loading
if (_stageWebView.location != myAdvertURL) {
// destroy the ad as the user has kindly clicked on my ad
destroyAd(null);
// Launch a normal browser window with the captured URL;
navigateToURL( new URLRequest( event.location ) );
}
}
// setup button listeners
Hope this works and helps you. If you have questions, let me know. Enjoy.
66. Re: Admob - Google Ads?12345abcdefghi Aug 23, 2011 1:36 PM (in response to vgjhav)
Wow thanks for sharing. I will look into your solution!
67. Re: Admob - Google Ads?mola2alex Aug 24, 2011 5:57 AM (in response to vgjhav)
Thanks for sharing. I will check it out.
68. Re: Admob - Google Ads?mola2alex Aug 24, 2011 7:09 AM (in response to vgjhav)
69. Re: Admob - Google Ads?vgjhav Aug 24, 2011 11:01 AM (in response to mola2alex).
70. Re: Admob - Google Ads?mola2alex Aug 24, 2011 12:10 PM (in response to vgjhav).
71. Re: Admob - Google Ads?JoeCoo7 Aug 25, 2011 9:29 AM (in response to mola2alex).
72. Re: Admob - Google Ads?vgjhav Aug 25, 2011 10:33 AM (in response to JoeCoo7)
That would be awesome buddy... Do share. So you got it working via the
android ad and not smartphone?
73. Re: Admob - Google Ads?vgjhav Aug 25, 2011 8:34 PM (in response to JoeCoo7).
74. Re: Admob - Google Ads?JoeCoo7 Aug 26, 2011 3:44 AM (in response to vgjhav).
75. Re: Admob - Google Ads?vgjhav Aug 26, 2011 4:43 AM (in response to JoeCoo7)
Thanks a ton. Will give it a go tonight or Sunday. Thanks again.
76. Re: Admob - Google Ads?mola2alex Aug 30, 2011 12:26 PM (in response to vgjhav)
Is it possible to put the HTML code into AS3 to be read by the webview or attach the HTML file and referencing it when you publish the app or does it need to be hosted somewhere on the web for this to work?
77. Re: Admob - Google Ads?vgjhav Aug 30, 2011 8:20 PM (in response to mola2alex)
It needs to be hosted. But there are a lot ofgood free hosting companies.
Use any one and you will be good.
The other method never worked for me. So i am still hosting and using that
method.
78. Re: Admob - Google Ads?mola2alex Aug 31, 2011 9:18 AM (in response to vgjhav).
79. Re: Admob - Google Ads?vgjhav Aug 31, 2011 9:37 AM (in response to mola2alex)
Can you share the code too? I have put leadbolt content unlocking on my site
too, getting good conversions. Cant wait for their API to support native
adobe air for android.
Thanks. | https://forums.adobe.com/message/3436583 | CC-MAIN-2018-09 | en | refinedweb |
There could be many scenarios where you'd need to display text in your WP7 application that doesn't fit on the screen. The easiest way to solve this would be to utilize the ScrollViewer control as a host for a TextBlock. Something like that:
<!-- the x:Name values below are placeholders; the originals were lost in extraction -->
<Grid x:Name="ContentPanel">
<ScrollViewer>
<TextBlock x:Name="textBlock" />
</ScrollViewer>
</Grid>
You would think that we're done but as usual the "devil is in the details". If you assign a long text to the TextBlock you will see that part of this text could be truncated and an empty space would be displayed when scrolling to the end:
The reason for this behavior is that any content extending beyond a 2048x2048-pixel area is clipped by the platform. I was told that this limitation was dictated by a combination of hardware limitations and performance considerations. To work around this limitation we need to break the text into separate blocks and create a TextBlock for each of these text blocks. So to make life easier for myself and for you, I created a custom control which I called ScrollableTextBlock that wraps this logic. The control derives from Control and implements a Text property:
public class ScrollableTextBlock : Control
{
private StackPanel stackPanel;
public ScrollableTextBlock()
{
// Get the style from generic.xaml
this.DefaultStyleKey = typeof(ScrollableTextBlock);
}
public static readonly DependencyProperty TextProperty =
DependencyProperty.Register(
"Text",
typeof(string),
typeof(ScrollableTextBlock),
new PropertyMetadata("ScrollableTextBlock", OnTextPropertyChanged));
public string Text
{
get
{
return (string)GetValue(TextProperty);
}
set
{
SetValue(TextProperty, value);
}
}
private static void OnTextPropertyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
ScrollableTextBlock source = (ScrollableTextBlock)d;
string value = (string)e.NewValue;
source.ParseText(value);
}
}
And here's the ParseText method which is also a part of this control:
private void ParseText(string value)
{
if (this.stackPanel == null)
{
return;
}
// Clear previous TextBlocks
this.stackPanel.Children.Clear();
// Calculate max char count
int maxTexCount = this.GetMaxTextSize();
if (value.Length < maxTexCount)
{
TextBlock textBlock = this.GetTextBlock();
textBlock.Text = value;
this.stackPanel.Children.Add(textBlock);
}
else
{
int n = value.Length / maxTexCount;
int start = 0;
// Add textblocks
for (int i = 0; i < n; i++)
{
TextBlock textBlock = this.GetTextBlock();
textBlock.Text = value.Substring(start, maxTexCount);
this.stackPanel.Children.Add(textBlock);
start += maxTexCount; // advance to the next chunk
}
// Pickup the leftover text
if (value.Length % maxTexCount > 0)
{
TextBlock textBlock = this.GetTextBlock();
textBlock.Text = value.Substring(maxTexCount * n, value.Length - maxTexCount * n);
this.stackPanel.Children.Add(textBlock);
}
}
}
In the code above I measure the text length and if it's greater than maxTexCount I break the text into separate blocks and create a new TextBlock for each block of text. After each TextBlock is created I add it to the StackPanel. To complete the picture, here's how the control's style looks:
<Style TargetType="local:ScrollableTextBlock" >
<Setter Property="Foreground" Value="{StaticResource PhoneForegroundBrush}"/>
<Setter Property="Background" Value="Transparent"/>
<Setter Property="FontSize" Value="{StaticResource PhoneFontSizeMedium}"/>
<Setter Property="Padding" Value="0"/>
<Setter Property="Width" Value="200" />
<Setter Property="Height" Value="70" />
<Setter Property="Template">
<Setter.Value>
<ControlTemplate TargetType="local:ScrollableTextBlock">
<ScrollViewer x:Name="ScrollViewer">
<StackPanel Orientation="Vertical" x:Name="StackPanel" />
</ScrollViewer>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
As you can see, the StackPanel sits inside the ScrollViewer control, which provides the scrolling functionality. When we run the test project, the result is that the text is no longer truncated:
The test project and the ScrollableTextBlock control are available for your perusal. | https://blogs.msdn.microsoft.com/priozersk/2010/09/08/creating-scrollable-textblock-for-wp7/?replytocom=3173 | CC-MAIN-2018-09 | en | refinedweb |
D (The Programming Language)/d2/Type Conversion
From Wikibooks, open books for an open world
Lesson 5: Type Conversion
In this lesson, you will learn how the types of variables can be implicitly and explicitly converted.
Introductory Code
import std.stdio;

void main()
{
    short a = 10;
    int b = a;
    // short c = b; // Error: cannot implicitly convert
    //              // expression b of type int to short
    short c = cast(short)b;

    char d = 'd';
    byte e = 100;
    wchar dw = 'd';
    int f = d + e + dw;
    writeln(f); // 300

    float g1 = 3.3;
    float g2 = 3.3;
    float g3 = 3.4;
    int h = cast(int)(g1 + g2 + g3);
    writeln(h); // 10
    int i = cast(int)g1 + cast(int)g2 + cast(int)g3;
    writeln(i); // 9
}
Concepts
Implicit Integral Conversions
An object of an integral type can be converted to another object of an integral type as long as the destination type is wider than the original type. These conversions are implicit:
bool ⇒ int
byte ⇒ int
ubyte ⇒ int
short ⇒ int
ushort ⇒ int
char ⇒ int
wchar ⇒ int
dchar ⇒ uint
Explicit Conversions
Casting is a way to tell the compiler to try to force an object to change type. In D, you do this by writing cast(type).
Tips
- You cannot convert an integral value to a string (or the other way around) with a cast. There is a library function which you will learn later for that. | https://en.wikibooks.org/wiki/D_(The_Programming_Language)/d2/Lesson_5 | CC-MAIN-2016-07 | en | refinedweb |
Not sure what to tell you. I was able to read an example file with minimal changes. I never intended to write anything you could just insert into your homework. This is a simple program that reads one pose at a time and spits it back out.
Works for me. My assumptions were that there was only one pose per line and that one joint angle was three floats. A pose was made up of multiple angles.
Code:
#include <iostream>
#include <fstream>
#include <vector>
using namespace std;

struct Vector3
{
    float x, y, z;
};

struct joint_angle
{
    int count;
    Vector3 orient;
};

struct pose
{
    float time;
    vector<joint_angle> angle;
};

std::istream & operator>> (std::istream & stream, joint_angle & ja)
{
    ja.count = 0;
    stream >> ja.orient.x >> ja.orient.y >> ja.orient.z;
    /*printf ("ja.orient.x===>%f",ja.orient.x);
    printf ("ja.orient.y===>%f",ja.orient.y);
    printf ("ja.orient.z===>%f",ja.orient.z);*/
    return stream;
}

int simpleRead(ifstream &stream, pose &p);

int main()
{
    ifstream file("file.txt");
    pose p;
    while (simpleRead(file, p))
    {
        cout << p.time;
        for (int i = 0; i < p.angle.size(); i++)
        {
            cout << " " << p.angle[i].orient.x
                 << " " << p.angle[i].orient.y
                 << " " << p.angle[i].orient.z;
        }
        cout << "\n";
    }
    return 0;
}

int simpleRead(ifstream &stream, pose &p)
{
    // 1. Read time information
    stream >> p.time;
    if (stream.eof())
        return 0;
    // 2. Read joint angle information
    joint_angle ja;
    vector<joint_angle> angles;
    char c;
    while ( stream >> skipws >> ja >> noskipws >> c )
    {
        angles.push_back(ja);
        if (c == '\n' )
        {
            break;
        }
    }
    p.angle = angles;
    return 1;
}

/* in file.txt:
-0.01 0.03 0.04 -0.5 0.6 0.8 0.3 -0.005 -0.003 -0.008
-0.03 0.05 0.06 -0.005 -0.2 0.3 0.8 1.02 1.03 1.04
*/
I do feel like I've been tricked into doing all the hard work... but maybe this will help you understand what I'm trying to say. | http://cboard.cprogramming.com/cplusplus-programming/152921-get-lines-cplusplus-2.html | CC-MAIN-2016-07 | en | refinedweb |
A Hammier Javascript
Ham is another altJS language, similar to CoffeeScript. What makes Ham different is that it is written as a PEG, and does not have significant whitespace. Ham looks very similar to Javascript at first, but offers (hopefully) many useful features.
Ham was written using the Canopy PEG Parser Generator, and Javascript. I am currently working towards self-hosting Ham but it is not quite there yet.
Ham is written in an MVC-style manner, where the model is the AST, the view is the JavaScript translations (using ejs templates), and the controller is the tree translators. This makes Ham extremely easy to hack on, and fun!
Since Ham is extremely similar to JavaScript, you can get almost perfect syntax highlighting for free by using the JavaScript highlighters, which is a pretty neat side effect.
Ham supports python style list ranges and slicing.
var range = [1..5];
range === [1, 2, 3, 4, 5]; // true
range[1:] === [2, 3, 4, 5]; // true
range[:4] === [1, 2, 3, 4]; // true
range[::2] === [1, 3, 5]; // true
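For comparison, plain-JavaScript equivalents of the range and slice examples above might look like the following. This is only an illustration of the semantics, not the Ham compiler's actual output, and the helper names are invented for this sketch:

```javascript
// Hypothetical helpers mirroring Ham's [a..b] ranges and [start:end:step]
// slices. Not actual compiler output.
function range(a, b) {
  // inclusive range, like Ham's [a..b]
  var out = [];
  for (var i = a; i <= b; i++) out.push(i);
  return out;
}

function slice(arr, start, end, step) {
  // Python-style slicing; null/undefined means "use the default bound"
  var s = step || 1, out = [];
  if (s > 0) {
    for (var i = (start == null ? 0 : start);
         i < (end == null ? arr.length : end); i += s) {
      out.push(arr[i]);
    }
  } else {
    for (var i = (start == null ? arr.length - 1 : start);
         i > (end == null ? -1 : end); i += s) {
      out.push(arr[i]);
    }
  }
  return out;
}

console.log(range(1, 5));                       // [1, 2, 3, 4, 5]
console.log(slice(range(1, 5), 1));             // [2, 3, 4, 5]
console.log(slice(range(1, 5), null, 4));       // [1, 2, 3, 4]
console.log(slice(range(1, 5), null, null, 2)); // [1, 3, 5]
```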
Ham supports list comprehensions, similar in style to Haskell.
var cross = [x*y | x <- range, y <- range[::-1]];
Ham makes it fun to use lambda's.
var sum = |x, y| { return x + y; }
// If the body of the lambda is a single expression,
// then the `return` statement and semicolon can be dropped.
var sum = |x, y| { x + y }

// Lambda's are an easy way to iterate a list:
[1, 2, 3].each(|| { console.log('repeating'); });

// If the lambda takes no parameters, the `||` can be dropped.
[1, 2, 3].each({
  console.log('repeating');
});

// When invoking a function with a lambda as the _only_ parameter, the parentheses can be dropped
[1, 2, 3].each {
  console.log('repeating');
};
Some people would prefer to use classical inheritance instead of Javascript's prototypical inheritance; that's fine:
class Hamburger extends MeatMeal {
  eat: { console.log('om nom nom'); }
};
// Ham just uses Backbone style .extend() for inheritance, so this translates easily to:
// var Hamburger = MeatMeal.extend({ ... });
Stolen from Coffeescript, is the prototype shortcut:
String::startsWith = |str| { this.substr(0, str.length) === str };
Would be nice to have some inference at compile time, with contracts at runtime for what couldn't be inferred.
var x:string = 3; // TypeError -> typeof "x" is string.
var sum = |x:num, y:num| { x + y }; // we could infer the return type easily here
var idk = ||:string { "hello" }; // I'm not sold on the return type syntax here
I like python style imports, but I think it might be hard/impossible to reconcile it with CommonJS style require. Another option is to rewrite a CommonJS style require for the browser, similar to browserify.
import Backbone, _ from 'vendor/backbone'; // would work great for browser, but hard for CommonJS
I also sometimes find myself with a need for python style Decorators, so Ham will have some form of them.
@watch(notify_change)
var the_ghost_man = 3;
Yeah, I haven't gotten around to unary operators yet. I've been focussing on the cool stuff for now.
I haven't implemented while or for loops yet, as I am still experimenting with syntax for them. I've been getting by largely with the combination of ranges and list comprehensions with .each.
npm install -g harm
Then write some Ham.js code, and run harm <filename>. | https://www.npmjs.com/package/ham-script | CC-MAIN-2016-07 | en | refinedweb |
Update of /cvsroot/squirrel-sql/sql12/fw/src/net/sourceforge/squirrel_sql/fw/datasetviewer/cellcomponent
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv818
Modified Files:
CellComponentFactory.java
Log Message:
show newlines in string fields; use cyan bkgnd to show editable in popup only
Index: CellComponentFactory.java
===================================================================
RCS file: /cvsroot/squirrel-sql/sql12/fw/src/net/sourceforge/squirrel_sql/fw/datasetviewer/cellcomponent/CellComponentFactory.java,v
retrieving revision 1.21
retrieving revision 1.22
diff -C2 -d -r1.21 -r1.22
*** CellComponentFactory.java 7 Apr 2004 02:32:57 -0000 1.21
--- CellComponentFactory.java 14 Apr 2004 18:26:46 -0000 1.22
***************
*** 7,10 ****
--- 7,12 ----
import javax.swing.JTextField;
import javax.swing.JTextArea;
+ import javax.swing.JLabel;
+ import java.awt.Component;
import java.sql.Types;
import java.sql.PreparedStatement;
***************
*** 218,221 ****
--- 220,227 ----
+ /**
+ * The base component of a DefaultTableCellRenderer is a JLabel.
+ * @author gwg
+ */
static private final class CellRenderer extends DefaultTableCellRenderer
{
***************
*** 229,235 ****
}
! public void setValue(Object value)
! {
// default behavior if no DataType object is to use the
// DefaultColumnRenderer with no modification.
--- 235,284 ----
}
! /**
! *
! * Returns the default table cell renderer - overridden from DefaultTableCellRenderer.
! *
! * @param table the <code>JTable</code>
! * @param value the value to assign to the cell at
! * <code>[row, column]</code>
! * @param isSelected true if cell is selected
! * @param hasFocus true if cell has focus
! * @param row the row of the cell to render
! * @param column the column of the cell to render
! * @return the default table cell renderer
! */
! public Component getTableCellRendererComponent(JTable table, Object value,
! boolean isSelected, boolean hasFocus, int row, int column) {
!
! JLabel label = (JLabel)super.getTableCellRendererComponent(table, value, isSelected, hasFocus, row, column);
+ // Newlines are stripped from the text by the underlying document, so before
+ // actually displaying the text, make sure that the user sees that there are newlines
+ // in the text by displaying them as "\n".
+ if (label.getText().indexOf('\n') > -1) {
+ label.setText(label.getText().replaceAll("\n", "/\\n"));
+ }
+
+ // if text cannot be edited in the cell but can be edited in
+ // the popup, show that by changing the text colors.
+ if (_dataTypeObject != null &&
+ _dataTypeObject.isEditableInCell(value) == false &&
+ _dataTypeObject.isEditableInPopup(value) == true) {
+ // Use a CYAN background to indicate that the cell is
+ // editable in the popup
+ setBackground(Color.CYAN);
+ }
+ else {
+ // since the previous entry might have changed the color,
+ // we need to reset the color back to default value for table cells.
+ setBackground(table.getBackground());
+ }
+
+ return label;
+ }
+
+
+ public void setValue(Object value)
+ {
// default behavior if no DataType object is to use the
// DefaultColumnRenderer with no modification.
Update of /cvsroot/squirrel-sql/sql12/doc
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv434
Modified Files:
changes.txt
Log Message:
bug fix: newlines in strings; cyan background and newlines in strings enhance
Index: changes.txt
===================================================================
RCS file: /cvsroot/squirrel-sql/sql12/doc/changes.txt,v
retrieving revision 1.50
retrieving revision 1.51
diff -C2 -d -r1.50 -r1.51
*** changes.txt 7 Apr 2004 03:17:50 -0000 1.50
--- changes.txt 14 Apr 2004 18:25:38 -0000 1.51
***************
*** 27,30 ****
--- 27,35 ----
-
***************
*** 36,39 ****
--- 41,47 ----
-
Update of /cvsroot/squirrel-sql/sql12/doc
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv32531
Modified Files:
quick_start.html
Log Message:
desc. of blob/clob handling, cyan bkgnd, newlines in strings, overall ContentsTab operation
Index: quick_start.html
===================================================================
RCS file: /cvsroot/squirrel-sql/sql12/doc/quick_start.html,v
retrieving revision 1.13
retrieving revision 1.14
diff -C2 -d -r1.13 -r1.14
*** quick_start.html 7 Apr 2004 03:14:48 -0000 1.13
--- quick_start.html 14 Apr 2004 18:24:14 -0000 1.14
***************
*** 30,35 ****
<TR><TD> <A HREF="#connecting">Connecting</A></TD></TR>
<TR><TD><A HREF="#executing">Executing SQL</A></TD></TR>
! <TR><TD><A HREF="#editingdata">Editing Data in the Contents Tab</A></TD></TR>
<TR><TD> <A HREF="#enablingediting">Enabling Editing</A></TD></TR>
<TR><TD> <A HREF="#howtoedit">How To Edit</A></TD></TR>
<TR><TD> <A HREF="#UsingPopup">Using the Popup Window</A></TD></TR>
--- 30,38 ----
<TR><TD> <A HREF="#connecting">Connecting</A></TD></TR>
<TR><TD><A HREF="#executing">Executing SQL</A></TD></TR>
! <TR><TD><A HREF="#ContentsTab">The Contents Tab and Editing Data</A></TD></TR>
! <TR><TD> <A HREF="#ViewingDataInContentsTab">Viewing Data in the Contents Tab</A></TD></TR>
<TR><TD> <A HREF="#enablingediting">Enabling Editing</A></TD></TR>
+ <TR><TD> <A HREF="#Setting_the_Editing_Mode">Setting the Format and Editing Mode</A></TD></TR>
+
<TR><TD> <A HREF="#howtoedit">How To Edit</A></TD></TR>
<TR><TD> <A HREF="#UsingPopup">Using the Popup Window</A></TD></TR>
***************
*** 357,363 ****</P>
! <A NAME="editingdata"><h2>Editing Data in the Contents Tab</h2></A>
! <P>SQuirreL allows you to easily change the data in a table. Data may
be changed just by typing the new values when you are viewing the table
in the Contents tab under the Objects view.
--- 360,367 ----</P>
! <P>
! <A NAME="ContentsTab"><h2>The Contents Tab and Editing Data</h2></A>
! <P>SQuirreL allows you to easily view and change the data in a single table. Data may
be changed just by typing the new values when you are viewing the table
in the Contents tab under the Objects view.
***************
*** 368,371 ****
--- 372,477 ----
editing in the Popup, so be sure to read the
<A HREF="#UsingPopup">Using the Popup Window</A> information.
+
+ <P>
+ <A NAME="ViewingDataInContentsTab"><h3>Viewing Data in the Contents Tab</h3></A>
+ After connecting to a database, you may view the data in a single table
+ by using the Contents Tab.
+ <P>
+".
+ <P>
+ The data in the table will be displayed in rows and columns
+ based on the format selected (see
+ <a href="#Setting_the_Editing_Mode">Setting the Format and Editing Mode</a>).
+
+ <P>
+.
+
+ <P>
+).
+
+ <P>
+ There are a few special cases for how data is displayed.
+ <UL>
+ <LI>
+ If a cell contains Newlines
+ (which is possible in the various VARCHAR and CLOB data types),
+ the cell background is colored Cyan and the Newline characters are
+ explicitly shown as "\n".
+ If the table is editable,
+ the Cyan background means that the cell contents cannot be edited
+ in the cell as displayed in the table, but can be edited using the
+ Popup window (see <a href="#UsingPopup">Using the Popup Window</a>).
+ </LI>
+ <LI>
+ BLOB (Binary Large OBject) and CLOB (Character Large Object) fields
+ are initially displayed as "<Blob>" and "<Clob>" respecively.
+ This is done for performance reasons, since those data types are
+ often quite large.
+ When you select the Blob or Clob field, the entire data is read in
+ and displayed in the cell.<BR>
+ Alternatively, you may use the Session Properties -> Format screen
+ to limit the amount of the Blob/Clob data read and displayed when
+ the Contents Tab is first displayed, in which case the number of
+ characters you have selected will be displayed in each Blob or
+ Clob cell. This may be necessary when the Blob or Clob contains
+ data needed to identify specific rows in the table.
+ When there is more Blob/Clob data for a cell than is displayed
+ in that cell, an elipsis ("...") is added to the end of the displayed
+ data so you know that the data has been truncated for display.
+ As before, when you select the cell, the whole Blob or Clob will
+ be read and displayed.
+ </LI>
+ </UL>
+
+ <P>
+".
</P>
***************
*** 391,400 ****
</P>
! <a name="Setting_the_Editing_Mode"><h4>Setting the Editing Mode</h4></a>
! <P>To set the default format for the Contents Tab for new sessions take
the <em>New Session Properties</em> option from the <em>File</em> and
click on the <EM>General</EM> tab. The <b>Table Contents</b> dropdown
! list gives three options for the format of the Comtemts Tab:</P>
<dl>
--- 497,506 ----
</P>
! <a name="Setting_the_Editing_Mode"><h4>Setting the Format and Editing Mode</h4></a>
! <P>To set the default format and edting mode for the Contents Tab for new sessions take
the <em>New Session Properties</em> option from the <em>File</em> and
click on the <EM>General</EM> tab. The <b>Table Contents</b> dropdown
! list gives the following options:</P>
<dl>
***************
*** 421,426 ****
</dl>
! <P>After you have connected with a database, you may change the default
! model for the data in the Contents tab by the <em>Session
Properties</em> option from the <em>Session</em> and clicking on the
<EM>General</EM> tab and changing the selection for <em>Table
--- 527,532 ----
</dl>
! <P>After you have connected with a database, you may change the format
! or editing mode for the data in the Contents tab by the <em>Session
Properties</em> option from the <em>Session</em> and clicking on the
<EM>General</EM> tab and changing the selection for <em>Table
***************
*** 438,442 ****
<li>Make sure the table is editable. See <a
!Setting the Editing
Mode</a> above.</li>
--- 544,548 ----
<li>Make sure the table is editable. See <a
!Setting the Format and Editing
Mode</a> above.</li>
***************
*** 447,452 ****Limitations</a>) you may select the cell,
but you will not be allowed to edit the data.</li>
! When a cell is editable, its background will change to Yellow to
let you know that you may be changing the contents of the database.
<li>Change the data in the cell to look the way that you want it
--- 553,568 ----Limitations</a>) you may select the cell,
but you will not be allowed to edit the data.</li>
! When a cell is being edited, its background will change to Yellow to
let you know that you may be changing the contents of the database.
+ Cells with a Cyan background may be edited only by using the
+ Popup window (see <a href="#UsingPopup">Using the Popup Window</a>).
+
+ <li>If you select the cell using the mouse, the background immediately turns Yellow
+ and the cursor is set to the place where you clicked in the cell.
+ If you select the cell by using the tab or enter keys, the
+ background remains white until you type the first character
+ to be added (or use the delete key). That character is added to
+ (or deleted from) the end of the text displayed in the cell,
+ and the cell background changes to Yellow.
<li>Change the data in the cell to look the way that you want it
***************
*** 473,477 ****
"<null>". To change that to a non-null value, just type the
new data. When you enter the first character of the data, the
! "<null>" will be replaced by that character.</P>
<P>To change the value of a nullable cell to NULL, just delete all of the data
--- 589,598 ----
"<null>". To change that to a non-null value, just type the
new data. When you enter the first character of the data, the
! "<null>" will be replaced by that character.<BR>
!.</P>
<P>To change the value of a nullable cell to NULL, just delete all of the data
***************
*** 533,537 ****
into the table cells in a printable form might significantly
delay the display of the table.
! In this case, the cell may display just the name of the data type, something like "<BLOB>",
while the Popup window would show the actual contents of that field.
</li>
--- 654,658 ----
into the table cells in a printable form might significantly
delay the display of the table.
! In this case, the cell may display just the name of the data type, something like "<Blob>",
while the Popup window would show the actual contents of that field.
</li> | http://sourceforge.net/p/squirrel-sql/mailman/squirrel-sql-commits/?viewmonth=200404&viewday=14 | CC-MAIN-2016-07 | en | refinedweb |
Garbage collection-Marcus exam
If somebody could give me a good explanation about the answer to this question, it would be great. Do the rules vary in any way when it concerns wrapper classes? I mean, x and y are references to Integer objects, right? Or do they just store the integer values?
According to me the answer should be 0.
public class BettyAck {
    public static void main(String argv[]) {
        BettyAck b = new BettyAck();
    }
    public BettyAck() {
        Integer x = new Integer(10); // obj1 created
        findOut(x); // sets temp variable to null, original obj1 intact
        Integer y = new Integer(99); // obj2 created
        Integer z = y; // reference pointing to obj2
        z = null; // ref set to null
        findOut(y); // sets temp variable to null, original obj2 intact
        // here no objects are eligible for GC
    }
    public void findOut(Integer y) {
        y = null;
    }
}
But after the constructor returns, 2 objects will be eligible for GC.
Hope this helps.
Marcus
SCWCD: Online Course, 50,000+ words and 200+ questions
Mind to help me understand this better?
Initially I thought it would be 3 objects.
1. temp (y) in findout()
2. z
3. temp (y) in findout()
The variable y in findout() was created 2 times (different object), am I right? Or is it because the second findout execution actually 'reuse' temp (y)?
Sorry to trouble you with silly question.
Thanks.
CH
The thing is that there are only two objects: the first one is the one that was referred to by variable x (which is made null), and the other is referred to by variable y. So when
Integer z = y;
is executed, both variables y and z refer to the same object. Then both are made null one by one, so three reference variables are null(ed). But altogether they referred to only two objects. And now these two objects are eligible for garbage collection.
Hope this helps.
Kind Regards,
Anish Doshi.
Sun Certified Programmer for Java platform 1.4
the answer is zero,
explanation is simple: a basic garbage collection principle.
1. an object is eligible for garbage collection only if all the reference variables pointing to it are set to null.
2. when we pass an object as a parameter to a method, 'a copy of its reference is sent'.
so in your code two objects are created, which still have the reference variables x and y pointing to them
when sending a reference of an object as a parameter to a method, only a COPY of that reference is sent.
That is why the answer is 0.
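The "copy of the reference" rule above can be checked with a small, self-contained program. This is new code written for illustration, not code from the exam; the class and method names are arbitrary:

```java
// Demonstrates that nulling a parameter (or an alias) does not detach the
// caller's reference, so the object stays reachable and is not GC-eligible.
public class RefCopyDemo {
    static void findOut(Integer y) {
        y = null; // nulls only the local copy of the reference
    }

    public static void main(String[] args) {
        Integer x = Integer.valueOf(99);
        findOut(x);
        System.out.println(x); // still prints 99; the object was untouched

        Integer z = x;         // z and x now refer to the same object
        z = null;              // detaches z only
        System.out.println(x); // still 99; the object is not eligible for GC
    }
}
```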
- Do not try and bend the spoon. That's impossible. Instead, only try to realize the truth. - What truth? - That there is no spoon!!!
But what about Integer Y in the program?
It is explicitely made null and the findout() method is not called using it.
Isn't it not eligible for garbage collection?
The 'y' in the BettyAck() and in the findOut() are not the same.
Let me demonstrate what really happens:
y (in the BettyAck()) --> new Integer(99)
After calling findOut()
y (in the BettyAck()) --> new Integer(99) <-- y (in the findOut())
Now setting y (in the findOut()) to null
y (in the BettyAck()) --> new Integer(99)
y (in the findOut()) --> Null
As you clearly see the object new Integer(99) is still being referenced by y (in the BettyAck())
Hope this clarifies it for you.
Thanks for your clarification, and I would like to build on your illustration.
Actually, I am quite puzzled by the y in FindOut(). When FindOut() was executed the first time, it created a variable y.
Then, when FindOut() was called the second time, would it create a new y or 'reuse' the y that was created previously?
Hope you can help me out here.
Thanks.
0
Then, when FindOut() was called the second time, would it create a new y or 'reuse' the y that was created previously?
'y' in the findOut() is a local variable, it exists as long as the findOut() is executing when you exit this method then all its variables are gone. Each time you call findOut() a new 'y' is created (This 'y' has nothing to do 'y' in the BettyAck().
- Do not try and bend the spoon. That's impossible. Instead, only try to realize the truth. - What truth? - That there is no spoon!!!
0
Thanks for the explanation. Yet, I still need to trouble you. Sorry about that.
Since y in FindOut() is created everytime FindOut() is called, then shouldn't it be 3 objects available for garbage collection?
Initially I thought it would be 3 objects.
1. y in findout() -- First execution
2. z
3. y in findout() -- Second execution
Am I right to say so?
CH
0
...shouldn't it be 3 objects available for garbage collection?
Not at all
Take this for example:
1) 'x','y' are references for object 'A' (remember 'A' is an object while 'x' and 'y' are only references)
x --> A <-- y
2) Now, let us set the reference y to NULL (y = Null)
x--> A <-/- y --> Null
3) What happened here is that 'y' lost its reference to the object 'A'
4) 'A' of coarse is not EGC because it is still being referenced by 'x'
Note: objects are always created by the new clause.
I still need to trouble you. Sorry about that.
There is no trouble... we are all here to help. If you still have any other question i'll be more than happy to answer it (if i can of coarse)
- Do not try and bend the spoon. That's impossible. Instead, only try to realize the truth. - What truth? - That there is no spoon!!!
0
This is getting more and more intersting. I am enjoying this forum now!!!
First, let me try to understand the problem presented in this thread.
As I understand, a variable passed to any function is by value. Am I right to say that this simply means that a new object will be created to hold the value? My understanding is that it can't be a copy of the reference passed in, as this would simply changed the value of the original object. Is this the reason why I saw some mock exam questions where a value is not changed eventhough a called function actually did some changes to it?
If my understand is correct, does that mean a new object (y in our case) would be created whenever FindOut() is executed?
I've to admit that I am very new to Java, so I am still very vague in term of how and when object and reference are created. Thanks for pointing out to me that object is creating with the new keyword. I think I will re-read the byte story presented here in JavaRanch to refresh my mind on object and reference.
Vicken, would appreciate if you could knock the confusion out of my mind.
Cheers
A reference passed into a method is passed as if it were a copy of a pointer rather than the actual object. Thus if that reference is assigned to null, it makes no difference to any other copy of that pointer. Thus the code within the method findOut makes no difference to any other references. Although reference z is assigned to null, reference y still points to the object, so no objects are eligible for garbage collection.
And yet some more:
1) Objects are never passed as arguments to methods. I repeat, only a copy of the object's reference is sent.
2) Sending a copy of the reference to a method will NOT change the value of the object.
Now after you have done reading 'byte story' and still have further questions then you just go ahead and drop them here, the ranchers here will knock your questions out... and that is Guaranteed.
Note: I recommend, you do exactly what Marcus suggested. The answers he provided for his questions are simple and clear, however if you for any reason don't understand them then simply quote that part and we will try to clear things up.
This code is from SL-275; it will not compile, however, because the MyDate class is not available. You can trace it and see the output posted at the bottom.
This code outputs the following:
Int value is: 11
MyDate: 22-7-1964
MyDate: 4-7-1964
The MyDate object is not changed by the changeObjectRef method; however, the changeObjectAttr method changes the day attribute of the MyDate object.
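Since the SL-275 listing itself did not survive in this post, here is a self-contained reconstruction of the idea it describes. The minimal MyDate stand-in and the method names are assumptions based on the description and output above, not the original Sun code:

```java
// Reconstruction of the pass-by-value demonstration discussed above:
// rebinding a parameter has no effect on the caller, while mutating the
// object through the parameter does.
public class PassDemo {
    static class MyDate {
        int day, month, year;
        MyDate(int d, int m, int y) { day = d; month = m; year = y; }
        public String toString() { return day + "-" + month + "-" + year; }
    }

    static void changeInt(int value)       { value = 55; }                  // no effect outside
    static void changeObjectRef(MyDate d)  { d = new MyDate(1, 1, 2000); }  // rebinds the copy only
    static void changeObjectAttr(MyDate d) { d.day = 4; }                   // mutates the shared object

    public static void main(String[] args) {
        int val = 11;
        changeInt(val);
        System.out.println("Int value is: " + val);   // Int value is: 11

        MyDate date = new MyDate(22, 7, 1964);
        changeObjectRef(date);
        System.out.println("MyDate: " + date);        // MyDate: 22-7-1964

        changeObjectAttr(date);
        System.out.println("MyDate: " + date);        // MyDate: 4-7-1964
    }
}
```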
Thanks Marcus and Vicken for going great length in explaining this to me. I understand where did I go wrong in this question.
Integer is a wrapper class, which means it is not a Java primitive value. So calling FindOut() is pass by reference, not by value as I argued previously. The Java primitive is int. Sorry for posting silly questions when I am not even sure about what a Java primitive value is.
z is not GC because it is a reference!!! Not an object.
Thanks guys. I think I better start reading my Java revision books again.
Also thanks to the author of the article Pass-By Value here in JavaRanch. It's great and easy to understand.
Cheers.
| http://www.coderanch.com/t/243905/java-programmer-OCPJP/certification/Garbage-collection-Marcus-exam | CC-MAIN-2016-07 | en | refinedweb |
XMLWF(1)                 UNIX Programmer's Manual                 XMLWF(1)

WELL-FORMED DOCUMENTS
     A well-formed document must adhere to the following rules:
     +  The file begins with an XML declaration. For instance,
        <?xml version="1.0" standalone="yes"?>.
        NOTE: xmlwf does not currently check for a valid XML declaration.
     +  Every start tag is either empty (<tag/>) or has a corresponding
        end tag.
     +  There is exactly one root element. This element must contain all
        other elements in the document. Only comments, white space, and
        processing instructions may come after the close of the root
        element.
     +  All elements nest properly.
     +  All attribute values are enclosed in quotes (either single or
        double).

     [...] output file.

     -d output-dir
        Specifies a directory to contain transformed representations of
        the input files. By default, -d outputs a canonical
        representation (described below). You can select different
        output formats using -c and -m. The output filenames will be
        exactly the same as the input filenames [...]. Requires -d to
        specify an output file.

     -n Turns on namespace processing. (describe namespaces)

     -c disables memory-mapping and uses normal file IO calls instead.
        Of course, memory-mapping is automatically turned off when
        reading from standard input. Use of memory-mapping can cause
        some platforms to report substantially higher memory usage for
        xmlwf, but this appears to be a matter of the operating system
        reporting memory in a strange way; there is not a leak [...]
        output options (-d, -m, -c, ...).

     -v Prints the version of the Expat library being used, including
        some information on the compile-time configuration [...]
        external. [...] standard output. The errors should go to
        standard error, not standard output.

     There should be a way to get -d to send its output to standard
     [...] distribute and/or modify this document under the same terms
     as Expat itself.

MirOS BSD #10-current            24 January 2003. | http://www.mirbsd.org/htman/i386/man1/xmlwf.htm | CC-MAIN-2016-07 | en | refinedweb |
Applications Accessories: The Point
Introduction
Characteristics of a Point.
Based on this, a Point object can be represented on the Windows coordinate system as follows:
Both the X and the Y properties of the Point structure are of type int. In some cases, you will want to use decimal rather than integer coordinates. To support this, the .NET Framework provides the PointF structure. Its X and Y properties are of type float.
We mentioned that a special point, named the origin, has coordinates (0, 0):
To indicate that this is the point you are referring to, the Point and the PointF structures are equipped with a static field named Empty. In this field, both the X and the Y values are set to 0:
Here is an example of accessing it:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
namespace WindowsFormsApplication1
{
public partial class Exercise : Form
{
public Exercise()
{
InitializeComponent();
}
private void btnPoint_Click(object sender, EventArgs e)
{
Point origin = Point.Empty;
MessageBox.Show(string.Format("P({0}, {1})", origin.X, origin.Y), "Origin");
}
}
}
To help you find out whether a point is empty, the
Point and the PointF structures are
equipped with a Boolean property named IsEmpty. You can
enquire about this property using a conditional statement.
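As a rough illustration of that conditional check, here is the same idea sketched in Java (an aside, not from the original tutorial; java.awt.Point has no IsEmpty member, so the zero test is written out by hand):

```java
import java.awt.Point;

public class EmptyCheck {
    // Equivalent of .NET's Point.IsEmpty: both coordinates are zero
    static boolean isEmpty(Point p) {
        return p.x == 0 && p.y == 0;
    }

    public static void main(String[] args) {
        Point origin = new Point(0, 0);
        if (isEmpty(origin)) {
            System.out.println("The point is empty (it sits at the origin).");
        } else {
            System.out.println("P(" + origin.x + ", " + origin.y + ")");
        }
    }
}
```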
To create or identify a point, you can use one of three
constructors of the Point or the PointF
structure. One of these constructors uses the following syntax:
public Point(int x, int y)
This constructor takes an x and a y argument. Here is an example of creating a point using it:
private void btnPoint_Click(object sender, EventArgs e)
{
Point pt = new Point(6, -2);
MessageBox.Show(string.Format("P({0}, {1})", pt.X, pt.Y), "Coordinate");
}
The Point and the PointF structures are equipped with methods to perform various operations, such as adding or subtracting points. | http://www.functionx.com/vcsharp2010/classes/point.htm | CC-MAIN-2016-07 | en | refinedweb |
Opened 6 years ago
Closed 5 years ago
#12106 closed (fixed)
FileFields ignore initial data when required=False
Description
If you create a form containing a FileField where required=False, then the field will ignore any initial value passed in.
from django import forms
from django.core.files import File

class MyForm(forms.Form):
    file_field = forms.FileField(required=False)

test_file = File(open("test.txt"))
form = MyForm(data={}, initial={"file_field": test_file})
form.is_valid()  # Force form validation.
print form.cleaned_data["file_field"]  # prints None
Given that an initial value was supplied, the cleaned_data should now contain the initial value. However, it contains None, as the initial value has been silently discarded.
Attachments (1)
Change History (4)
Changed 6 years ago by etianen
comment:1 Changed 6 years ago by russellm
- Needs documentation unset
- Needs tests set
- Patch needs improvement unset
- Triage Stage changed from Unreviewed to Accepted
comment:2 Changed 5 years ago by etianen

Looking over the Django source code, it would appear that this issue has been fixed. Many thanks! Would it be okay to mark this as fixed?

comment:3 Changed 5 years ago by russellm

- Resolution set to fixed
- Status changed from new to closed

If you believe an issue has been resolved, then mark it closed. Trac is completely open for all to modify for exactly this reason. If it turns out you are mistaken, the original reporter can always reopen. In this case, you *are* the original reporter, so that's not likely to happen :-)
| https://code.djangoproject.com/ticket/12106 | CC-MAIN-2016-07 | en | refinedweb |
Top Apps
- Audio & Video
- Business & Enterprise
- Communications
- Development
- Home & Education
- Games
- Graphics
- Science & Engineering
- Security & Utilities
- System Administration
PortableApps.com: Portable Software/USB
Portable software for USB, portable, and cloud drives (995,828 weekly downloads)
Classic Shell
Start menu and Windows enhancement software2,735 weekly downloads
Dragon Web Browser
Dragon Web Browser is a fast, lightweight and complex web browser
epo
"epo" is an advanced archiving system on Windows platforms. It tightly integrates into Explorer through its shell namespace extension and offers very easy-to-use archiving features.
Open RAF
This project aims to present you with a powerful, fast, stable and easily extensible real-time audio filtering software for Windows, Linux and Mac OSX (1 weekly download). | http://sourceforge.net/directory/natlanguage%3Ajapanese/os%3Avista/license%3Amit/?sort=update | CC-MAIN-2016-07 | en | refinedweb |
This week's challenge is to work with the Java3D API. This API provides not only wrappers for 3D Graphics, as AWT does with Graphics and Graphics2D, but tools for math and other utilities as well. Some of the features include multithreaded scenes, use for visualization and gaming, virtual reality view-based model, 3d-sound, and support for 3d math, including vector math.
Ideas:
-Make a basic 3D Image or Animation
-Make a basic 3D Game
-Use the API for math and physics modeling
Resources:
-Sun Website
-Sun Tutorial
-Java3D.org Tutorial
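For the math-modeling idea above, the core of what Java3D's javax.vecmath package provides is ordinary 3D vector algebra. Here is a dependency-free sketch of the basic operations (an illustrative stand-in, not the actual Vector3d API):

```java
public class Vec3 {
    public final double x, y, z;

    public Vec3(double x, double y, double z) {
        this.x = x; this.y = y; this.z = z;
    }

    // Dot product: measures how much one vector points along another
    public double dot(Vec3 o) {
        return x * o.x + y * o.y + z * o.z;
    }

    // Cross product: perpendicular to both inputs (right-hand rule)
    public Vec3 cross(Vec3 o) {
        return new Vec3(y * o.z - z * o.y,
                        z * o.x - x * o.z,
                        x * o.y - y * o.x);
    }

    public double length() {
        return Math.sqrt(dot(this));
    }

    public static void main(String[] args) {
        Vec3 i = new Vec3(1, 0, 0);
        Vec3 j = new Vec3(0, 1, 0);
        Vec3 k = i.cross(j);
        System.out.println(k.x + ", " + k.y + ", " + k.z); // 0.0, 0.0, 1.0
        System.out.println(i.dot(j));                      // 0.0
    }
}
```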
Week #36 - Java3D API
4 Replies - 10293 Views - Last Post: 15 November 2010 - 02:43 PM
#1
Week #36- Java3D API
Posted 07 November 2010 - 01:32 PM
Replies To: Week #36- Java3D API
#2
Re: Week #36- Java3D API
Posted 07 November 2010 - 01:55 PM
This looks pretty cool, to be honest I didn't know that this existed. I think I will give this week a go and try out some math or physics modeling, it has always LOOKED interesting to me.
#3
Re: Week #36- Java3D API
Posted 10 November 2010 - 09:00 PM
I might try this out eventhough i hate working with graphix
#4
Re: Week #36- Java3D API
Posted 14 November 2010 - 11:24 AM
Has anyone got Java3d loading on windows? I've tried their installer and their zip files but if I try to import their core classes it always breaks. I haven't used Java anything for 7/8 years but I'm able to compile and run classes that don't use the Java3d libraries.
This post has been edited by mocker: 15 November 2010 - 02:38 PM
#5
Re: Week #36- Java3D API
Posted 15 November 2010 - 02:43 PM
Figured I'd post what I got. Combined a few tutorials to make a moveable cube on top of a simple 3d map. I have the camera set to update on keypress but couldn't figure out the correct transform to make it move to where the cube is. Most of the code is implemented and commented out for the actual camera movement if you can see what I'm doing wrong.
import java.applet.Applet;
import java.awt.BorderLayout;
import java.awt.Frame;
import java.awt.GraphicsConfiguration;
import com.sun.j3d.utils.applet.MainFrame;
import com.sun.j3d.utils.geometry.ColorCube;
import com.sun.j3d.utils.universe.*;
import javax.media.j3d.*;
import javax.vecmath.*;
import com.sun.j3d.utils.behaviors.keyboard.*;
import java.awt.event.*;
import java.util.Enumeration;

public class SimpleBehaviorApp extends Applet {

    private SimpleUniverse u;
    private BoundingSphere bounds;
    private ViewingPlatform ourView;

    public class SimpleViewBehavior extends Behavior {

        private TransformGroup targetTG;
        private ViewingPlatform targetVP;
        private Transform3D rotation = new Transform3D();
        private double angle = 0.0;
        private TransformGroup chaseTG;
        private Vector3d camVec;

        // create SimpleViewBehavior
        SimpleViewBehavior(ViewingPlatform targetViewP, TransformGroup chasedTG) {
            this.targetVP = targetViewP;
            this.chaseTG = chasedTG;
            this.targetTG = targetViewP.getViewPlatformTransform();
            camVec = new Vector3d();
        }

        // initialize the behavior
        // set initial wakeup condition
        // called when behavior becomes live
        public void initialize() {
            // set initial wakeup condition
            this.wakeupOn(new WakeupOnAWTEvent(KeyEvent.KEY_PRESSED));
        }

        // behave
        // called by Java 3D when appropriate stimulus occurs
        public void processStimulus(Enumeration criteria) {
            // decode event and do what is necessary
            angle += 0.1;
            Transform3D newrot = new Transform3D();
            chaseTG.getTransform(newrot);
            // System.out.println(newrot.toString());
            Vector3d translate = new Vector3d();
            Vector3d up = new Vector3d(0, 1, 0);
            Vector3d camV = new Vector3d();
            newrot.get(translate);
            newrot.get(camV);
            camV.y = camV.y + 3d;
            camV.z = camV.z - 3d;
            System.out.println(translate.toString());
            // System.out.println(camVec.toString());

            // Commented out attempts at making the camera chase the target
            // -------------------------
            // rotation.lookAt(camV, translate, up);
            // rotation.set(camVec);
            // rotation.invert();
            /*
            Vector3d ourVec = new Vector3d();
            Vector3d up = new Vector3d(0, 1, 0);
            newrot.get(ourVec);
            ourVec.add(camVec);
            Point3d cam = new Point3d();
            Point3d man = new Point3d();
            rotation.setTranslation(ourVec);
            newrot.transform(man);
            rotation.transform(cam);
            rotation.lookAt(cam, man, up);
            // rotation.invert();
            */
            // rotation.lookAt(Point3d eye, Point3d center, Vector3d up);

            // enable this when the chase cam actually transforms correctly
            // targetTG.setTransform(rotation);
            this.wakeupOn(new WakeupOnAWTEvent(KeyEvent.KEY_PRESSED));
        }
    } // end of class SimpleViewBehavior

    // create 3d land to travel over
    Shape3D createLand() {
        LineArray landGeom = new LineArray(44, GeometryArray.COORDINATES | GeometryArray.COLOR_3);
        float l = -50.0f;
        for (int c = 0; c < 44; c += 4) {
            landGeom.setCoordinate(c + 0, new Point3f(-50.0f, 0.0f, l));
            landGeom.setCoordinate(c + 1, new Point3f(50.0f, 0.0f, l));
            landGeom.setCoordinate(c + 2, new Point3f(l, 0.0f, -50.0f));
            landGeom.setCoordinate(c + 3, new Point3f(l, 0.0f, 50.0f));
            l += 10.0f;
        }
        Color3f c = new Color3f(0.1f, 0.8f, 0.1f);
        for (int i = 0; i < 44; i++)
            landGeom.setColor(i, c);
        return new Shape3D(landGeom);
    }

    public BranchGroup createSceneGraph(SimpleUniverse su) {
        // Create the root of the branch graph
        BranchGroup objRoot = new BranchGroup();
        objRoot.addChild(createLand());
        TransformGroup objRotate = new TransformGroup();
        objRotate.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
        objRoot.addChild(objRotate);
        objRotate.addChild(new ColorCube(0.4));

        // Create a KeyNavigator to move the cube around,
        // and a SimpleViewBehavior for the camera
        KeyNavigatorBehavior myRotationBehavior = new KeyNavigatorBehavior(objRotate);
        SimpleViewBehavior myViewRotationBehavior = new SimpleViewBehavior(this.ourView, objRotate);
        myRotationBehavior.setSchedulingBounds(new BoundingSphere());
        objRoot.addChild(myRotationBehavior);
        myViewRotationBehavior.setSchedulingBounds(new BoundingSphere());
        objRoot.addChild(myViewRotationBehavior);

        // Let Java 3D perform optimizations on this scene graph.
        objRoot.compile();
        return objRoot;
    } // end of createSceneGraph method of SimpleBehaviorApp

    // Create a simple scene and attach it to the virtual universe
    public SimpleBehaviorApp() {
        setLayout(new BorderLayout());
        GraphicsConfiguration config = SimpleUniverse.getPreferredConfiguration();
        Canvas3D canvas3D = new Canvas3D(config);
        add("Center", canvas3D);

        // SimpleUniverse is a Convenience Utility class
        SimpleUniverse simpleU = new SimpleUniverse(canvas3D);
        this.u = simpleU;
        ourView = u.getViewingPlatform();

        // This will move the ViewPlatform back a bit so the
        // objects in the scene can be viewed.
        Transform3D locator = new Transform3D();
        locator.setTranslation(new Vector3f(0, 3f, -3f));
        locator.lookAt(new Point3d(0d, 3d, -6d), new Point3d(0d, 0d, 5d), new Vector3d(0d, 1d, 0d));
        locator.invert();
        this.ourView.getViewPlatformTransform().setTransform(locator);

        BranchGroup scene = createSceneGraph(this.u);
        simpleU.addBranchGraph(scene);
    } // end of SimpleBehaviorApp (constructor)

    public static void main(String[] args) {
        System.out.print("SimpleBehaviorApp.java \n- a demonstration of creating a simple ");
        System.out.println("moveable cube on top of a map.");
        System.out.println("Use the arrow keys to rotate and move the orb. The green face is the front.\n");
        System.out.println("This is modified from the tutorials at The Java 3D API Tutorial at ");
        System.out.println("");
        Frame frame = new MainFrame(new SimpleBehaviorApp(), 256, 256);
    } // end of main (method of SimpleBehaviorApp)
} // end of class SimpleBehaviorApp
This post has been edited by mocker: 15 November 2010 - 02:44 PM
| http://www.dreamincode.net/forums/topic/198872-week-%2336-java3d-api/ | CC-MAIN-2016-07 | en | refinedweb |
One of the main benefits of the addition of generics to C# is the ability to easily create strongly typed collections using types in the System.Collections.Generics namespace. For example, you can create a variable of type List<int>, and the compiler will check all accesses to the variable – ensuring that only ints are added to the collection. This is a big usability improvement over the untyped collections available in version 1 of C#.
Unfortunately, strongly typed collections have drawbacks of their own. For example, suppose you have a strongly typed List<object> and you want to append all the elements from a List<int> to your List<object>. You would like to be able to write code like this:
List<int> ints = new List<int>();
ints.Add(1);
ints.Add(10);
ints.Add(42);
List<object> objects = new List<object>();
// doesn’t compile ‘ints’ is not a IEnumerable<object>
objects.AddRange(ints);
In this case, you would like variance: treating an instantiation of a generic type (in this case IEnumerable<int>) as a different instantiation of that same type (in this case IEnumerable<object>).
C# doesn’t support variance for generic types, so when encountering cases like this you will need to find a workaround in your code. If you do encounter this kind of problem, there are a couple of techniques you can use to workaround the problem. For the simplest cases, like the case of a single method like AddRange in the example above, you can declare a simple helper method to do the conversion for you. For example, you could write this method:
// Simple workaround for single method
// Variance in one direction only
public static void Add<S, D>(List<S> source, List<D> destination)
where S : D
{
foreach (S sourceElement in source)
destination.Add(sourceElement);
}
...
// does compile
Add<int, object>(ints, objects);
This example shows some characteristics of a simple variance workaround. The helper method takes two type parameters, for the source and destination, and the source type parameter S has a constraint which is the destination type parameter D. This means that the List<> being read from must contain elements which are convertible to the element type of the List<> being inserted into. This allows the compiler to enforce that int is convertible to object. Constraining a type parameter to derive from another type parameter is called a 'naked type parameter constraint'.
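For comparison, the same helper can be sketched in Java, where a bounded type parameter plays the role of C#'s naked type parameter constraint (this analogue is an illustration, not part of the original post):

```java
import java.util.ArrayList;
import java.util.List;

public class VarianceHelper {
    // S must be convertible to D, mirroring C#'s "where S : D" constraint
    public static <D, S extends D> void addAll(List<S> source, List<D> destination) {
        for (S element : source) {
            destination.add(element);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>(List.of(1, 10, 42));
        List<Number> numbers = new ArrayList<>();
        addAll(ints, numbers);        // Integer is a Number, so this compiles
        System.out.println(numbers);  // [1, 10, 42]
    }
}
```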
Defining a single method to work around variance problems is not too bad. Unfortunately, variance issues can become quite complex quite quickly. The next level of complexity is when you want to treat an interface of one instantiation as an interface of another instantiation. For example, you have an IEnumerable<int>, and you want to pass it to a method which takes an IEnumerable<object>:
static void PrintObjects(IEnumerable<object> objects)
{
foreach (object o in objects)
Console.WriteLine(o);
}
...
List<int> ints = new List<int>();
// would like to do this, but can’t ...
// ... ints is not an IEnumerable<object>
PrintObjects(ints);
The workaround for the interface case, is to create a wrapper object which does the conversions for each member of the interface. This would look something like this:
// Workaround for interface
// Variance in one direction only so type expressinos are natural
public static IEnumerable<D> Convert<S, D>(IEnumerable<S> source)
return new EnumerableWrapper<S, D>(source);
private class EnumerableWrapper<S, D> : IEnumerable<D>
List<int> ints = new List<int>();
// would like to do this, but can’t ...
// ... ints is not an IEnumerable<object>
PrintObjects(Convert<int, object>(ints));
Again, notice the 'naked type parameter constraint' on the wrapper class and helper method. This machinery is getting pretty complicated, but the code in the wrapper class is pretty mechanical. You could even imagine generating a wrapper which will wrap all read operations on an IList<> in a type safe manner, but wrapping of write operations cannot be done so simply.
Here’s part of a wrapper for dealing with variance on the IList<T> interface which shows the problems that arise with variance in both the read and write directions:
private class ListWrapper<S, D> : CollectionWrapper<S, D>, IList<D>
    where S : D
{
    public ListWrapper(IList<S> source) : base(source)
    {
        this.source = source;
    }

    public void Insert(int index, D item)
    {
        if (item is S)
            this.source.Insert(index, (S)item);
        else
            throw new Exception("Invalid type exception");
    }

    public int IndexOf(D item)
    {
        if (item is S)
            return this.source.IndexOf((S)item);
        return -1;
    }

    private IList<S> source;
}
The Insert method on the wrapper has a problem – it takes as an argument a D, but it must insert it into an IList<S>. Since D is a base type of S, not all D’s are S’s, so the Insert operation may fail. This example has an analogue with variance with arrays. When inserting an object into an object[], a dynamic type check is performed because the object[] may in fact be a string[] at runtime. For example:
object[] objects = new string[10];
// no problem, adding a string to a string[]
objects[0] = "hello";
// runtime exception, adding an object to a string[]
objects[1] = new object();
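Java's covariant arrays behave exactly the same way, and the runtime check can be demonstrated directly (an illustrative aside, not from the original post):

```java
public class ArrayVariance {
    public static void main(String[] args) {
        Object[] objects = new String[10];

        // no problem, adding a string to a String[]
        objects[0] = "hello";

        try {
            // the runtime check fails: the array is really a String[]
            objects[1] = new Object();
        } catch (ArrayStoreException e) {
            System.out.println("ArrayStoreException: can't store an Object in a String[]");
        }
    }
}
```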
Back to our IList<> example, the wrapper for the Insert method can simply throw when the actual type doesn’t match the desired type at runtime. So again, you could imagine that the compiler would automatically generate the wrapper for the programmer. There are cases where this policy isn’t the right thing to do however. The IndexOf method searches the collection for the item provided, and returns the index in the collection if the item is found. However, if the item is not found, the IndexOf method simply returns -1, it doesn’t throw. This kind of wrapping cannot be provided by an automatically generated wrapper.
So far we’ve seen the two simplest workarounds for generic variance issues. However, variance issues can get arbitrarily complex – for example treating a List<IEnumerable<int>> as a List<IEnumerable<object>>, or treating a List<IEnumerable<IEnumerable<int>>> as a List<IEnumerable<IEnumerable<object>>>.
Generating these wrappers to work around variance problems in your code can introduce a significant overhead in your code. Also, it can introduce referential identity issues, as each wrapper does not have the same identity as the original collection which can lead to subtle bugs. When using generics, you should choose your type instantiation to reduce mismatches between components which are tightly coupled. This may require some compromises in the design of your code. As always, design involves tradeoffs between conflicting requirements, and the constraints of the types system in the language should be considered in your design process.
There are type systems which include generic variance as a first class part of the language. Eiffel is the prime example of this. However, including generics variance as a first class part of the type system would dramatically increase the complexity of the type system of C#, even in relatively straightforward scenarios which don’t involve variance. As a result, the C# designers felt that not including variance was the right choice for C#.
Here’s the full source code for the examples discussed above.
using System;
using System.Collections.Generic;
using System.Text;
using System.Collections;
static class VarianceWorkaround
{
public EnumerableWrapper(IEnumerable<S>
public bool MoveNext()
return this.source.MoveNext();
public void Reset()
this.source.Reset();
private IEnumerable<S> source;
// Variance in both directions, causes issues
// similar to existing array variance
public static ICollection<D> Convert<S, D>(ICollection<S> source)
return new CollectionWrapper<S, D>(source);
private class CollectionWrapper<S, D>
: EnumerableWrapper<S, D>, ICollection<D>
public CollectionWrapper(ICollection<S> source)
: base(source)
// variance going the wrong way ...
// ... can yield exceptions at runtime
public void Add(D item)
this.source.Add((S)item);
throw new Exception(@"Type mismatch exception, due to type hole introduced by variance.");
public void Clear()
this.source.Clear();
// ... but the semantics of the method yields reasonable
// semantics
public bool Contains(D item)
return this.source.Contains((S)item);
return false;
// variance going the right way ...
public void CopyTo(D[] array, int arrayIndex)
foreach (S src in this.source)
array[arrayIndex++] = src;
public int Count
get { return this.source.Count; }
public bool IsReadOnly
{
get { return this.source.IsReadOnly; }
public bool Remove(D item)
return this.source.Remove((S)item);
private ICollection<S> source;
public static IList<D> Convert<S, D>(IList<S> source)
return new ListWrapper<S, D>(source);
where S : D
// variance the wrong way ...
// ... can throw exceptions at runtime
public void RemoveAt(int index)
this.source.RemoveAt(index);
public D this[int index]
get
return this.source[index];
set
if (value is S)
this.source[index] = (S)value;
else
throw new Exception("Invalid type exception.");
private IList<S> source;
}
namespace GenericVariance
class Program
foreach (object o in objects)
static void AddToObjects(IList<object> objects)
// this will fail if the collection provided is a
// wrapped collection
objects.Add(new object());
static void Main(string[] args)
VarianceWorkaround.Add<int, object>(ints, objects);
PrintObjects(VarianceWorkaround
.Convert<int, object>(ints));
AddToObjects(objects); // this works fine
AddToObjects(VarianceWorkaround
static void ArrayExample()
// runtime exception, adding an object to a string[] | http://blogs.msdn.com/b/peterhal/archive/2005/07/29/445123.aspx | CC-MAIN-2016-07 | en | refinedweb |
operator new in custom namespace
Discussion in 'C++' started by dirk@dirkgregorius.de
- William F. Robertson, Jr.
- Jul 29, 2003
- Replies:
- 8
- Views:
- 563
- Neil Cerutti
- Dec 22, 2005
Stream operator in namespace masks global stream operator - mrstephengross, May 9, 2007, in forum: C++
- Replies:
- 3
- Views:
- 527
- James Kanze
- May 10, 2007
Custom Controls: Import a custom namespace and use its functions within - user, Jul 18, 2007, in forum: ASP .Net
- Replies:
- 1
- Views:
- 531
- Kevin Spencer
- Jul 19, 2007
What are the key differences between operator new and operator new[]? - xmllmx, Feb 3, 2010, in forum: C++
- Replies:
- 6
- Views:
- 597
- xmllmx
- Feb 3, 2010 | http://www.thecodingforums.com/threads/operator-new-in-custom-namespace.501536/ | CC-MAIN-2016-07 | en | refinedweb |
XML Documents and Data
.NET Framework 3.5
The XML classes in the System.Xml namespace provide a comprehensive and integrated set of classes, allowing you to work with XML documents and data. The XML classes support parsing and writing XML, editing XML data in memory, data validation, and XSLT transformation.
| https://msdn.microsoft.com/en-us/library/2bcctyt8(v=vs.90).aspx | CC-MAIN-2016-07 | en | refinedweb |
Retrieving and Modifying Data in ADO.NET.
In This Section
- Connecting to a Data Source in ADO.NET
Describes how to establish a connection to a data source and how to work with connection events.
- Connection Strings in ADO.NET
Contains topics describing various aspects of using connection strings, including connection string keywords, security info, and storing and retrieving them.
- Connection Pooling
Describes connection pooling for the .NET Framework data providers.
- Commands and Parameters
Contains topics describing how to create commands and command builders, configure parameters, and how to execute commands to retrieve and modify data.
- DataAdapters and DataReaders
Contains topics describing DataReaders, DataAdapters, parameters, handling DataAdapter events and performing batch operations.
- Transactions and Concurrency
Contains topics describing how to perform local transactions, distributed transactions, and work with optimistic concurrency.
- Retrieving Identity or Autonumber Values
Provides an example of mapping the values generated for an identity column in a SQL Server table or for an Autonumber field in a Microsoft Access table, to a column of an inserted row in a table. Discusses merging identity values in a DataTable.
- Retrieving Binary Data
Describes how to retrieve binary data or large data structures using CommandBehavior.SequentialAccess to modify the default behavior of a DataReader.
- Modifying Data with Stored Procedures
Describes how to use stored procedure input parameters and output parameters to insert a row in a database, returning a new identity value.
- Retrieving Database Schema Information
Describes how to obtain available databases or catalogs, tables and views in a database, constraints that exist for tables, and other schema information from a data source.
- DbProviderFactories
Describes the provider factory model and demonstrates how to use the base classes in the System.Data.Common namespace.
- Data Tracing in ADO.NET
Describes how ADO.NET provides built-in data tracing functionality.
- Performance Counters in ADO.NET
Describes performance counters available for SqlClient and OracleClient.
- Asynchronous Programming
Describes ADO.NET support for asynchronous programming.
- SqlClient Streaming Support
Discusses how to write applications that stream data from SQL Server without having it fully loaded in memory. | https://msdn.microsoft.com/en-us/library/ms254937(v=vs.110).aspx | CC-MAIN-2016-07 | en | refinedweb |
/*
 * @
 */

/*
 * TokenRecord.h
 * TokendMuscle
 */

#ifndef _TOKENRECORD_H_
#define _TOKENRECORD_H_

#include "Record.h"
#include <string>

class TokenRecord : public Tokend::Record
{
	NOCOPY(TokenRecord)
public:
	TokenRecord(const std::string &objectID);
	virtual ~TokenRecord();

	std::string objid() const { return mObjectID; }

private:
	std::string mObjectID;	// we don't need full MscObjectInfo, since MscToken only needs objid
};

#endif /* !_TOKENRECORD_H_ */
| http://opensource.apple.com/source/Tokend/Tokend-37563/MuscleCard/TokenRecord.h | CC-MAIN-2016-07 | en | refinedweb |
David Findley's Blog
My little home in the cloud.
Certificate error with Web Platform Installer
A friend of mine was having an issue getting the Web Platform Installer to work on his Windows Server 2008 R2 box. He said there was some sort of cert error and asked me to try on my local machine to see if I got the cert error. I tried it and I did get a cert error on Windows 7 64bit. I happened to notice that that url simply redirects to . Out of curiosity I dropped to a command line and tried to run .\WebPlatformInstaller.exe /? to see if there were any command line options. It gave an error that said invalid URI. So we tried running it with the product list url like: "WebPlatformInstaller.exe" . This seems to get around the expired cert that is on go.microsoft.com.
ASP.NET MVC – Multiple buttons in the same form
I keep seeing this question in forums and on twitter so I thought I’d post all the various ways you can handle this and what the pros and cons are.
ASP.NET MVC ModelState should work like TempData
I prefer to have the actions that forms post to just process the posted data and then redirect to a different action for viewing the results. So in order to pass validation errors back to the form action I need ModelState to work like TempData does. In fact it seemed that before ModelState was added that one of the most common scenarios for using TempData was passing validation error messages between actions so I’m not sure why it doesn't already work like this. I’m using RC2 so its doubtful this will change before RTM. :(
Interesting finds 3/13/2009
As soon as I get some time I need to look at these a little closer:
Merry Christmas Indeed!
Janice went all out this year and got me an Ibanez JS1000 (Joe Satriani series) guitar, a Line 6 POD X3 Live effects board and a pair of Roland CM-30 amplified monitors. My fingers are all tore up since I've been out of practice for some time. But it sure is fun to get back to some jamming. The JS1000 is pretty light and has easy action. Combined with the POD X3 I can get quite a variety of amazing sounds. I even got the X3 hooked up to my MacBook Pro and finally was able to try out Garage Band. I was able to lay down the rhythm track for Crushing Day (what I could remember from back in the day) and then play the lead part over it with no lag. The Roland CM-30s are nice because I can run my Alesis QS8 and the POD X3 into them at the same time. This is probably the best setup I've ever had.
LINQ - The Uber FindControl
With a simple extension method to ControlCollection to flatten the control tree you can use LINQ to query the control tree:
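The C# extension-method code referred to above wasn't captured in this copy, but the flatten-then-query idea can be sketched over a generic tree, shown here in Java streams (all names are illustrative, not the post's actual code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

class Control {
    final String id;
    final List<Control> children = new ArrayList<>();

    Control(String id) { this.id = id; }
    Control add(Control c) { children.add(c); return this; }

    // Flatten this control and all descendants into one stream,
    // so ordinary stream queries can search the whole tree.
    Stream<Control> flatten() {
        return Stream.concat(Stream.of(this),
                             children.stream().flatMap(Control::flatten));
    }
}

public class UberFindControl {
    public static void main(String[] args) {
        Control page = new Control("page")
            .add(new Control("form").add(new Control("nameBox")))
            .add(new Control("footer"));

        // The "uber FindControl": query any depth by id
        Control found = page.flatten()
                            .filter(c -> c.id.equals("nameBox"))
                            .findFirst()
                            .orElse(null);
        System.out.println(found != null ? found.id : "not found"); // nameBox
    }
}
```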
A Quick Fix for the Validator SetFocusOnError Bug
The ASP.NET validators have this nice property called "SetFocusOnError" that is supposed to set the focus to the first control that failed validation. This all works great until your validator control is inside a naming container. I ran into this recently when using validators in a DetailsView. Take this simple example:
<%@ Page %>
<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
            DataBind();
    }
</script>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="_frm" runat="server">
        <asp:DetailsView>
            <Fields>
                <asp:TemplateField>
                    <EditItemTemplate>
                        <asp:TextBox />
                        <asp:RequiredFieldValidator />
                    </EditItemTemplate>
                </asp:TemplateField>
            </Fields>
            <FooterTemplate>
                <asp:ValidationSummary />
                <asp:Button />
            </FooterTemplate>
        </asp:DetailsView>
    </form>
</body>
</html>
<input name="dv1$FirstNameTextBox" type="text" id="dv1_FirstNameTextBox" />
If you just do a post back without entering a value to cause the validator to fail, it will output this line of JavaScript in an attempt to set the focus to the invalid element:
WebForm_AutoFocus('FirstNameTextBox');
See anything wrong with this? It would seem that the validators just use the string value you typed in for the ControlToValidate property rather than doing a FindControl and using the UniqueID. This is exactly what happens, and I verified it with Reflector. The Validate method on BaseValidator does this:
if ((!this.IsValid && (this.Page != null)) && this.SetFocusOnError)
{
    this.Page.SetValidatorInvalidControlFocus(this.ControlToValidate);
}
If you follow the call to SetValidatorInvalidControlFocus you'll see that it never resolves the full UniqueID of the control that it's going to set focus to. Ok, so this sucks. How do I work around it? My solution was to simply ditch using the SetFocusOnError property and implement the focus logic myself, which is actually pretty easy. I overrode the Validate method on my Page like this:
public override void Validate(string group)
{
    base.Validate(group);

    // find the first validator that failed
    foreach (IValidator validator in GetValidators(group))
    {
        if (validator is BaseValidator && !validator.IsValid)
        {
            BaseValidator bv = (BaseValidator)validator;

            // look up the control that failed validation
            Control target = bv.NamingContainer.FindControl(bv.ControlToValidate);

            // set the focus to it
            if (target != null)
                target.Focus();

            break;
        }
    }
}
If you're using C# 3 this is even easier using LINQ:
public override void Validate(string group)
{
    base.Validate(group);

    // get the first validator that failed
    var validator = GetValidators(group)
        .OfType<BaseValidator>()
        .FirstOrDefault(v => !v.IsValid);

    // set the focus to the control
    // that the validator targets
    if (validator != null)
    {
        Control target = validator
            .NamingContainer
            .FindControl(validator.ControlToValidate);

        if (target != null)
            target.Focus();
    }
}
I hope this saves someone the headache of tracking this down.
VS.NET Macro To Group and Sort Your Using Statements
I try to follow a coding standard for organizing my using statements. System.* goes at the top and then other namespaces grouped together like this:
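The example grouping itself wasn't captured in this copy, but the rule (System namespaces first, then everything else alphabetically) is just a two-key sort; here is a sketch of that logic in Java, purely for illustration:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class UsingSorter {
    // Sort: System namespaces first, then alphabetical within each group.
    public static List<String> sortUsings(List<String> namespaces) {
        List<String> sorted = new ArrayList<>(namespaces);
        sorted.sort(Comparator
            .comparing((String ns) -> !(ns.equals("System") || ns.startsWith("System.")))
            .thenComparing(Comparator.naturalOrder()));
        return sorted;
    }

    public static void main(String[] args) {
        List<String> result = sortUsings(List.of(
            "MyCompany.Web", "System.Text", "System", "MyCompany.Data"));
        System.out.println(result);
        // [System, System.Text, MyCompany.Data, MyCompany.Web]
    }
}
```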
Fix ReturnUrl When Sharing Forms Authentication with Multiple Web Applications
Scenario: You have two web applications and login.mydomain.com. The login site provides a centralized login application and www contains any number of web applications that should use the auth ticket issued by the login site.
A VS.NET Macro to Generate Machine Keys.
I needed to create a new machine key for an asp.net site. I found a couple of command line utils out on the web that would create a new key but I thought it would be easier to just have it avail in VS.NET. So, I threw together this little macro that will generate the machine key and insert it. Just run the macro while you have you web.config open in VS.NET. If you already have a machinekey it will find it and replace it. If not it will just add it right after the <system.web> node. It should do the proper indents and everything too. | http://weblogs.asp.net/dfindley | CC-MAIN-2016-07 | en | refinedweb |
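The macro body isn't reproduced here, but the key-generation step it performs is just random bytes rendered as hex. A sketch of that step, written in Java purely as an illustration of the idea (not the original macro):

```java
import java.security.SecureRandom;

public class KeyGen {
    // Render `bytes` random bytes as an uppercase hex string,
    // the format a machineKey validationKey/decryptionKey uses.
    public static String randomHexKey(int bytes) {
        byte[] buf = new byte[bytes];
        new SecureRandom().nextBytes(buf);
        StringBuilder sb = new StringBuilder(bytes * 2);
        for (byte b : buf) {
            sb.append(String.format("%02X", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // 64-byte and 32-byte keys are common choices for the two attributes
        System.out.println(randomHexKey(64));
        System.out.println(randomHexKey(32));
    }
}
```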
Using CodeMirror to add C# syntax highlighting to an editable HTML Textarea
I wanted to display some C# code in an html <textarea> control that was displayed in an ASP.NET MVC 3 application using @Html.TextArea():
@using (Html.BeginForm()) {
    @Html.TextArea("sampleCode", (string)ViewBag.sampleCode, new { })
}
Demo
Here's an idea of how it works. This is an editable textbox with C# code syntax highlighting, which dynamically updates as you type.
Adding CodeMirror to a textbox
Adding CodeMirror is really easy:
- Grab the CodeMirror package
- Add the necessary CSS and JavaScript references in your page
- Call CodeMirror.fromTextArea() on your textarea element
Getting the CodeMirror package
Normally you'd grab that from the CodeMirror site, but my C# syntax changes were just merged in and aren't in the download package yet, so you can just grab the latest zip from GitHub.
Add the necessary CSS and JavaScript references in your page
- The first script reference brings in the main CodeMirror library
- The second script reference uses one of the common supported modes - in this case I'm using the c-like syntax mode
- The next is the main codemirror CSS reference
- The last reference is for the default CodeMirror theme - there are 4 included color themes, and you can add your own with your own custom CSS file
Call CodeMirror.fromTextArea() on your textarea element
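As a rough sketch of the wiring (treat the exact option set as an assumption against your CodeMirror version), a helper can build the option object, and the page then hands it to CodeMirror.fromTextArea():

```javascript
// Build the editor options for a C# textarea. "text/x-csharp" is the
// MIME name defined for the clike mode later in this post; lineNumbers
// and matchBrackets are standard CodeMirror options.
function csharpEditorOptions(extra) {
  var options = { mode: "text/x-csharp", lineNumbers: true, matchBrackets: true };
  for (var key in (extra || {})) { options[key] = extra[key]; }
  return options;
}

// In the page (needs the DOM and the CodeMirror scripts loaded):
// var editor = CodeMirror.fromTextArea(
//     document.getElementById("sampleCode"), csharpEditorOptions());
```

Keeping the options in one helper makes it easy to reuse the same editor setup on several pages.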
Styling the CodeMirror box
CodeMirror creates a new element with the class "CodeMirror" so you need to apply your CSS styles to the CodeMirror class, not the ID or class of your original element.
Putting it all together
Adding new syntax
I mentioned earlier that when I first found CodeMirror, I noticed that it didn't have C# support. Fortunately, I noticed that it did have a C-Like syntax mode, and adding C# syntax support was really easy.
- I looked up the C# 4 keywords on MSDN here.
- I did some find-replace-fu to turn it into a space delimited list.
- I added a few lines to the end of clike.js, following the format of the previous C++ and Java samples.
CodeMirror.defineMIME("text/x-java", {
    name: "clike",
    atAnnotations: true,
    keywords: keywords("...")  // Java keyword list elided in this copy
});
CodeMirror.defineMIME("text/x-csharp", {
    name: "clike",
    atAnnotations: true,
    atStrings: true,
    keywords: keywords("abstract as base bool break byte case catch char checked class const continue decimal" +
        " default delegate do double else enum event explicit extern false finally fixed float for" +
        " foreach goto if implicit in int interface internal is lock long namespace new null object" +
        " operator out override params private protected public readonly ref return sbyte sealed short" +
        " sizeof stackalloc static string struct switch this throw true try typeof uint ulong unchecked" +
        " unsafe ushort using virtual void volatile while add alias ascending descending dynamic from get" +
        " global group into join let orderby partial remove select set value var yield")
});
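The keywords() helper these calls rely on simply turns the space-delimited list into a lookup object; a standalone equivalent (mirroring the one in clike.js) looks roughly like this:

```javascript
// Turn "abstract as base ..." into { abstract: true, as: true, ... }
// so the tokenizer can test membership with a single property lookup.
function keywords(str) {
  var obj = {}, words = str.split(" ");
  for (var i = 0; i < words.length; ++i) obj[words[i]] = true;
  return obj;
}

var cs = keywords("abstract as base bool");
// cs.abstract is true; anything not in the list is undefined
```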
The key is to find a similar language which is already supported and modify it.
On Monday 30 June 2003 12:17 am, Mika Fischer wrote:
> * David <dbree@duo-county.com> [2003-06-30 06:45]:
> > After running various tests - even editing a copy of the file to where
> > it only contained "#!/bin/bash" - I finally renamed the file,
> > substituting a dash for each dot, and bingo! it worked.
> >
> > It seems that run-parts (at least mine, v 1.15) ignores any file with
> > a dot in the name.
>
> That's a feature, not a bug. :) See also the manpage:
> ----
> If the --lsbsysinit option is not given then the names must
> consist entirely of upper and lower case letters, digits,
> underscores, and hyphens.
> ----

There are several variations to these directories, but this is the
sequence that applies to my situation...

ifeq ($(strip $(ALL_PATCH_DIR)),)
ALL_PATCH_DIR = $(shell if [ -d /usr/src/kernel-patches/all/ ]; \
        then echo /usr/src/kernel-patches/all/ ;\
        fi)
endif

... and ...

ALL_PATCH_APPLY = $(ALL_PATCH_DIR)/apply

... later ...

        if [ -n "$(VERSIONED_ALL_PATCH_DIR)" -a \
             -d $(VERSIONED_ALL_PATCH_APPLY) ]; then \
                run-parts --verbose $(VERSIONED_ALL_PATCH_APPLY); \
        else \
                true; \
        fi

(debian-2-4-21 was my addition to make run-parts work)
Extract Human Task Event payload detail in SOA Mediator
By kyap on Jan 27, 2014
Oracle SOA/BPM Human Task provides a very powerful feature to generate Business Event upon a task assignment or completion. The associated Business Event will then be sent to the Oracle Event Delivery Network and it can be captured by a Mediator component, which in turn executes others services or processes, like creating a Microsoft Exchange Task with Reminder as nicely described from this blog entry.
When the information that you need from the Business Event is only related to the Task metadata (taskid, assignee, taskURL…) then the configuration is pretty straight forward as it is possible to extract each desired element easily from the XSL mapping tools. And there are plenty of blogs out there showing the step by step configuration for this purpose.
However, if you need to extract a specific 'value' from the Task payload, the picture suddenly becomes more complicated, as the 'payload' element in the Business Event has an xsd:anyType type and it is not possible to "browse" the payload detail, and hence map the values out easily from the XSL Mapping tool in JDeveloper for further processing.
To address the problem there are two options:
- (Easy but Restrictive) Map the specific values to be caught using the Human Task Mapped Attributes (or directly in the systemMessageAttributes in the execData). When doing so, those values can be used not only for queries from BPM Workspace, but they will also be mapped as Task metadata, so you can easily link them up during the XSL transformation. Nevertheless, every time you want to extract additional values, the Human Task has to be modified and the associated process needs to be redeployed. It is a valid option if you know exactly what you need upfront.
- (Relatively difficult but flexible) Fully utilize XSL Transformation capability to extract the “hidden” values from the payload element. This option requires more XML/XSL knowledge and not necessarily easy to comprehend. However it provides much more flexibility and it can have a completely independent design lifecycle from the original business process. This is the option we will explore in this post. If you want to get before hand the source code of the example below, you can download from here.
First of all, consider the following BPM Process with a very simple Human Task taking 2 Data arguments defined by the Business Object FirstBO and SecondBO. Please note a third Business Object ThirdBO is encapsulated within SecondBO.
Now let’s enable the Human Task to generate a Business Event upon the task assignment. To do this, open the Human Task, go to Event and check the box OnAssigned (Please note the other available option but there are not key in our current demonstration)
From the Business Process perspective we are all set. Now let's consider a separate SOA Composite Application which only contains a Mediator to capture the associated business event and a File Adapter to write data into a file.
Once we link the Mediator to the File Adapter, we can then generate an XSL stylesheet to transform the Human Task Event into a service call that writes data into a file.
Now the challenge is to extract the Payload information within the stylesheet. This can be done via the XSL source mode with 3 major steps:
- Add all the namespace of the objects within the payload. Those namespaces can be retrieved either directly from the BO definition or the Human Task configuration page
- Create XSL variables, pointing to a specific argument of the Human Task within the Payload, using the namespaces defined above. In our case we have 2 arguments so we create 2 variables.
- Finally, you can now manually extract each desired value from the XSL variable by using the XPath expression
xmlns:p1=""
xmlns:p2=""
xmlns:p3=""

<xsl:variable name="firstPayload"
    select="/tns:taskAssignedMessage/task:task/task:payload/p1:FirstBO"/>
<xsl:variable name="secondPayload"
    select="/tns:taskAssignedMessage/task:task/task:payload/p2:SecondBO"/>

<imp1:FirstBO.one>
    <xsl:value-of select="$firstPayload/p1:one"/>
</imp1:FirstBO.one>
<imp1:FirstBO.two>
    <xsl:value-of select="$firstPayload/p1:two"/>
</imp1:FirstBO.two>
<imp1:FirstBO.three>
    <xsl:value-of select="$firstPayload/p1:three"/>
</imp1:FirstBO.three>
<imp1:SecondBO.one>
    <xsl:value-of select="$secondPayload/p2:one"/>
</imp1:SecondBO.one>
<imp1:SecondBO.two>
    <xsl:value-of select="$secondPayload/p2:two"/>
</imp1:SecondBO.two>
<imp1:SecondBO.three.ThirdBO.one>
    <xsl:value-of select="$secondPayload/p2:three/p3:one"/>
</imp1:SecondBO.three.ThirdBO.one>
<imp1:SecondBO.three.ThirdBO.two>
    <xsl:value-of select="$secondPayload/p2:three/p3:two"/>
</imp1:SecondBO.three.ThirdBO.two>
Now you can deploy both projects into your BPM domain, and you can initiate the SampleProcess to start the testing:
After the execution, monitor the process thread in FMW Control and you should see the execution on the Mediator upon the Human Task initialization.
By clicking on the Mediator link you will be able to see the payload retrieved by the Mediator (Business Event):
And of course the transformed payload using the XSL Stylesheet we specified above:
Voilà! Hope this entry is useful to you.
Django was originally developed right in the middle of the United States – quite literally, as Lawrence, Kansas, is less than 40 miles from the geographic center of the continental United States. Like most open source projects, though, Django's community grew to include people from all over the globe, and with that growth came the need for internationalization and localization. Because many developers have at best a fuzzy understanding of these terms, we'll define them briefly.
Internationalization refers to the process of designing programs for the potential use of any locale. This includes marking text (such as UI elements and error messages) for future translation, abstracting the display of dates and times so that different local standards may be observed, providing support for differing time zones, and generally making sure that the code contains no assumptions about the location of its users. You'll often see "internationalization" abbreviated I18N. (The 18 refers to the number of letters omitted between the initial "I" and the terminating "N".) Localization refers to the process of actually translating an internationalized program for use in a particular locale; you'll sometimes see it abbreviated as L10N. Django itself is fully internationalized and ships with more than 50 different localization files. If you're not a native English speaker, there's a good chance that Django is already translated into your primary language.
The same internationalization framework used for these localizations is available for you to use in your own code and templates.
To use this framework, you'll need to add a minimal number of hooks, called translation strings, to your Python code and templates.
The three steps for internationalizing your Django application are:

1. Embed translation strings in your Python code and templates.
2. Get translations for those strings, in whichever languages you want to support.
3. Activate the locale middleware in your Django settings.
We’ll cover each one of these steps in detail.
Translation strings specify “This text should be translated.” These strings can appear in your Python code and templates. It’s your responsibility to mark translatable strings; the system can only translate strings it knows about.
Specify a translation string by using the function ugettext(). It's convention to import this as a shorter alias, _, to save typing. Translation works on computed values and variables too, not just on string literals. Be careful with strings that contain placeholders: a translation of a string like "Today is %(month)s %(day)s." may need the placeholders (the month and the day) with their positions swapped.
For this reason, you should use named-string interpolation (e.g., %(day)s) instead of positional interpolation (e.g., %s or %d) whenever you have more than a single parameter. If you used positional interpolation, translations wouldn't be able to reorder placeholder text.

The lazy variant, ugettext_lazy(), returns a unicode proxy rather than a plain string, so it can only be used where a unicode object is acceptable:

u"Hello %s" % ugettext_lazy("people")  # This is fine
"Hello %s" % ugettext_lazy("people")   # This will not work, since you cannot insert a unicode object
                                       # into a bytestring (nor can you insert our unicode proxy there)

Field names and table names should be marked for translation (otherwise, they won't be translated in the admin interface). This means writing explicit verbose_name and verbose_name_plural options in the Meta class, though, rather than relying on Django's default determination of verbose_name and verbose_name_plural by looking at the model's class name:
from django.utils.translation import ugettext_lazy as _

class MyThing(models.Model):
    name = models.CharField(_('name'), help_text=_('This is the help text'))

    class Meta:
        verbose_name = _('my thing')
        verbose_name_plural = _('my things')
Translation in Django templates uses two template tags and a slightly different syntax than in Python code. To give your template access to these tags, put {% load i18n %} toward the top of your template.
It’s not possible to mix a template variable inside a string within {% trans %}. If your translations require strings with variables (placeholders), use {% blocktrans %}:
{% blocktrans %}This string will have {{ value }} inside.{% endblocktrans %}

To pluralize, specify both the singular and plural forms with the {% plural %} tag, which appears within {% blocktrans %} and {% endblocktrans %}. Example:
{% blocktrans count list|length as counter %} There is only one {{ name }} object. {% plural %} There are {{ counter }} {{ name }} objects. {% endblocktrans %}
Internally, all block and inline translations use the appropriate ugettext / ungettext call.
Each RequestContext has access to three translation-specific variables (LANGUAGES, LANGUAGE_CODE, and LANGUAGE_BIDI) as well as a couple of helper functions. Once you've tagged your strings for translation, you need the translations themselves, which live in message files. Django comes with a tool, django-admin.py makemessages, that automates the creation and upkeep of these files. To create or update a message file, run this command:
django-admin.py makemessages -l de

...where de is the language code for the message file you want to create.
When creating JavaScript translation catalogs (which we'll cover later in this chapter), you need to use the special 'djangojs' domain, not -e js.
No gettext?
If you don’t have the gettext utilities installed, django-admin.py django-admin makemessages works, see the “gettext on Windows” section below.py makemessages will have created a .po file containing the following snippet – a message:
#: path/to/python/module.py:23 msgid "Welcome to my site." msgstr ""
A quick explanation:

- msgid is the translation string, which appears in the source. Don't change it.
- msgstr is where you put the language-specific translation. It starts out empty, so it's your responsibility to change it.
To reexamine all source code and templates for new translation strings and update all message files for all languages, run this:
django-admin.py makemessages -a
If all you want to do is run Django with your native language, and a language file is available for your language, all you need to do is set LANGUAGE_CODE.
If you want to let each individual user specify which language he or she prefers, use LocaleMiddleware, which enables language selection based on data from the request. To use it, add 'django.middleware.locale.LocaleMiddleware' to your MIDDLEWARE_CLASSES setting. Because middleware order matters, place it after SessionMiddleware, since LocaleMiddleware makes use of session data:

MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.locale.LocaleMiddleware',
    'django.middleware.common.CommonMiddleware',
)
(For more on middleware, see Chapter 17.)
LocaleMiddleware tries to determine the user's language preference by following this algorithm:

- First, it looks for a django_language key in the current user's session.
- Failing that, it looks for a cookie.
- Failing that, it looks at the Accept-Language HTTP header.
- Failing that, it uses the global LANGUAGE_CODE setting.

If you provide your own translations for a language, you'll probably want to translate Django's technical messages for it – maybe the validation messages, too.
Technical message IDs are easily recognized; they're all upper case. You don't translate the message ID as with other messages; rather, you provide the correct local variant of the provided English value.
def hello_world(request):..
All message file repositories are structured the same way. They are:
To create message files, you use the same django-admin.py makemessages django-admin.py compilemessages to produce the binary django.mo files that are used by gettext.
You can also run django-admin.py compilemessages --settings=path.to.settings to make the compiler process all the directories in your LOCALE_PATHS setting. The same tools work for app-specific translations. But combining app-specific translations with project translations can produce weird problems with makemessages: django-admin.py makemessages run at the project level will only translate strings that are connected to your explicit project and not strings that are distributed independently.
As a convenience, Django comes with a view, django.views.i18n.set_language, that sets a user's language preference and redirects back to the previous page. Here's example HTML template code:
<form action="/i18n/setlang/" method="post"> <input name="next" type="hidden" value="/next/page/" /> <select name="language"> {% for lang in LANGUAGES %} <option value="{{ lang.0 }}">{{ lang.1 }}</option> {% endfor %} </select> <input type="submit" value="Go" /> </form>
Adding translations to JavaScript poses some problems:

- JavaScript code doesn't have access to a gettext implementation.
- JavaScript code doesn't have access to the .po or .mo files; they need to be delivered by the server.
- The translation catalogs for JavaScript should be kept as small as possible.
Django provides an integrated solution for these problems: It passes the translations into JavaScript, so you can call gettext, etc., from within JavaScript.
The main solution to these problems is the javascript_catalog view, which sends out a JavaScript code library with functions that mimic the gettext interface, plus an array of translation strings.
You create and update the translation catalogs the same way as the other
Django translation catalogs – with the django-admin.py makemessages tool. The only difference is you need to provide a -d djangojs parameter, like this:
django-admin.py makemessages -d djangojs -l de
This would create or update the translation catalog for JavaScript for German. After updating translation catalogs, just run django-admin.py compilemessages the same way as you do with normal Django translation catalogs.
If you know gettext, you might note these specialties in the way Django does translation:
You may also use gettext binaries you have obtained elsewhere, so long as the xgettext --version command works properly. Some version 0.14.4 binaries have been found to not support this command. Do not attempt to use Django translation utilities with a gettext package if the command xgettext --version entered at a Windows command prompt causes a popup window saying “xgettext.exe has generated errors and will be closed by Windows”.
The final chapter focuses on security – how you can help secure your sites and your users from malicious attackers.
Using Caches in Multi-Tier Applications: Best Practices
by Andrei Cioroianu
Learn how the strategic use of caching technology can improve the performance of your multitier applications, as well as how to keep multiple caches in synch across clustered environments.
Published July 2005
Multi-tier architectures help make complex enterprise applications manageable and scalable. Yet, as the number of servers and tiers increases, so does the communication among them, which can degrade overall application performance.
Using caching technology at strategic points across the multi-tier model can help reduce the number of back-and-forth communications. Furthermore, although cache repositories require memory and CPU resources, using caches can nonetheless lead to overall performance gains by reducing the number of expensive operations (such as database accesses and Web page executions). However, ensuring that caches retain fresh content and invalidate stale data is a challenge, and keeping multiple caches in synch, in clustered environments, is even more so..
Caching Frameworks
For a standalone application, it's fairly easy to implement your own caching mechanisms using the Java Collections Framework to create a repository for frequently used objects. For example, the Java Map data structure (available in the java.util package) works fine as long as you have a reasonably small number of objects that don't consume too much memory.
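A minimal roll-your-own cache in that spirit can lean on LinkedHashMap's access ordering for LRU eviction (a sketch only; it is not thread-safe and is unrelated to the Oracle frameworks discussed below):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A tiny LRU cache: LinkedHashMap keeps entries in access order when the
// third constructor argument is true, and removeEldestEntry() lets us
// evict the least recently used entry once we exceed the capacity.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true);   // true = order entries by access
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict when over capacity
    }
}
```

A get() counts as an access, so frequently read entries survive while cold ones are evicted first.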
However, creating your own cache for persistent objects or Web content, especially for a large, distributed application, is orders of magnitude more difficult than for a self-contained small application, because you must, among other things, bound the cache's memory use, invalidate stale entries, and keep multiple caches consistent.
Caching frameworks solve these problems, and many others. Let's take a quick look at several caching frameworks available from Oracle, in the context of the typical Web-based application architecture (Figure 1).
Oracle Caching Frameworks Overview
Oracle Web Cache, Web Object Cache, Java Object Cache, and Oracle
TopLink are not mutually exclusive, but rather complement each other; they are used in different tiers of an enterprise application, as shown in
Figure 1.
Web browsers connect through the Oracle Web Cache (perhaps via a proxy
cache) to the Web server that obtains its dynamic content from a JSP
container. Servlets and JSPs use Web Object Cache while the
application's business logic can cache frequently used objects with the
help of Java Object Cache. The business logic tier can access an
Oracle Database via Oracle TopLink, which caches data objects.
A Web caching framework facilitates quick retrieval of content in a
Web application environment. Retrieving static content is fairly simple; in fact, the HTTP protocol specification (RFC 1945) defines some inherent mechanisms for minimizing the communication between Web browsers and servers, by caching content on HTTP clients, or by using proxy caches.
HTTP caching works well for static content, but it cannot handle dynamic content that is personalized for each user. For caching dynamic content, you need a solution that caches page fragments and assembles documents from those fragments on the fly, which is the functionality provided by Oracle Web Cache, as follows:
Oracle Web Cache is an HTTP-level cache maintained externally to the application. It leverages the built-in caching capabilities of a generic Web server (that is, it is a reverse proxy cache) that supports caching static content (HTML, GIF, or JPG files, for example), but it can also cache dynamic content, including application data (such as SOAP responses).
Oracle Web Cache supports partial-page caching using a standard called Edge Side Includes (ESI), an XML-based language for defining template pages. (An ESI template page is used by an ESI processor, such as Oracle Web Cache, to assemble HTML documents from both cacheable and non-cacheable fragments.)
Unlike proxies and Web browser caches, Oracle Web Cache is designed to run on servers where Web applications and Web administrators can control the cache using APIs and tools. Oracle Web Cache is very fast, but you can't process the cached content (using Java code) before it's delivered. If you need to do that, you can use Oracle Web Object Cache (WOC).
Oracle Java Object Cache (JOC), which is a feature of Oracle Application Server Containers for J2EE (OC4J) 10g, is an easy-to-use, general caching framework that manages Java objects within a process, across processes, and on disk. The application specifies the cache's capacity and the maximum number of objects that can be kept in the cache.
JOC lets you define namespaces called regions, group objects within a region, store objects in the cache, retrieve the cached objects, and replace them at any time. You can specify a time-to-live for each object, and you can use event listeners to be notified when a cached object is invalidated. Since cached objects can be accessed concurrently, you should not modify them directly. Rather, you must create a private copy of a cached object, modify it, and then replace the cached object with the modified copy.
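The copy-modify-replace discipline can be illustrated with a plain ConcurrentHashMap standing in for the cache region (the real JOC API differs; only the pattern is the point here):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Cached value type; cloneable so callers can take private copies.
class Profile implements Cloneable {
    String displayName;
    Profile(String name) { this.displayName = name; }
    @Override public Profile clone() {
        try { return (Profile) super.clone(); }
        catch (CloneNotSupportedException e) { throw new AssertionError(e); }
    }
}

class ProfileRegion {
    private final ConcurrentMap<String, Profile> region = new ConcurrentHashMap<>();

    void put(String key, Profile p) { region.put(key, p); }
    Profile get(String key) { return region.get(key); }

    // Never mutate the cached instance in place, since other threads may be
    // reading it: clone it, modify the clone, then swap the clone in.
    void rename(String key, String newName) {
        Profile copy = region.get(key).clone();
        copy.displayName = newName;
        region.replace(key, copy);
    }
}
```

Readers holding a reference to the old object keep seeing a consistent (if slightly stale) value, while new lookups see the replacement.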
Oracle Web Object Cache (WOC), also an OC4J 10g feature, is a Java framework for caching Web content, serializable Java objects, and XML objects (DOM trees). It's an application-level cache for Java objects, running on the
same JVM that runs your Servlets and JSP pages.
WOC provides a Java API and a JSP tag library that you can use to manage the cached dynamic content and objects in the Web tier of your J2EE applications. JOC is the default cache repository of WOC, however, you can plug-in other cache repositories if you like.
Oracle TopLink provides caching and mapping frameworks; caching data retrieved from the database improves the application's
performance, while mapping relational data to objects, reduces the amount of
handwritten-code needed to query and update the database.
Oracle TopLink provides a Java API that you can use to construct
database-independent queries, which the framework translates into SQL
statements that take advantage of the features provided by each database
server. After executing a query, TopLink retrieves data from the result
set, and stores this data into objects, which are cached.
To update the database, you use the Oracle TopLink API to retrieve clones of the objects from the cache, and then you simply update the clones' properties using their get() and set() methods. Oracle TopLink does the heavy lifting, updating cached objects and generating the SQL statements that store the new data into the database.
Clustering Caches
When you scale an application across multiple servers or nodes, as
in distributed and clustered environments, you can also scale most of the
Oracle caches as well. Figure 2 shows the architecture of an application
that is deployed on multiple J2EE servers and is accessed through multiple
Oracle Web Cache nodes.
Oracle Web Cache Cluster functions as a single logical cache that partitions cached content across all nodes that comprise the cluster. A regular page will be cached on a single node, while "popular" content is automatically replicated across the cluster. The support for partitioning as well as replication results in better
performance and increased reliability for the Web Cache Cluster.
Oracle JOC as well as Oracle TopLink can
synchronize multiple caches running on different J2EE servers.
Another good example of an appropriate use of caching is Oracle BPEL Process Manager, which provides a framework for designing, deploying, monitoring, and administering BPEL processes. Oracle BPEL Process Manager can be deployed in a J2EE cluster, with an Oracle Web Cache as a load-balancing front-end to the cluster, routing requests to various nodes in the cluster. In addition, BPEL Manager uses a cache for the DOM trees that are saved in a "dehydration store" which essentially preserves the state of the long-running processes while they wait for asynchronous callbacks.
Suppose two concurrent users (user A and user B) are trying to update the same piece of data using a Web-based interface. Let's say User A submits the changed information first, and the application stores the information into the database. At this point, it is very possible that user B sees stale data in his Web browser, and changes to this data can override the modifications made by User A. Even if the application prevents concurrent users from accessing the same data, someone may see stale content if the user clicks the browser's Back button. These problems can lead to inconsistent information or loss of data if the application developer ignores them.
In the sections below, I outline a few strategies that ensure freshness of content served, thereby avoid the stale-data problem.
Using No-Cache Headers

Web browsers and proxies must cache static pages, JavaScripts, CSS files and images in order to minimize the network traffic. Caching dynamic content, however, can have undesired side effects, especially in the case of Web forms that contain data extracted from a database.
Fortunately, it is very easy to disable the HTTP caching, using the "Pragma: no-cache" and "Cache-Control: no-cache" headers defined by the HTTP/1.0 and HTTP/1.1 standards respectively. These headers can be set, for example, with a simple filter:
package caches;

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;
public class NoCacheFilter implements Filter {
private FilterConfig config;
public void init(FilterConfig config)
throws ServletException {
this.config = config;
}
public void doFilter(ServletRequest request,
ServletResponse response,
FilterChain chain)
throws IOException, ServletException {
HttpServletResponse httpResponse
= (HttpServletResponse) response;
httpResponse.addHeader("Pragma", "no-cache");
httpResponse.addHeader("Cache-Control", "no-cache");
chain.doFilter(request, response);
}
public void destroy() {
}
}
The filter can be configured in the web.xml file of your application for all JSP pages, for a subset of them, or just for the Web pages that use JSF and ADF Faces, as in the following example:
<filter>
<filter-name>NoCacheFilter</filter-name>
<filter-class>caches.NoCacheFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>NoCacheFilter</filter-name>
<servlet-name>FacesServlet</servlet-name>
</filter-mapping>
Caching Dynamic Content

As already mentioned, Oracle offers two complementary Web caching frameworks: Web Object Cache (WOC) and Oracle Web Cache. You should use WOC only when your cached content must be post-processed for each request, before delivery, using Java code. In most cases, however, you have page fragments or whole pages, which once generated, don't need any kind of post-processing.
Web Cache is the ideal solution for these, even if the cached content depends on request parameters or cookies. Web Cache will maintain a different content version for each set of parameters and will substitute cookies that are used for personalization or as session IDs. You just have to configure Web Cache properly and to identify the cacheable and non-cacheable fragments, marking them with the ESI tags. In JSP pages, you can use the Java Edge Side Includes (JESI) tag library, which generates the ESI markup.
JESI has two usage models: "control/include" and "template/fragment." When choosing control/include, you set the caching attributes of each page with <jesi:control> and you include content fragments with <jesi:include>. The <jesi:control> tag lets you specify whether the dynamic content generated by a JSP is cacheable or not. If it is cacheable, you can also specify an expiration time expressed in seconds as in the following example:
<%@taglib prefix="jesi"
uri="" %>
<jesi:control
<jesi:include
<br>
<jesi:include
<br>
<jesi:include
The control attributes of a cacheable page have no effect on the included pages, which must use the <jesi:control> tag too for specifying their expiration times. Web Cache will invoke the container page and the included pages separately, which means that these pages do not share the JSP request and response objects as in the case of <jsp:include>. Therefore, the container page and the included pages cannot communicate through attributes and beans stored in the JSP request scope.
The "template/fragment" usage model allows you to maintain all cacheable and non-cacheable fragments as well as the markup that glues them within the same JSP page. Web Cache will invoke a page that uses <jesi:template> and <jesi:fragment> multiple times for obtaining the template content and the fragments separately. This model is a bit more difficult to use, but it has the advantage that you don't have to split the content of your pages into multiple files.
Web Cache accepts requests for invalidating the cached content over the HTTP protocol. These requests use an XML-based format, but you don't have to build them yourself since Web Cache provides a Java API and JESI tags that allow you to specify the content from the cache that must be invalidated. You can also perform this operation manually, using the administration tools.
On Data Versioning and Locking Strategies
Browser caches, proxies, WOC and Web Cache improve the Web performance of your applications, but generate stale-data and stale-content problems, which can be minimized by using the NoCacheFilter filter and the cache-invalidation features of the frameworks.
These problems cannot be completely solved in the Web tier because an application running on a server cannot notify Web browsers when the content becomes stale. The best thing you can do is to make sure that stale data is not written into the database, using the optimistic-locking feature of Oracle TopLink, which supports data versioning.
Now that you have an understanding of some of the various caching approaches, let's take a look at some examples that save versioned objects into a database, retrieve these objects, and update or delete them. You'll also see the SQL statements that are generated and executed by TopLink.
Working with Persistent Objects

Oracle TopLink adds persistent objects to a shared session cache when a database query is executed or when a unit of work successfully commits a transaction. The identity maps that hold the cached objects can use strong, weak, or soft references, which determine if and when these objects are garbage-collected. In addition to the objects created by Oracle TopLink when a query is executed, the registerObject() method of UnitOfWork returns object clones that you use in your code to modify the properties of a persistent object.
Web frameworks, such as JSF, create and manage bean instances too. In order to use Oracle TopLink together with JSF, you need a method that copies the properties of a view bean created by JSF to an object clone returned by an Oracle TopLink method and vice versa. Apache's Commons BeanUtils provides such a method, which should be wrapped to make it easier to use:
package caches;
import org.apache.commons.beanutils.PropertyUtils;
import java.lang.reflect.InvocationTargetException;
public class MyUtils {
public static void copy(Object source, Object dest) {
try {
PropertyUtils.copyProperties(dest, source);
} catch (IllegalAccessException x) {
throw new RuntimeException(x);
} catch (InvocationTargetException x) {
throw new RuntimeException(
x.getTargetException());
} catch (NoSuchMethodException x) {
throw new RuntimeException(x);
}
}
}
Using Optimistic Locking

There are two ways to prevent concurrent units of work (transactions) from modifying the same persistent object:
With pessimistic locking, a database row is locked while somebody uses a Web form to update it. One of the problems with pessimistic locking is that rows can be locked for an indefinite amount of time: for example, a user leaves a page without clicking the submit button, the browser crashes, or there's a network problem.
With optimistic locking, Oracle TopLink verifies whether the row's data was modified since it was retrieved from the database. One way to do that is to compare all or some of the original object's fields with the row's data. This solution ensures that nobody else modified the object during the unit of work, which works well if the object is modified in the business logic tier without the involvement of any user interface.
When a Web-based interface is used to update the object, a unit of work can't wait until the user clicks a submit button for the same reasons that make pessimistic locking impractical. The only solution is to use a version field that is incremented each time the row is updated. The application gets the current version when it reads the object, and then it passes the version to the Web browser as a hidden form field. This can be done very easily with JSF:
<h:inputHidden value="#{myBean.version}" />
When the Web browser submits the form data, JSF stores it into a view bean together with the version from the hidden form field. Then, Oracle TopLink compares the version of the modified data with the row's version. If they don't coincide, Oracle TopLink throws an OptimisticLockException. Otherwise, the row is updated and the version is incremented. The following examples show how this works, using a simple bean that has an id property acting as a primary key, a version property, and a single data property:
package caches;
public class MyBean {
private String id;
private int version;
private String data;
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
...
}
In your own applications, you can use as many data properties as you need, and you can rename the id and version properties as you like.
Saving New Objects

To store a bean into a database with Oracle TopLink, you have to acquire a client session and a unit of work. Then, you create a new bean instance and register it with the unit of work using the registerObject() method, which returns a clone that can be used for editing.
After copying the view bean's properties to the clone with MyUtils.copy(), you call the commit() method, which saves all changes of the clone into the database. This means that Oracle TopLink inserts the view bean's properties into the database. In addition, Oracle TopLink sets the version of the clone object and saves the new version number. Therefore, you have to update the version property of the view bean:
public void insert(MyBean viewBean) {
ClientSession session = server.acquireClientSession();
UnitOfWork uow = session.acquireUnitOfWork();
MyBean newBean = new MyBean();
MyBean beanClone = (MyBean) uow.registerObject(newBean);
MyUtils.copy(viewBean, beanClone);
uow.commit();
viewBean.setVersion(beanClone.getVersion());
}
Oracle TopLink generates and executes a single INSERT statement:
INSERT INTO MyTable (id, version, data)
VALUES ('someID', 1, 'someData')
Retrieving Objects

You can use Oracle TopLink's query API to read a persistent object from the database. The ReadObjectQuery class lets you select a single object, while ReadAllQuery allows you to obtain a collection of objects. In both cases, an expression builder can be used to define the WHERE clause of the SELECT statement. In this example, the id field must be equal to the id parameter:
private MyBean read(Session session, String id) {
ReadObjectQuery query
= new ReadObjectQuery(MyBean.class);
ExpressionBuilder myBean = new ExpressionBuilder();
query.setSelectionCriteria(
myBean.get("id").equal(id));
return (MyBean) session.executeQuery(query);
}
The read() method is used to select an object from the database, but it's also helpful when you want to update or delete an existing object. The select() method acquires a client session and calls read():
public MyBean select(String id) {
ClientSession session = server.acquireClientSession();
return read(session, id);
}
Oracle TopLink tries to get the persistent object from the cache. If the cache doesn't contain the object, the following SELECT statement is used to retrieve the persistent object from the database:
SELECT id, version, data FROM MyTable
WHERE (id = 'someID')
Updating Existing Objects

When you want to update a persistent object, you can use the same read() method that you use to select the object. Then, you acquire a unit of work and obtain a clone of the persistent object. Now you can modify the clone's properties, using, for example, MyUtils.copy().
As explained earlier, commit() throws an OptimisticLockException if the object's version isn't the same as the version of the row from the database. In this case, you can refresh the properties of the bean with refreshObject():
public void update(MyBean viewBean) {
ClientSession session = server.acquireClientSession();
MyBean cachedBean = read(session, viewBean.getId());
UnitOfWork uow = session.acquireUnitOfWork();
MyBean beanClone = (MyBean) uow.registerObject(cachedBean);
MyUtils.copy(viewBean, beanClone);
try {
uow.commit();
viewBean.setVersion(beanClone.getVersion());
} catch (OptimisticLockException x) {
Object staleBean = x.getObject();
Object freshBean = session.refreshObject(staleBean);
MyUtils.copy(freshBean, viewBean);
throw x;
}
}
Oracle TopLink executes the following UPDATE statement:
UPDATE MyTable SET data = 'modifiedData', version = 2
WHERE ((id = 'someID') AND (version = 1))
At the next update, the SQL statement looks like this:
UPDATE MyTable SET data = 'changedData', version = 3
WHERE ((id = 'someID') AND (version = 2))
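The pattern these statements implement (the UPDATE succeeds only when the stored version matches the one the client read, and each successful write increments the version) can be sketched in a few lines of Python. This is a language-neutral illustration only; TopLink performs the equivalent check inside the generated SQL:

```python
class OptimisticLockError(Exception):
    """Raised when the stored version no longer matches the one we read."""
    pass

# Toy in-memory stand-in for one database row; TopLink does the same
# check via "UPDATE ... WHERE id = ? AND version = ?" instead.
row = {"id": "someID", "version": 1, "data": "someData"}

def update(expected_version, new_data):
    if row["version"] != expected_version:
        raise OptimisticLockError("row was modified since it was read")
    row["data"] = new_data
    row["version"] += 1  # every successful write bumps the version
    return row["version"]

update(1, "modifiedData")   # succeeds; version becomes 2
# update(1, "changedData")  # would raise: the stored version is now 2
```

A second writer that read version 1 but commits after the version has moved to 2 fails its WHERE clause, which is exactly how the conflict surfaces as an OptimisticLockException.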
If an OptimisticLockException occurs, you can signal the error to the user with the JSF API:
import oracle.toplink.exceptions.OptimisticLockException;
import javax.faces.application.FacesMessage;
import javax.faces.context.FacesContext;
...
try {
myDAO.update(myBean);
}
catch (OptimisticLockException x) {
FacesContext context
= FacesContext.getCurrentInstance();
FacesMessage message
= new FacesMessage(x.getMessage());
message.setSeverity(FacesMessage.SEVERITY_FATAL);
context.addMessage(null, message);
}
This code shows the exception's message in the Web page, which is useful for verifying that optimistic locking works. Before deploying the application on a production server, you would have to replace the exception's message with an error message that users can understand.
Deleting Objects

It is very easy to delete persistent objects. You just have to get them as in the previous examples, and then you call deleteObject():
public void delete(MyBean viewBean) {
ClientSession session = server.acquireClientSession();
UnitOfWork uow = session.acquireUnitOfWork();
MyBean cachedBean = read(uow, viewBean.getId());
uow.deleteObject(cachedBean);
uow.commit();
}
Here is the DELETE statement executed by Oracle TopLink:
DELETE FROM MyTable
WHERE ((id = 'someID') AND (version = 3))
Summary
Caches allow you to deploy complex enterprise applications with large user bases on commodity hardware. Managing caches, however, is not a trivial task. Therefore, instead of building your own caching mechanisms, you should use reliable caching frameworks that are easy to use.
This article presented some of the frameworks developed by Oracle, explaining where and when to use each of them. Java Object Cache is a general solution that can be used in the business logic tier, but it is also useful in a JSP container as a cache repository for Web Object Cache. Specialized solutions, such as Oracle TopLink and Web Cache, provide many additional benefits in the persistence and presentation tiers, such as object-relational mapping and Web monitoring. Keep in mind that you don't have to use all these frameworks within the same application. In many cases, only one or two of them provide enough performance gains.
How do you choose which restaurant to eat at in a mall with 100+ restaurants?
Competition in the food industry is fierce, and these businesses are looking at different methods of getting the edge! Some companies have started using Bluetooth as an advertisement medium. The concept is very simple: you walk past a restaurant and your phone's Bluetooth is turned on. The restaurant has a system which can detect Bluetooth devices in close proximity and then transmit an advertisement to this device. Now you can get notified of specials that this restaurant is having for the day!
Using InTheHand, this might not be such a difficult system to implement!
Include the correct namespaces
using InTheHand.Net;
using InTheHand.Net.Bluetooth;
using InTheHand.Windows.Forms;
Detect the primary Bluetooth radio
BluetoothRadio br = BluetoothRadio.PrimaryRadio;
br.Mode = RadioMode.Discoverable;
Find a device close by
SelectBluetoothDeviceDialog sbdd = new SelectBluetoothDeviceDialog();
sbdd.ShowAuthenticated = true;
sbdd.ShowRemembered = true;
sbdd.ShowUnknown = true;
if (sbdd.ShowDialog() == DialogResult.OK)
{
// MORE CODE NEEDED HERE
}
This function creates a dialog box from which you can select the desired Bluetooth device to send the specials to. A fully automated system would need to bypass this step and automatically detect new devices.
Finally, we need to send the specials.jpg using OBEX File Push
System.Uri uri = new Uri("obex://" + sbdd.SelectedDevice.DeviceAddress.ToString() + "/" + System.IO.Path.GetFileName(@"C:\specials.jpg"));
ObexWebRequest request = new ObexWebRequest(uri);
request.ReadFile(@"C:\specials.jpg");
ObexWebResponse response = (ObexWebResponse)request.GetResponse();
response.Close();
This code is very basic and still needs some adjustments...
Also have a look at my previous article on using Bluetooth OBEX Listener
This sounds very cool. I would very much like to see more, especially the auto discovery.
Tnx, I will do a post on the auto discovery...
I may just be an old curmudgeon, but if my phone starts going haywire because it's been Bluetoothed by a restaurant, I'd NEVER EVER eat there.
Spam by another name...
True, but in Menlyn there are huge boards that warn you to turn off your Bluetooth if you don't want the adverts?
And remember, the advert can also be a text message or an animated gif
The following code can be used to enumerate the available Bluetooth devices
BluetoothClient bc = new InTheHand.Net.Sockets.BluetoothClient();
BluetoothDeviceInfo[] array = bc.DiscoverDevices();
foreach (BluetoothDeviceInfo bti in array)
{
//bti.DeviceAddress
}
Nice one Rudi, please keep this up, I would like to explore using it in respective OBA RAP/SharePoint solution offerings and also include it in one of my future community presentations.
Thank you for the nice comments...
If I look at my blog's stats I can see that Bluetooth is a very popular topic...
I am going to convert the 2 combined articles into 1 comprehensive codeproject article...
Will post the link!
Would be very interesting to see it being used in an OBA RAP/SharePoint solution!!!
For the last week I was back in WinForms land... Nothing makes you appreciate the new stuff in WPF more than living without it for a week!!!
Let me start by giving you some background... I had to make a fairly simple data logger for a client... the only problem is that I only had WinForms to use... I constantly found myself wondering why the hell they don't have a content model!!!
So, what is this content model and why should I care? In WinForms you have a button... a very simple control with a specific purpose: allow people to press it and perform an action based on this click! As you would expect, the button has a text property. This represents the text on the button's face... Ok, now I need a picture inside the button... Ok, so let's add an image property... That will work... but what if I need to add a circle to this button... or even better, another button (to create a split button)... now the wheels start coming off!!! Yes, some of this is possible in WinForms, but it just feels like an afterthought! WPF is designed with these types of scenarios in mind! This is where the content model starts coming in...
Any control derived from ContentControl has a property called Content. Content is of type object, which implies that the content can be of any type... This is huge... You can potentially place anything inside a ContentControl! So, let's look at a few examples; let's create a button
<Button />
This just creates a very basic button; let's start slow:
<Button Content="Hallo World!" />
As expected, this creates a button with a string inside: "Hallo World!". We will come back to this example because there is more underneath the surface... Have a look at the following button
<Button>
Hallo World
</Button>
Ok, so this creates exactly the same results as the previous button! What can we deduce from this? We added the content here as if it was the child of the button... how does it know that it is content and not a child? The button's source code possibly looks something like this:
[ContentProperty("Content")]
public class Button : ButtonBase
{
...
}
Ok, so what's next... Let's try adding a visual element as my content
<Button>
<Ellipse Width="20" Height="20" Fill="Black" />
</Button>
This is nice... now I can add shapes to my button... or can I? We have now hit one of the "limitations" of ContentControl... Content can only be a single object! This is easy to bypass... just add a panel with your objects as children. Let's try adding 2 circles
<Button>
<StackPanel>
<Ellipse Width="20" Height="20" Fill="Black" />
<Ellipse Width="20" Height="20" Fill="Black" />
</StackPanel>
</Button>
So now we can add endless objects inside a ContentControl... (Remember that this is not "free" and there is a performance hit if you add too many controls). Let's look at the string example again. We added an object of type string as my content... how did it know how to display it? We will look at the rules of how this gets resolved in more detail later... but for now, all I will say is that if it is not a visual element, then it calls the ToString() of this object to render it!
Let's make it a little more complicated; here is my button
<Button x:Name="MyButton" />
All I did was give my button a name... the next step will happen in the code-behind... I created a very simple Person class
public class Person
{
public string Name { get; set; }
public string Surname { get; set; }
}
And then add the following in the Loaded event
Person me = new Person() { Name = "Rudi", Surname = "Grobler" };
MyButton.Content = me;
What will be displayed now? Same rules still apply... ToString() returned WpfApplication1.Person. Ok, let's override the ToString()
public override string ToString()
{
return Name + " " + Surname;
}
Now my button shows "Rudi Grobler". Great, so if it is not a visual element, then it uses the ToString() of the object!
For whatever reason, I want a circle in between the name and surname (try this with WinForms?).
<Button x:Name="MyButton">
<Button.ContentTemplate>
<DataTemplate>
<StackPanel Orientation="Horizontal">
<TextBlock Text="{Binding Name}" />
<Ellipse Width="15" Height="15" Fill="Black" />
<TextBlock Text="{Binding Surname}" />
</StackPanel>
</DataTemplate>
</Button.ContentTemplate>
</Button>
WOW!!!
Let's break this down... Now the ToString() gets ignored. Why? DataTemplate!!!
What is a DataTemplate?
A data template is a piece of UI that you'd like to apply to an arbitrary .NET object when it is rendered.
Ok, what have we learned about the ContentControl so far?
1) If the ContentControl has a DataTemplate defined, use it
2) If the content is a visual element (derived from UIElement), render it using UIElement.OnRender()!
3) DataTemplate with matching DataType found (up the visual tree), use it
4) Use the objects ToString()
ContentControl is amazing...
So, what else does a ContentControl have? It has a property called HasContent... Why does it have a property called HasContent if I can just as easily check by using Content == null?
This is the WPF way... This allows easy checking for content from XAML... Now you can animate based on whether a control has content, etc.
What if my Content is a collection of .NET managed objects? Have a look at the ItemsControl... Dr WPF has an excellent series covering the ItemsControl in detail (or as he puts it, from A to Z)
ItemsControl - 'A' is for Abundance
ItemsControl - 'B' is for Bet You Can't Find Them All
ItemsControl - 'C' is for Collection
ItemsControl- 'D' is for DataTemplate
ItemsControl- 'P' is for Panel
ItemsControl- 'I' is for Item Container
ItemsControl- 'R' is for Rob has a Customer
If you need to see the power of the content model, read 'R' is for Rob has a Customer and check out the split button he created!
I agree the Content Model is simple beautiful
Haven't delved into the WPF world yet, though I have played with Silverlight.
This is an interesting look at some of the mechanisms involved, thanks.
I believe you mean "Why WPF Rocks..."
You're basically stating/asking why was the bicycle invented first when a car is vastly superior!?!
think evolution and necessity
Hi Brent, tnx for the english lesson :)
ahura mazda, the idea is actually to highlight where WPF made huge improvements!
a convention followed by the caller and callee. In addition to that, the CLR manages its own stack of frames to mark transitions in the stack, for example unmanaged to native calls, security asserts, and uses the information to mark the addresses of GC roots that are active in the call stack. These are stored
We'll use this set of types in our examples below:
using System;
using System.Runtime.CompilerServices;
class Foo
{
[MethodImpl(MethodImplOptions.NoInlining)]
public int f(string s, int x, int y)
{
Console.WriteLine("Foo::f({0},{1},{2})", s, x, y);
return x*y;
}
[MethodImpl(MethodImplOptions.NoInlining)]
public virtual int g(string s, int x, int y)
{
Console.WriteLine("Foo::g({0},{1},{2})", s, x, y);
return x+y;
}
}
class Bar : Foo
{
[MethodImpl(MethodImplOptions.NoInlining)]
public override int g(string s, int x, int y)
{
Console.WriteLine("Bar::g({0},{1},{2})", s, x, y);
return x-y;
}
}
delegate int Baz(string s, int x, int y);
Furthermore, we'll imagine the following variables are in scope for examples below:
Foo f = new Foo();
Bar b = new Bar();
The CLR's jitted code uses the fastcall Windows calling convention. This permits the caller to supply the first two arguments (including this in the case of instance methods) in the machine's ECX and EDX registers. Registers are significantly faster than using the machine's stack, which is where the remaining arguments are supplied, in right-to-left order (using the push instruction).
40. Empty model tests
This example is for Django's SVN release, which can be significantly different from previous releases. Get old examples here: 0.96, 0.95.
These test that things behave sensibly for the rare corner-case of a model with no fields.
Model source code
from django.db import models

class Empty(models.Model):
    pass
Sample API usage
This sample code assumes the above model has been saved in a file mysite/models.py.
>>> from mysite.models import Empty
>>> m = Empty()
>>> m.id
>>> m.save()
>>> m2 = Empty()
>>> m2.save()
>>> len(Empty.objects.all())
2
>>> m.id is not None
True
>>> existing = Empty(m.id)
>>> existing.save()
This document is for Django's SVN release, which can be significantly different from previous releases. Get old docs here: Django 1.0
Managing files
This document describes Django's file access APIs. When you access a file attached to a model (for example, the photo field on the Car model shown later in this document), you get a file object with name, path, and url attributes:

>>> car.photo
<ImageFieldFile: chevy.jpg>
>>> car.photo.name
u'cars/chevy.jpg'
>>> car.photo.path
u'/media/cars/chevy.jpg'
>>> car.photo.url
u''
This object -- car.photo in the example -- is a File object, which means it has all the methods and attributes described below.
The File object
Internally, Django uses a django.core.files.File any time it needs to represent a file. This object is a thin wrapper around Python's built-in file object with some Django-specific additions.
Most of the time you'll simply use a File that Django gives you (i.e. a file attached to a model as above, or perhaps an uploaded file). Django also exposes a default storage system that you can use directly:

>>> from django.core.files.storage import default_storage
>>> from django.core.files.base import ContentFile
>>> path = default_storage.save('/path/to/file', ContentFile('new content'))
>>> path
u'/path/to/file'
>>> default_storage.size(path)
11
>>> default_storage.open(path).read()
'new content'
>>> default_storage.delete(path)
>>> default_storage.exists(path)
False
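The storage calls above (save, size, open, delete, exists) can be mimicked with a toy in-memory class. This is an illustration of the API shape only, not a real Django storage backend (a real one subclasses django.core.files.storage.Storage):

```python
class InMemoryStorage:
    """Toy stand-in for the Storage API surface (save/open/exists/size/delete)."""

    def __init__(self):
        self._files = {}

    def save(self, name, content):
        # Like Django's storages, never overwrite: pick a new name on collision.
        candidate, counter = name, 1
        while candidate in self._files:
            candidate = "%s_%d" % (name, counter)
            counter += 1
        self._files[candidate] = content
        return candidate  # the name actually used, as save() returns in Django

    def exists(self, name):
        return name in self._files

    def size(self, name):
        return len(self._files[name])

    def open(self, name):
        return self._files[name]

    def delete(self, name):
        self._files.pop(name, None)
```

Swapping the dict for real disk or network I/O is essentially what FileSystemStorage and third-party backends do.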
See File storage API for the file storage API.
The built-in filesystem storage class
Django ships with a built-in FileSystemStorage class (defined in django.core.files.storage) which implements basic local filesystem file storage. Its initializer takes two arguments:

- location: optional; the absolute path to the directory that will hold the files. If omitted, it defaults to the value of your MEDIA_ROOT setting.
- base_url: optional; the URL that serves the files stored at this location. If omitted, it defaults to the value of your MEDIA_URL setting.
For example, the following code will store uploaded files under /media/photos regardless of what your MEDIA_ROOT setting is:
from django.db import models
from django.core.files.storage import FileSystemStorage

fs = FileSystemStorage(location='/media/photos')

class Car(models.Model):
    ...
    photo = models.ImageField(storage=fs)
Custom storage systems work the same way: you can pass them in as the storage argument to a FileField.
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkplot
For adult male residents of the US, the mean and standard deviation of height are 178 cm and 7.7 cm. For adult female residents the corresponding stats are 163 cm and 7.3 cm. Suppose you learn that someone is 170 cm tall. What is the probability that they are male?
Run this analysis again for a range of observed heights from 150 cm to 200 cm, and plot a curve that shows P(male) versus height. What is the mathematical form of this function?
To represent the likelihood functions, I'll use norm from scipy.stats, which returns a "frozen" random variable (RV) that represents a normal distribution with given parameters.
from scipy.stats import norm

dist_height = dict(male=norm(178, 7.7), female=norm(163, 7.3))
Write a class that implements Likelihood using the frozen distributions. Here's starter code:
class Height(Suite):
    def Likelihood(self, data, hypo):
        """
        data: height in cm
        hypo: 'male' or 'female'
        """
        return 1
# Solution goes here
Here's the prior.
suite = Height(['male', 'female'])
for hypo, prob in suite.Items():
    print(hypo, prob)
And the update:
suite.Update(170)
for hypo, prob in suite.Items():
    print(hypo, prob)
Compute the probability of being male as a function of height, for a range of values between 150 and 200.
# Solution goes here
# Solution goes here
If you are curious, you can derive the mathematical form of this curve from the PDF of the normal distribution.
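For reference, here is a sketch of that derivation, using $\mu_m, \sigma_m$ and $\mu_f, \sigma_f$ for the male and female parameters and assuming equal priors. The posterior odds equal the likelihood ratio of the two normal densities:

```latex
\frac{P(\text{male} \mid h)}{P(\text{female} \mid h)}
  = \frac{\mathcal{N}(h; \mu_m, \sigma_m)}{\mathcal{N}(h; \mu_f, \sigma_f)}

L(h) = \log \frac{\mathcal{N}(h; \mu_m, \sigma_m)}{\mathcal{N}(h; \mu_f, \sigma_f)}
     = \log\frac{\sigma_f}{\sigma_m}
       + \frac{(h-\mu_f)^2}{2\sigma_f^2}
       - \frac{(h-\mu_m)^2}{2\sigma_m^2}

P(\text{male} \mid h) = \frac{1}{1 + e^{-L(h)}}
```

So the curve is a logistic (sigmoid) function of the log-odds $L(h)$, which is quadratic in $h$ because $\sigma_m \neq \sigma_f$; with equal standard deviations it would reduce to a logistic function of $h$ itself.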
Suppose I choose two residents of the U.S. at random. A is taller than B. How tall is A?
What if I tell you that A is taller than B by more than 5 cm? How tall is A?
For adult male residents of the US, the mean and standard deviation of height are 178 cm and 7.7 cm. For adult female residents the corresponding stats are 163 cm and 7.3 cm.
Here are distributions that represent the heights of men and women in the U.S.
dist_height = dict(male=norm(178, 7.7), female=norm(163, 7.3))
hs = np.linspace(130, 210)
ps = dist_height['male'].pdf(hs)
male_height_pmf = Pmf(dict(zip(hs, ps)));

ps = dist_height['female'].pdf(hs)
female_height_pmf = Pmf(dict(zip(hs, ps)));

thinkplot.Pdf(male_height_pmf, label='Male')
thinkplot.Pdf(female_height_pmf, label='Female')
thinkplot.decorate(xlabel='Height (cm)',
                   ylabel='PMF',
                   title='Adult residents of the U.S.')
Use thinkbayes2.MakeMixture to make a Pmf that represents the height of all residents of the U.S.
# Solution goes here
# Solution goes here
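Conceptually, MakeMixture weights each component distribution by its probability and sums. Here is a simplified sketch using plain dicts instead of Pmf objects (an illustration, not the library source):

```python
def make_mixture(components):
    """components: list of (pmf, weight) pairs, where each pmf is a dict
    mapping value -> probability. Returns the mixture as a plain dict."""
    mix = {}
    for pmf, weight in components:
        for value, prob in pmf.items():
            mix[value] = mix.get(value, 0.0) + weight * prob
    return mix

# Toy height pmfs mixed with weights close to the adult population split.
male = {170: 0.6, 180: 0.4}
female = {160: 0.7, 170: 0.3}
mixture = make_mixture([(male, 0.49), (female, 0.51)])
# mixture[170] == 0.49*0.6 + 0.51*0.3, and the mixture still sums to 1
```

If the weights and each component sum to 1, the mixture does too, which is why it is itself a valid Pmf.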
Write a class that inherits from Suite and Joint, and provides a Likelihood function that computes the probability of the data under a given hypothesis.
# Solution goes here
Write a function that initializes your Suite with an appropriate prior.
# Solution goes here
suite = make_prior(mix)
suite.Total()
thinkplot.Contour(suite)
thinkplot.decorate(xlabel='B Height (cm)',
                   ylabel='A Height (cm)',
                   title='Posterior joint distribution')
Update your Suite, then plot the joint distribution and the marginal distribution, and compute the posterior means for A and B.
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Almost all programming languages offer some features to repeat a certain task. These features are known as loops. For example, if you want to print student names out of the list, you can use a loop instead of using a print statement every time for each student. In this article, we are going to discuss the do-while loop in python.
A do while loop in Python will run or execute code while the condition remains true. It has a simple syntax.
Syntax
#Syntax of a do-while loop (in languages that have one; Python does not)
do {
    loop body
} while (condition);
The flowchart of do while loop in python
To understand things better, you can follow the flowchart of the loop.
Here, you can see that the loop executes the code while the condition is true and it will stop execution when the condition turns False. If the condition doesn’t turn false, the loop will be a never-ending or infinite loop.
This type of loop is called a do-while loop in other programming languages. In Python, it is emulated with a while loop.
A simple Python do while loop
Well, we are now good with some definitions and working of the loop along with its flowchart. Now, let’s take a simple do while loop example to understand things better.
#Simple example of do while loop
i = 1
while True:
    print(i)
    i = i + 1
    if (i > 4):
        break

1
2
3
4
Even though Python doesn't explicitly have the do-while loop, you can easily emulate it as shown above.
Keep these things clear about loop termination; the loop ends:
- When the condition controlling the loop turns false.
- When we use the break statement.
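One property worth keeping in mind: a true do-while construct always executes its body at least once, even when the condition is false from the start, and the emulation shown above preserves that guarantee:

```python
# The condition (x < 0) is false from the very start, yet the body
# still runs once before the bottom-of-loop check terminates it.
x = 10
iterations = 0
while True:
    iterations += 1        # body executes unconditionally first
    if not (x < 0):        # the "while (x < 0)" check, at the bottom
        break
print(iterations)  # prints 1; a plain "while x < 0:" would never run the body
```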
Let’s put it all together
Till now, we have seen the working of do-while loops in Python. Now, let's take up a problem statement, and using the do-while loop we can code the scenario. Interesting? Let's roll!!!
We are going to create a magic number game –
- The number should be automatically generated.
- User will get 3 attempts to guess the number.
- If the guessed number is right, then a message will be displayed.
import random

magic_num = random.randint(1, 10)
attempts = 0
while attempts < 3:
    print("Guess a number between 1 and 10: ")
    guess = int(input())
    attempts = attempts + 1
    if guess == magic_num:
        print('YAY, You have guessed a magic number!')
        break
Guess a number between 1 and 10: 5
Guess a number between 1 and 10: 9
Guess a number between 1 and 10: 1
YAY, You have guessed a magic number!
Yes, we have guessed the magic number in our third attempt. Not bad!
In this code, we have imported the random module to generate a random number between 1 and 10. Then we have defined a variable to store the random number. Then we have defined a do-while-style loop, which counts the user's attempts. If the attempts go over 3, the loop stops executing.
Key points
- Python doesn’t explicitly have a do while loop, but we can emulate it easily.
- The loop will execute the code and then checks the condition.
- It will execute if the condition is true.
- Loop will be terminated if the condition turns false.
- A simple syntax and easily applicable loop like others.
Ending note
Do-while loops are available directly in other programming languages, but in Python you can use a while loop, which will get your job done. You can follow the syntax, flowchart, and examples to make things clearer and easier at your end.
That’s all for now, Happy Python!!!
More reading: Stack Overflow
Convert Text to Speech using Python
In this article we will discuss how to convert text to speech using Python.
Table of contents
- Introduction
- Basic text to speech conversion
- Changing voice
- Changing speech rate
- Changing volume
- Saving speech as mp3 file
- Conclusion
Introduction
Text-to-speech (TTS) conversion, along with speech synthesis, has become increasingly popular with the growth of programming communities.
There are currently several Python libraries that provide this functionality; they are continuously maintained and have new features added to them.
To continue following this tutorial we will need the following Python library: pyttsx3.
If you don’t have it installed, please open “Command Prompt” (on Windows) and install it using the following code:
pip install pyttsx3
Basic text to speech conversion using Python
The basic functionality of this library is very simple to use. All we are required to do is import the library and initialize the speech engine, have the text in the string format, and execute the text to speech process:
import pyttsx3

engine = pyttsx3.init()
engine.say('This is a test phrase.')
engine.runAndWait()
What you will hear at the default settings is a female voice that pronounces the phrase quite fast. For the cases when you want to change the voice, the speech rate, or the volume, the library provides a lot of flexibility.
The engine instance of the class we initialized has the .getProperty() method which will help us adjust the current default settings to the ones we want.
Now you can start to explore more features and learn more about how to convert text to speech using Python.
Changing voice
The pyttsx3 library has two types of voices included in the default configuration: a male voice and a female voice.
These can be retrieved by simply running:
voices = engine.getProperty('voices')
print(voices)
What you should get in return is a list that has the local memory locations of each voice. Now we want to try each of them, and we simply run the text to speech basic usage code through a loop:
for voice in voices:
    engine.setProperty('voice', voice.id)
    engine.say('This is a test phrase.')
    engine.runAndWait()
One observation: the male voice is stored in the list at index 0 and the female voice at index 1.
To set the voice as a permanent setting, the engine instance of the class we initialized has the .setProperty() method. It will allow us to specify which of the two voices the code should use.
Let’s say I want to permanently change the voice to male’s (remember it’s at index 0):
engine.setProperty('voice', voices[0].id)
Now every phrase you will try to run through using the initialized engine will always have the male voice.
Changing speech rate
After we changed the voice, we may want to adjust the speech rate of how fast each phrase is being said.
Using the known .getProperty() method we will first find out what the current speech rate is:
rate = engine.getProperty('rate')
print(rate)
For the default settings the rate showed to be 200 (which should be in words per minute).
When I listened to the engine initially I thought it was too fast, so I would like to decrease the words per minute rate to let’s say 125. Similarly to setting the voice, we will use setProperty() method to work with the speech rate and test it:
engine.setProperty('rate', 125)
engine.say('This is a test phrase.')
engine.runAndWait()
You should hear significantly slower speech that is more comfortable to listen to.
In another case, if you feel that the speech rate is too low you can always adjust it and generally just keep trying different values until you find the one that you are satisfied with.
Changing volume
Similarly to the speech rate adjustment, we can alter the volume of the voice we set.
Using the known .getProperty() method we will first find out what the current volume is:
volume = engine.getProperty('volume')
print(volume)
For the default settings the volume showed to be 1.0 (which is the maximum; the range is between 0 and 1).
You can basically choose any value between 0 and 1 to see how the volume changes. Similarly to setting the speech rate, we will use setProperty() method to work with the volume and test it:
engine.setProperty('volume', 0.5)
engine.say('This is a test phrase.')
engine.runAndWait()
Here we set the volume to be half of what it was before and notice the difference when we listen to the test phrase.
Such a setting allows for great flexibility with adjustments depending on the narrative based on the use of your text to speech conversion.
Saving speech as mp3 file using Python
Another wonderful functionality provided by this library is the ability to store our text to speech conversions as mp3 files, which can be listened to later in any audio player. You can of course alter the destination by specifying it in the output file path.
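The code snippet for this step is missing from the page; a minimal sketch using pyttsx3’s save_to_file method would look like this (the helper function and file names below are my own, not from the original post):

```python
import os

def build_output_path(directory, name):
    """Build the destination path for the mp3 file (illustrative helper)."""
    return os.path.join(directory, name + ".mp3")

def save_phrase_to_mp3(text, path):
    """Render text to speech with pyttsx3 and store it at path
    instead of playing it through the speakers."""
    import pyttsx3  # imported here so the path helper stays usable on its own
    engine = pyttsx3.init()
    engine.save_to_file(text, path)
    engine.runAndWait()

# Example call (writes ./test.mp3):
# save_phrase_to_mp3('This is a test phrase.', build_output_path('.', 'test'))
```

Running the commented-out call would produce test.mp3 in the current working directory, which you can then open in any audio player.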
Conclusion
In this article we discussed how to convert text to speech using Python.
By working through this code, you should be able to convert full texts to speech with the required adjustments.
I also encourage you to check out my other posts on Python Programming.
You can learn more about the pyttsx3 library here.
Feel free to leave comments below if you have any questions or have suggestions for some edits.
3. Explain Java Exception Hierarchy?
Java Exceptions are hierarchical and inheritance is used to categorize different types of exceptions.
Throwable is the parent class of the Java Exceptions hierarchy and it has two direct subclasses – Error and Exception.
Runtime Exceptions are caused by bad programming, for example trying to retrieve an element from an array using an invalid index. We should check the length of the array before trying to retrieve the element, otherwise it might throw ArrayIndexOutOfBoundsException at runtime.
RuntimeException is the parent class of all runtime exceptions.
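The array scenario above can be sketched in a few lines (a minimal illustration of my own; the class and method names are not from the original answer):

```java
// Guarding array access instead of letting an
// ArrayIndexOutOfBoundsException escape at runtime.
public class ArrayAccessDemo {

    // Returns the element at index, or fallback when index is out of range.
    public static int getOrDefault(int[] values, int index, int fallback) {
        if (index < 0 || index >= values.length) {
            return fallback;
        }
        return values[index];
    }

    public static void main(String[] args) {
        int[] values = {10, 20, 30};
        System.out.println(getOrDefault(values, 1, -1));
        System.out.println(getOrDefault(values, 5, -1));
    }
}
```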
6. What is difference between Checked and Unchecked Exception in Java?
- Checked Exceptions should be handled in the code using a try-catch block, or else the method should use the throws keyword to let the caller know about the checked exceptions that might be thrown from the method. Unchecked Exceptions are not required to be handled in the program or mentioned in the throws clause of the method.
- Exception is the superclass of all checked exceptions, whereas RuntimeException is the superclass of all unchecked exceptions. Note that RuntimeException is the child class of Exception.
- Checked exceptions are error scenarios that require handling in the code, or else you will get a compile-time error. For example, if you use FileReader to read a file, it throws FileNotFoundException and we must catch it in a try-catch block or throw it again to the caller method. Unchecked exceptions are mostly caused by poor programming, for example a NullPointerException when invoking a method on an object reference without making sure that it’s not null. For example, I can write a method to remove all the vowels from a string. It’s the caller’s responsibility to make sure not to pass a null string. I might change the method to handle these scenarios, but ideally the caller should take care of this.
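The contrast can be shown in a short sketch (the file path and method names here are hypothetical, chosen only for illustration):

```java
import java.io.FileNotFoundException;
import java.io.FileReader;

public class CheckedVsUncheckedDemo {

    // FileNotFoundException is checked: the compiler forces the caller
    // to catch it or declare it in a throws clause.
    static void readFile(String path) throws FileNotFoundException {
        new FileReader(path);
    }

    // Unchecked exceptions need no declaration; here we guard against
    // null input ourselves instead of letting NullPointerException occur.
    static String removeVowels(String input) {
        if (input == null) {
            throw new IllegalArgumentException("input must not be null");
        }
        return input.replaceAll("[aeiouAEIOU]", "");
    }

    public static void main(String[] args) {
        try {
            readFile("/no/such/file.txt"); // hypothetical path
        } catch (FileNotFoundException e) {
            System.out.println("caught checked exception");
        }
        System.out.println(removeVowels("Journal"));
    }
}
```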
7. What is difference between throw and throws keyword in Java?
The throws keyword is used with a method signature to declare the exceptions that the method might throw, whereas the throw keyword is used to disrupt the flow of the program and hand over an exception object to the runtime to handle it.
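A minimal sketch showing both keywords together (the method and message below are my own, not from the original answer):

```java
public class ThrowVsThrowsDemo {

    // "throws" in the signature declares what the method might throw;
    // "throw" inside the body actually raises the exception object.
    static void checkAge(int age) throws Exception {
        if (age < 18) {
            throw new Exception("age must be at least 18");
        }
    }

    public static void main(String[] args) {
        try {
            checkAge(15);
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
```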
8. How to write custom exception in Java?
We can extend the Exception class (or any of its subclasses) to create a custom exception class.
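Since the original code sample for this answer did not survive extraction, here is a minimal sketch of a custom exception built by extending Exception (the class and field names are illustrative):

```java
// A custom checked exception carrying extra context about the failure.
class InvalidAgeException extends Exception {
    private final int age;

    public InvalidAgeException(int age) {
        super("Invalid age: " + age);
        this.age = age;
    }

    public int getAge() {
        return age;
    }
}

public class CustomExceptionDemo {

    static void register(int age) throws InvalidAgeException {
        if (age < 0) {
            throw new InvalidAgeException(age);
        }
    }

    public static void main(String[] args) {
        try {
            register(-5);
        } catch (InvalidAgeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```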
12. What happens when exception is thrown by main method?
When an exception is thrown by the main() method, the Java runtime terminates the program and prints the exception message and stack trace in the system console.
Thank You for the post pankaj,
could you please update the indentation/numbering.
Thank You.
May i know all these questions are enough for Interview
Hi Sir,
Thanks for advance,
This is your statement, Java Exception handling framework is used to handle runtime errors only, compile time errors are not handled by exception handling framework.
Could you please share me who will handle this compile time errors(compiletime exceptions).
Compile-time errors are programmer responsibility. For example, you typed “STRING” rather than “String” and getting a compile-time error. Do you think this is something Java can fix? NO.
consider following code snipped
try{
int a = 10/0; // statement 1
}catch(ArithmeticException ae){
// handle code
}
My question is – exception arise at statement 1 & Exception object created,but how java internally identify that exception is ArithmeticException only, not other one ?
Good set of questions Pankaj. Thanks for the effort you put in.
HI Pankaj,
Thanks for providing this very useful tutorial, i learned a lot from it.
It is mentioned that catching “NullPointerException” is perfectly fine.
But i think it’s a bad practice to catch such Runtime Exceptions, they should be avoided with proper programming logics rather catching.
what do you think?
Thanks,
-Dash
NullPointerException comes from poor programming. It’s better to have a null check rather than catching NullPointerException. I have never mentioned that it’s a good practice to catch NPE. However, you might see that in some of the sample codes, that is just to explain some concepts and shouldn’t be considered as best practice.
Hi All,
Question 5 ; One overriding rule w.r.t exception: If child class method throws any checked exception compulsory parent class method should throw the same checked exception or its parent. It needs not to exact same. But there are no restrictions for unchecked exceptions. The code will be fine like below…
package com.journaldev.exceptions;
import java.io.IOException;
public class TestException4 {
public void start() throws Exception{
}
public void foo() throws NullPointerException{
}
}
class TestException5 extends TestException4{
public void start() throws IOException{
}
public void foo() throws RuntimeException{
}
}
Thanks,
Murali
Please explain 5 question
Hi,
First sample code in section 5 looked incorrect to me (catch(IOException | SQLException | Exception ex)). Types in multi-catch block must be disjoint. IO and SQL exceptions are subclasses of Exception in this case!
Beautiful tutorials.We love to study java from this tutorials all time!! Keep it Up!!
Hi,
Do you really think than question 6 has the right explanation. [ Rest of the questions I haven’t read ]
This is totally wrong, How you are going to handle “checked exceptions in the try catch block”. Unchecked exceptions not meant to be handled using try-catch.?
Or it’s some kind of typo mistake.
I didn’t get your point, let’s say you have a foo() method that is throwing FileNotFoundException and NullPointerException. Now I am calling foo() method in my program, since FileNotFoundException is a checked exception, I will have to either use try-catch block to handle it or use throws clause to throw it back to caller.
For NullPointerException, I don’t have to do anything at all. I am not required to use catch block or add it to throws clause. However better programming would be to make sure that if I am passing any input to foo() method then make sure it’s not null.
Good explanation, must watch
this is the best way of student activity.super good but students not utilise this proper way.
Very important tutorial
Thanks a lot Veranga. !)
Each try block must be followed by catch OR finally.
When we have try with resources we are allowed skip catch and finally blocks.
Yes you are right, you can have try-finally block too.
Hi Pankaj,
Could you please explain Progarm F, i couldnt understand it..
same is applicable to try block resources, they are also final i.e. you cant re assign new value (object) to resources.
Very informative.plz add few more questions
I will when I get more, but if you got some new and tricky questions then feel free to comment here.
can u re-explain throw and throws in detail ??
and also msg me the ans on my id please..
It’s simple. ‘throw’ keyword is use to send exception to the caller program. ‘throws’ keyword is used in method declaration to let caller program know that the method may throw these xxx exceptions, so make sure you handle them.
my question is about QN:E, As per answer start() signature is different in subclass (which is creating the problem) and what about foo() which is also having same issue?
If a method in parent class throws checked exception, child class can not change the signature. The child can ignore it or add more unchecked exceptions.
If parent class method does not throw checked exception, the child class method cannot throw it either.
start() method is throwing checked exceptions, so the rule apply. foo() method is throwing un-checked exceptions and the rule doesn’t apply on them.
Had great.
Yes you are right, updated the post. Thanks!!.
Yes, thats what I meant with multi-catch block.
Great man… such a descriptive way of explanation..
Thanks
Hats off to you Pankaj ..Super ..good work….Never stop your blogging habbit…
Great post “Hats Off To YOU”….. I have read many offline/online books n tutorials but ur understanding to
subject is so deep that it reflects in ur posts….not only this topic but every of ur post …. exceptional knowledge and exceptional writing……. 🙂
Regards
Sonal
Thanks Sonal, your appreciation means a lot.
finally() method is executed by Garbage Collector before the object is destroyed is probably wrong , Ideally it should be finalize()
Yes you are right, it was a typo. Thanks for pointing out, I have updated the post.
Hi,
Thank you so much for the great work that you are doing for us which is benefiting us in building our career. In fact Its our duty to appreciate people like you because you deserve it!
Thanks for the nice comments Rajiv. 🙂
Good one…very informative post
Custom Validator in Fluent Validation
Fluent Validation is an excellent validation framework for .NET. It is easy to use and supports the most common validation scenarios out of the box; I highly recommend that you use it in your projects. There are times, however, when you want to add your own validators in order to support your business rules. In this post, we will implement a custom generic validator that accepts arguments.
The Validator
We will be creating a validator that checks if the value of a certain property is in a supplied enumerable. The final syntax will look like:
RuleFor(u => u.Continent).In(new[] { "Africa", "Europe", "Asia", "North America", "South America", "Antartica", "Australia" });
In that snippet, the property Continent is a string. The intent is for validation to fail if the Continent value is not in the supplied enumerable. In the snippet we used a string array, though we will create the validator in such a way that we can pass in any IEnumerable<T>.
The Validator Class
We will create a new class that encapsulates the validation logic. For an introduction to creating custom validators, see this page on CodePlex. In our validator, we will create a validator that accepts arguments and is generic (it can be used on any property type).
The first step is to create the validator class. I will include it here first then provide explanations:
public class InValidator<T> : PropertyValidator
{
    public IEnumerable<T> Enumerable { get; private set; }

    public InValidator(IEnumerable<T> enumerable)
        : base("Property {PropertyName} not in the specified enumerable.")
    {
        if (enumerable == null)
        {
            throw new ArgumentNullException("enumerable", "Enumerable should not be null.");
        }

        Enumerable = enumerable;
    }

    protected override bool IsValid(PropertyValidatorContext context)
    {
        var item = (T)context.PropertyValue;
        return Enumerable.Contains(item);
    }
}
Like any custom property validator, this class inherits from the PropertyValidator class. The key element here is the IEnumerable<T> Enumerable property, whose value gets supplied from the constructor (note that the null check runs before the assignment, so an invalid argument never leaves the validator half-constructed). The actual validation happens in the IsValid method, where we just check if the property value is in the supplied collection.
Creating the Extension
The validator is usable now via SetValidator, but in order to get the fluent syntax above, we need to write an extension method for this validator. It would look like this:
public static class Extensions
{
    public static IRuleBuilderOptions<T, TProperty> In<T, TProperty>(
        this IRuleBuilder<T, TProperty> ruleBuilder,
        IEnumerable<TProperty> enumerable)
    {
        return ruleBuilder.SetValidator(new InValidator<TProperty>(enumerable));
    }
}
The extension method essentially acts as a wrapper for our validator class. In addition, it allows us to use the validator in a fluent way.
The fluent validation framework is a powerful framework that supports many validation scenarios, and includes a catch-all predicate validator. If you prefer to have validators that are more fit for your needs, creating a custom validator might be the way to go. This post showed you how to create one such validator. Hopefully it can help you in creating your own custom validators. Have fun!
“laravel make model” Code Answers
how to create model in laravel
php artisan make:model Flight
Source: laravel.com
laravel create model and migration
# If you would like to generate a database migration when you
# generate the model, you may use the --migration or -m option:
php artisan make:model Flight --migration
php artisan make:model Flight -m
laravel make model
php artisan make:model Flight --migration
php artisan make:model Flight -m
Source: laravel.com
model observer laravel
namespace App;

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    protected $table = 'posts';
    protected $fillable = ['title', 'slug', 'content'];

    protected static function boot()
    {
        parent::boot();

        static::saving(function ($model) {
            $model->slug = str_slug($model->title);
        });
    }
}
laravel make model
php artisan make:model Flight --factory
php artisan make:model Flight -f
php artisan make:model Flight --seed
php artisan make:model Flight -s
php artisan make:model Flight --controller
php artisan make:model Flight -c
php artisan make:model Flight -mfsc
Source: laravel.com
laravel bulk except
laravel make model command
laravel generate model
laravel create new instance
laravel create entity
create ()laravel
create laravel
laravel ->default('sha1') migration example
laravel eloquent
php aritisan makemodel
laravel create model with migration
laravel save the model name on table and retrieve it
laravel schema sample
laravel schema example
laravel mass insert
laravel migration command error
laravel eloquent find by id with where
laravel eloquent all with where
update db object laravel
laravel migration create table and model
change models in bulk laravel
laravel database migration create table
laravel fillable
laravel eloquent package
laravel create table with model
laravel update attribute
laravel migrate existing table
laravel when scopes
what is a model laravel
laravel get model from database
create models laravel
get models from database laravel
laravel data convertion
update laravel query eloquent
artisan make model
laravel migrate commands
model created function
laravel migration commands
laravel 6 make model
make model from table
migration for existing table laravel
eloquent all
create method on relationship laravel
laravel fresh
laravel update eloquent
la5ravel get all models
laravel 7 create model from table
get model where laravel
how to use laravel migration file from controller
how to use migration file from controller in laravel
how to use migration classes from controller in laravel
laravel 7 find or create
laravel make model with table name
laravel database migrations data
laravel 5.8 make model
eloquent where id >
create migration for model laravel
how to create table with php artisan command
laravel 5.8 create model file
create database laravel
OrderProductDelivery::create([ laravel
laravel creating model
how to prevent add new row in laravel
laravel 7 model
laravel eloquent methods
unsing queryScope in with eloquent
Eloquent::where not working on new table
laravel clear table terminal
laravel instantiate model
laravel usin where model
what is a laravel model
create model in laravel
$project query eloquent laravel
laravel 5.4 change my eloquent return type to my model name
laravel insert
get all in laravel
create model laravel
laravel scope pendingForUser(
eloquent laravel logo
making model in laravel
laravel refresh
laravel create or update
how to use migration files method from command line in laravel
how to use migration file class from command line in laravel
laravel 7 wherein update
laravel make migration from model
laravel model to migration
laravel migration to sql
get all record from model laravel
laravel eloquent or
run migration with connection laravel
laravel ::find
updating an object in eloquent model laravel
laravel migration data type list
make a phantom table laravel
laravel create with model
laravel return model create
laravle model connection
php artisan make migration with model
create protected data in table laravel
create with table name in laravel
laravel create migration from model
laravel generate migration from model
laravel 5.7 make model
create new entry in table from model laravel
laravel eloquent delete
get with data from the model file laravel
laravel make:migration with model
migration file Reminder laravel
laravel create model from table
php artisan make:model
laravel scope
learn eloquent model laravel
update or create laravel
delete model then resave is laravel
create new based on model laravel
create model with migration laravel
laravel get data from model
first in laravel
laravel make model options
update in model laravel
laravel check if file has been updated in firstOrCreate
exising migration laravel
laravel User:ALL()
laravel get all fields from model
laravel get() find()
laravel observer
Model::delete laravel
laravel eloquent order by
drop table from migration in laravel
laravel eloquent count
laravel make model
how to use eloquent in laravel
php artisan make migration and model
php laravel create model
how to know if eloquent query in laravel has return
laravel lumen find or create new
make model laravel
laravel table name
create in model laravel
laravel mass create
laravel create on model
why we use App\model name in laravel
laravel eloquent return id 1
laravel model events
create model from models laravel
php artisan create model
how to create a model laravel 7
laravel create model and migration
laravel create moedel
laravel create model | https://www.codegrepper.com/code-examples/php/laravel+make+model | CC-MAIN-2021-10 | en | refinedweb |
Compile Error: 'ResourceDictionary' Root Element Is a Generic Type
Strange Compiler Error
One of my coworkers suddenly encountered the following error in a large .NET application we're developing:
'ResourceDictionary' root element is a generic type and requires a x:Class attribute to support the x:TypeArguments attribute specified on the root element tag.
Of course the
ResourceDictionary class in question wasn't generic, so the error made no sense. To make matters worse, no changes had been made to the file since it was checked out from source control, and that version of the project compiled fine.
- We started randomly reverting different changes in the working copy - without success.
- We tried to compile the same code on a different machine, just to make sure there wasn't something wrong with the developer's copy of Visual Studio - it failed the same way on other machines.
Eventually we ran out of ideas and decided to check whether anyone else had encountered this issue before. Sure enough, we found an MSDN forum post with exactly the same issue. The solution for the problem made no more sense than the problem itself: add the
x:Class attribute to the root element as suggested in the error message. Its value doesn't really matter and can be anything.
How to Reproduce the Issue
This was enough to resolve our issue, but I couldn't let the matter rest without investigating it further. After reading the forum thread in detail a couple of times, I started looking for a simple way to reproduce the issue in another project. It turned out you need at least 3 classes in your project:
- A generic class derived from
UserControl:
public class GenericBaseUserControl<T> : UserControl
{
    public T Property { get; set; }
}
- A custom user control deriving from it:
<genericResourceDictionary:GenericBaseUserControl x: <Grid> </Grid> </genericResourceDictionary:GenericBaseUserControl>
- A ResourceDictionary importing the namespace with the above custom control:
<ResourceDictionary xmlns="" xmlns: </ResourceDictionary>
Known Workarounds
That's enough for the bogus compile error to appear. I know of 2 workarounds to avoid the issue:
- Remove the namespace containing the problematic custom control from the resource dictionary if you don't need it:
<ResourceDictionary xmlns="" xmlns: <!-- No GenericResourceDictionary namespace, no compile error --> </ResourceDictionary>
- Add the x:Class attribute to the ResourceDictionary element:
<ResourceDictionary xmlns="" xmlns: <!-- This attribute fixes the build --> </ResourceDictionary>
In spite of these known workarounds I decided to report the issue to Microsoft Connect. I don't want other developers wasting their time on this strange compiler error, like we did. Feel free to upvote it, if your opinion is the same. | https://www.damirscorner.com/blog/posts/20150222-CompileErrorResourceDictionaryRootElementIsAGenericType.html | CC-MAIN-2021-10 | en | refinedweb |
Knowledge sharing — Introduction to the Spring Cloud Bus message bus
In the previous article, Introduction to the Spring Cloud Stream System, we discussed the details of Spring Cloud Stream. In this article, we'll discuss another component of the Spring Cloud system. Spring Cloud Bus is positioned as the message bus of the Spring Cloud system; it uses a message broker to connect all nodes of a distributed system. The official reference document for the Bus is very brief; it does not even have a picture.
The code structure of the latest Spring Cloud Bus (the code is short) is provided as follows:
The Bus Demo
Before we analyze how the Bus is implemented, let us take a look at two simple examples that uses the Bus.
Add a New Configuration Item to All Nodes
The Bus example is relatively simple, because the default configuration is already available in the Bus's AutoConfiguration layer. You only need to introduce the Spring Cloud Stream dependency corresponding to your message middleware, plus the Bus dependency. After that, all running applications will use the same topic to receive and send messages.
The demo for the Bus has been uploaded to GitHub: This demo initiates five nodes. If you add a new configuration item to any one of these five nodes, you add it to all of them.
Access the configuration-retrieving address provided by the controller of any node (the key is
hangzhou):
curl -X GET ''
The results returned by all nodes are all UNKNOWN, because the key
hangzhou does not exist in the configuration of any node.
The Bus has a built-in
EnvironmentBusEndpoint, which is used to add or update configuration through the message broker.
Access the endpoint (URL) of a node
/actuator/bus-env?name=hangzhou&value=alibaba to add a new configuration item to this node (for example, access the URL of node1):
curl -X POST '' -H 'content-type: application/json'
Then access all nodes
/bus/env to obtain the configuration:
$ curl -X GET ''
unknown%
~ ⌚
$ curl -X GET ''
unknown%
~ ⌚
$ curl -X GET ''
unknown%
~ ⌚
$ curl -X GET ''
unknown%
~ ⌚
$ curl -X GET ''
unknown%
~ ⌚
$ curl -X POST '' -H 'content-type: application/json'~ ⌚
$ curl -X GET ''
alibaba%
~ ⌚
$ curl -X GET ''
alibaba%
~ ⌚
$ curl -X GET ''
alibaba%
~ ⌚
$ curl -X GET ''
alibaba%
~ ⌚
$ curl -X GET ''
alibaba%
You can see that a new configuration item has been added to all nodes. The key of the configuration item is
hangzhou, and the value is
alibaba. The configuration item is added through the
EnvironmentBusEndpoint.
Modify the configuration of some nodes
For example, set the destination to rocketmq-bus-node2 on node1 (spring.cloud.bus.id of node2 is set to
rocketmq-bus-node2:10002, which matches the setting on node1) to modify the configuration:
curl -X POST '' -H 'content-type: application/json'
Access
/bus/env to obtain the configuration (the message is sent from node1, so the Bus also modifies the configuration of node1):
~ ⌚
$ curl -X POST '' -H 'content-type: application/json'~ ⌚
$ curl -X GET ''
alibaba%
~ ⌚
$ curl -X GET ''
alibaba%
~ ⌚
$ curl -X GET ''
alibaba%
~ ⌚
$ curl -X GET ''
xihu%
~ ⌚
$ curl -X GET ''
xihu%
You can see that only the configuration of node1 and node2 is modified, while that of the remaining three nodes remains unchanged.
Implementation of the Bus
Concepts
Event
A remote event, RemoteApplicationEvent, is defined in the Bus. This event inherits from Spring's ApplicationEvent and has four implementations:
- EnvironmentChangeRemoteApplicationEvent: The remote environment change event. This event mainly receives Map<String, String> data and inserts the data into Spring's Environment context. The Bus demo is completed by using this event in conjunction with EnvironmentBusEndpoint and EnvironmentChangeListener.
- AckRemoteApplicationEvent: The remote acknowledgement event. The Bus will send an AckRemoteApplicationEvent upon successfully receiving a remote event.
- RefreshRemoteApplicationEvent: The remote configuration refresh event. This event is used in conjunction with the @RefreshScope annotation and all @ConfigurationProperties annotations to dynamically refresh the configuration classes annotated by them.
- UnknownRemoteApplicationEvent: The remote unknown event. If an exception is thrown when you convert internal messages of the Bus into a remote event, the exception will be encapsulated as this event.
The Bus also has a non-RemoteApplicationEvent - SentApplicationEvent: the message sending event. This event is used in conjunction with Trace to record the remote messages that have been sent.
All these events are used in conjunction with an
ApplicationListener. For example, the
EnvironmentChangeRemoteApplicationEvent is used in conjunction with
EnvironmentChangeListener to add or update the configuration:
public class EnvironmentChangeListener
        implements ApplicationListener<EnvironmentChangeRemoteApplicationEvent> {

    private static Log log = LogFactory.getLog(EnvironmentChangeListener.class);

    @Autowired
    private EnvironmentManager env;

    @Override
    public void onApplicationEvent(EnvironmentChangeRemoteApplicationEvent event) {
        Map<String, String> values = event.getValues();
        log.info("Received remote environment change request. Keys/values to update "
                + values);
        for (Map.Entry<String, String> entry : values.entrySet()) {
            env.setProperty(entry.getKey(), entry.getValue());
        }
    }
}
Call the
EnvironmentManager#setProperty method to set the configuration upon receiving the
EnvironmentChangeRemoteApplicationEvent from another node. This method sends an
EnvironmentChangeEvent for each configuration item change, which is listened for by the
ConfigurationPropertiesRebinder to perform the rebinding operation, and to add or update the configuration.
Actuator Endpoint
The Bus provides two Endpoints:
EnvironmentBusEndpoint and
RefreshBusEndpoint to add or update the configuration, or to refresh the global configuration. Their endpoint IDs or URLs are
bus-env and
bus-refresh, respectively.
Configuration
Message sending in the Bus inevitably involves the Topic and Group information. Such content has been encapsulated in
BusProperties. The default prefix is
spring.cloud.bus. For example:
- spring.cloud.bus.refresh.enabled is used to enable or disable the listener that listens for the global refresh event.
- spring.cloud.bus.env.enabled is used to enable or disable the endpoint for adding or updating the configuration.
- spring.cloud.bus.ack.enabled is used to enable or disable the sending of the AckRemoteApplicationEvent.
- spring.cloud.bus.trace.enabled is used to enable or disable the listener that listens for the Trace.
The default topic for sending messages is
springCloudBus, which can be changed through configuration. You can set the Group to the broadcasting mode, or set it to the latest mode by using UUID in conjunction with the offset.
Each Bus application has a unique Bus ID. The official format of the Bus ID is complex:
${vcap.application.name:${spring.application.name:application}}:${vcap.application.instance_index:${spring.application.index:${local.server.port:${server.port:0}}}}:${vcap.application.instance_id:${random.value}}
We recommend that you set the Bus ID manually, because the destination of the Bus's remote events needs to match the Bus ID.
spring.cloud.bus.id=${spring.application.name}-${server.port}
Underlying analysis of the Bus
Underlying analysis of the Bus involves the following questions:
- How does the Bus send messages?
- How does the Bus receive messages?
- How does the Bus match the destination?
- How does the Bus trigger the next action after receiving a remote event?
The
BusAutoConfiguration automatic configuration class is annotated by the
@EnableBinding(SpringCloudBusClient.class) annotation.
The
usage of the @EnableBinding annotation has been specified in the previous article Knowledge sharing - Introduction to the Spring Cloud Stream system and how it works. Its value is
SpringCloudBusClient.class, and it will create the
DirectChannel message channel for the input and output methods in
SpringCloudBusClient based on the source and sink interfaces:
public interface SpringCloudBusClient {

    String INPUT = "springCloudBusInput";
    String OUTPUT = "springCloudBusOutput";

    @Output(SpringCloudBusClient.OUTPUT)
    MessageChannel springCloudBusOutput();

    @Input(SpringCloudBusClient.INPUT)
    SubscribableChannel springCloudBusInput();
}
The binding properties of springCloudBusInput and springCloudBusOutput can be changed by modifying the configuration file (for example, to change the topic):
spring.cloud.stream.bindings:
  springCloudBusInput:
    destination: my-bus-topic
  springCloudBusOutput:
    destination: my-bus-topic
Send and receive messages
// BusAutoConfiguration

@EventListener(classes = RemoteApplicationEvent.class) // 1
public void acceptLocal(RemoteApplicationEvent event) {
    if (this.serviceMatcher.isFromSelf(event)
            && !(event instanceof AckRemoteApplicationEvent)) { // 2
        this.cloudBusOutboundChannel.send(MessageBuilder.withPayload(event).build()); // 3
    }
}

@StreamListener(SpringCloudBusClient.INPUT) // 4
public void acceptRemote(RemoteApplicationEvent event) {
    if (event instanceof AckRemoteApplicationEvent) {
        if (this.bus.getTrace().isEnabled() && !this.serviceMatcher.isFromSelf(event)
                && this.applicationEventPublisher != null) { // 5
            this.applicationEventPublisher.publishEvent(event);
        }
        // If it's an ACK we are finished processing at this point
        return;
    }
    if (this.serviceMatcher.isForSelf(event)
            && this.applicationEventPublisher != null) { // 6
        if (!this.serviceMatcher.isFromSelf(event)) { // 7
            this.applicationEventPublisher.publishEvent(event);
        }
        if (this.bus.getAck().isEnabled()) { // 8
            AckRemoteApplicationEvent ack = new AckRemoteApplicationEvent(this,
                    this.serviceMatcher.getServiceId(),
                    this.bus.getAck().getDestinationService(),
                    event.getDestinationService(), event.getId(), event.getClass());
            this.cloudBusOutboundChannel
                    .send(MessageBuilder.withPayload(ack).build());
            this.applicationEventPublisher.publishEvent(ack);
        }
    }
    if (this.bus.getTrace().isEnabled() && this.applicationEventPublisher != null) { // 9
        // We are set to register sent events so publish it for local consumption,
        // irrespective of the origin
        this.applicationEventPublisher.publishEvent(new SentApplicationEvent(this,
                event.getOriginService(), event.getDestinationService(),
                event.getId(), event.getClass()));
    }
}
1. Use the Spring event listener to listen for all
RemoteApplicationEvents received by the application. For example,
bus-env sends the
EnvironmentChangeRemoteApplicationEvent and
bus-refresh sends the
RefreshRemoteApplicationEvent to the application. All these events can be listened for by the listener.
2. Verify that an event received by the application is not an
AckRemoteApplicationEvent (otherwise, the Bus will get stuck in an endless loop: it repeatedly receives and sends messages). Then verify that the event was sent by the application itself. If both conditions are met, perform Step 3.
3. Create a message by using the remote event as the payload. Then use the MessageChannel (the binding name of which is springCloudBusOutput) created by Spring Cloud Stream to send the message to the broker.
4. Use the
@StreamListener to annotate the MessageChannel (the binding name is springCloudBusInput) created by Spring Cloud Stream, and the message received is a remote message.
5. Assume that the remote event is an
AckRemoteApplicationEvent, the trace feature is enabled, and the event was not sent by the application (indicating that it was sent by another application). If these conditions are met, send the
AckRemoteApplicationEvent to the application to allow it to acknowledge that it has received a remote event sent by another application. Then the process ends.
6. If the remote event was sent by another application to the application, perform Step 7 and Step 8. Otherwise, perform Step 9.
7. If the remote event was sent by another application, publish the event within this application. If the event was sent by this application itself, it has already been processed locally, so there is no need to publish it again.
8. If you have enabled the
AckRemoteApplicationEvent, create an
AckRemoteApplicationEvent and send this event to all applications. The reason for sending the event to the application is that you did not send the
AckRemoteApplicationEvent to the application. As a result, the application did not acknowledge the receipt of the event that was sent by itself. The reason for sending the event to other applications is that the application needs to inform other applications that it has received the message.
9. If you have enabled Trace, create and send the
SentApplicationEvent to the application.
After
bus-env is triggered, the
EnvironmentChangeListener of all nodes will detect the configuration change, and controllers of all these nodes will print the following information:
o.s.c.b.event.EnvironmentChangeListener : Received remote environment change request. Keys/values to update {hangzhou=alibaba}
If you listen for
AckRemoteApplicationEvent on the current node, you will receive information from all nodes. For example, the
AckRemoteApplicationEvent listened for on node5 is as follows:
ServiceId [rocketmq-bus-node5:10005] listeners on {"type":"AckRemoteApplicationEvent","timestamp":1554124670484,"originService":"rocketmq-bus-node5:10005","destinationService":"**","id":"375f0426-c24e-4904-bce1-5e09371fc9bc",124670184,"originService":"rocketmq-bus-node1:10001","destinationService":"**","id":"91f06cf1-4bd9-4dd8-9526-9299a35bb7cc",402,"originService":"rocketmq-bus-node2:10002","destinationService":"**","id":"7df3963c-7c3e-4549-9a22-a23fa90a6b85",406,"originService":"rocketmq-bus-node3:10003","destinationService":"**","id":"728b45ee-5e26-46c2-af1a-e8d1571e5d3a",70427,"originService":"rocketmq-bus-node4:10004","destinationService":"**","id":"1812fd6d-6f98-4e5b-a38a-4b11aee08aeb","ackId":"750d033f-356a-4aad-8cf0-3481ace8698c","ackDestinationService":"**","event":"org.springframework.cloud.bus.event.EnvironmentChangeRemoteApplicationEvent"}
Now, let us answer the four questions listed at the beginning of this section:
- How does the Bus send messages? The Bus sends an event to the springCloudBus topic by using the BusAutoConfiguration#acceptLocal method through Spring Cloud Stream.
- How does the Bus receive messages? The Bus receives messages from the springCloudBus topic by using the BusAutoConfiguration#acceptRemote method through Spring Cloud Stream.
- How does the Bus match the destination? The Bus matches the destination in the BusAutoConfiguration#acceptRemote method, which receives remote events.
- How does the Bus trigger the next action after receiving a remote event? After receiving a RemoteApplicationEvent on the current node, the Bus triggers the next action through the Spring event mechanism. For example, the EnvironmentChangeListener receives the EnvironmentChangeRemoteApplicationEvent, and the RefreshListener receives the RefreshRemoteApplicationEvent.
Summary
The Spring Cloud Bus does not have too much content. However, you need to first understand the Spring Cloud Stream system and the Spring event mechanism before you can sufficiently understand how the Bus processes local and remote events.
Currently, the Bus provides only a few built-in remote events, most of which are configuration related. We can use the
RemoteApplicationEvent in conjunction with the
@RemoteApplicationEventScan annotation to build our own microservice message system.
Author
Fang Jian (nickname: Luoye), GitHub ID @fangjian0423, open-source fan, Alibaba Cloud senior development engineer, developer of Alibaba Cloud EDAS. Fang Jian is one of the owners of the open-source Spring Cloud Alibaba project.
Reference: | https://alibaba-cloud.medium.com/knowledge-sharing-introduction-to-the-spring-cloud-bus-message-bus-f3fb41d70ec | CC-MAIN-2021-10 | en | refinedweb |
Python Musings #1: Reading raw input from Hackerrank Challenges
As some of you may or may not know, Hackerrank is a website that offers a variety of practice questions to work on your coding skills in an interactive online environment. You can work in a variety of languages like Java, C, C++, Python, and more! There are a lot of high quality questions that can really challenge your present coding and problem solving skills and help you build on them.
When I started out, I found that reading raw data was more challenging than writing the rest of the solution to the problem. This blog post shows how to read raw data as lists, arrays, and matrices, and hopefully sheds some light on how to do this in other problems.
I'm sure there are other, more effective ways to read raw input on Hackerrank, but this has worked for me and I hope it will be helpful for others as well. Sorry in advance if my code appears to be juvenile.
To solve these problems, I will be working with Python 3.
Step 0: Reading Raw input
To read a line of raw input. Simply use the
input() function.
While this is great for reading data. It gives it to us in the raw form, which results in the data being received as a string- which isn’t any good if we want to do calculations.
Now that we know how to read raw input. Lets now get the data to be readable and in the form that we want for solving Hackerrank challenges.
Step 1: Reading Lists
Lets look at a problem that we can to read raw input as a list to solve.
This problem is called “Shape and Reshape“. This involves reading raw, space-separated data and turning it into a matrix.
Turning the data into a matrix can be done by using the numpy package. But to do this, the data first needs to be made into a list. I do this by first reading the raw data with
input().strip().split(' ').
Lets explain what each part of this code does.
input() takes in the raw input.
The
.strip() method clears all leading and trailing spaces. While not necessary for our example, it is good practice to do this so as not to have to encounter this problem in other cases.
The
.split(' ') method splits the raw data into individual elements. Because the data is space separated we define the the split to be a space. We could technically write
split() without anything in the middle (the default is white-space) but for this example we are defining the separator as an individual space character.
The problem now is that the data needs to be converted to a proper form. Remember,
input() reads all raw data as a string. For our problem, our data needs to be in integer form. This can be done by iterating the
int() function across the elements in our list.
The code we use is thus:
n = input("Write input: ").strip().split(' ')
data = [int(i) for i in n]

# Print the data to see our result (Not required for the solution)
print(data)
The for-loop may not seem intuitive at first glance if you are used to writing for-loops traditionally, but once you learn how to write loops this way, you will definitely prefer it. As someone who came to Python after initially spending a lot of time with R, I would describe this way of writing a for-loop as analogous to R's
sapply() or
lapply() functions.
And there you have it! With 2 lines of code our data is ready to solve the problem!
(For actually solving the problem, you will have to figure it out yourself or you can check out my code on my Python Musings Github Repository (Its still a work in progress, messy code and all)
Step 2: Reading Arrays
After looking in the discussions for this problem the raw data can be read directly as an array using the numpy package.
The code is:
import numpy as np data = np.array(input("Write input: ").strip().split(' '),dtype= int) # Print the data to see our result (Not required for the solution) print(data)
Essentially we put our raw input that we put in as a list before directly into numpy’s
.array() function. To coerce the data into integers we define the data type as an integer. Therefore we set
dtype=int in the
.array() function…. and there you have it! An array of integers!
Step 3: Reading Matrices
For reading matrices, lets look at the problem titled “Transpose and Flatten“. This requires reading a matrix of data and transposing it and flattening it. While doing those operations are pretty straight forward. Reading the data might be a challenge.
Lets first look at how the input format is.
We want to have to read data in a way that will let us know the number of rows and columns of the data followed by the elements to read.
To do this, the code is:
import numpy as np

n, m = map(int, input().strip().split())
array = np.array([input().strip().split() for i in range(0, n)], dtype=int)

print(array)
Let's now break down the code:
Python has a very cool feature that lets you assign multiple variables in a single line by separating them with commas, so we can assign our rows and columns from the input in a single line. To assign the differing values, we use the map() function on the split input. To ensure that the values are in integer form, we apply the int function to both variables.
To get the rest of the data, we can read it directly into numpy’s
.array() function, iterating across the number of inputs we know we will be having (i.e. the number of rows). With this, we get the matrix we want for our input and can now work on the problem!
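To sketch the rest of the "Transpose and Flatten" flow, here is the reading code combined with numpy's transpose() and flatten() calls. The 2x2 sample input is my own illustration, with stdin simulated so the snippet runs outside Hackerrank:

```python
import io
import sys

import numpy as np

# Made-up sample input: first line is rows and columns, then the matrix rows.
sys.stdin = io.StringIO("2 2\n1 2\n3 4\n")

n, m = map(int, input().strip().split())
array = np.array([input().strip().split() for i in range(0, n)], dtype=int)

print(array.transpose())  # [[1 3]
                          #  [2 4]]
print(array.flatten())    # [1 2 3 4]
```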
Conclusion
Doing challenges on Hackerrank is a good way to build skills in writing code and problem solving in Python. I personally found it initially challenging to read data in the form I wanted. I hope this article shed some light on solving these problems!
Be sure to check out my Python Musings GitHub repository to see where I am in my adventures!
Light nodekit class.
#include <Inventor/nodekits/SoLightKit.h>
Light nodekit class.
This nodekit class is used to create light nodes that have a local transformation and a geometry icon to represent the light source. SoLightKit adds three public parts to the basic nodekit: transform , light , and icon .
SoLightKit creates an SoDirectionalLight as the light part by default - all other parts are NULL at creation.
You can move the light relative to the rest of the scene by creating and editing the transform part.
You can add a geometrical representation for the light by setting the icon part to be any scene graph you like.
SoLightKit also adds two private parts. An SoTransformSeparator contains the effect of transform to move only the light and icon, while allowing the light to illuminate the rest of the scene. The second private part is an SoSeparator, which keeps property nodes within the icon geometry from affecting the rest of the scene. It also serves to cache the icon even when the light or transform is changing.
SoLightKit is derived from SoBaseKit and thus also includes a callbackList part for adding callback nodes.
(SoTransform) transform
This part positions and orients the light and icon relative to the rest of the scene. Its effect is kept local to this nodekit by a private part of type SoTransformSeparator. The transform part is NULL by default. If you ask for transform using getPart(), an SoTransform will be returned. But you may set the part to be any subclass of SoTransform. For example, set the transform to be an SoDragPointManip and the light to be an SoPointLight. Then you can move the light by dragging the manipulator with the mouse.
(SoLight) light
The light node for this nodekit. This can be set to any node derived from SoLight. An SoDirectionalLight is created by default, and it is also the type of light returned when you request that the nodekit build a light for you.
(SoNode) icon
This part is a user-supplied scene graph that represents the light source. It is NULL by default - an SoCube is created by the lightkit when a method requires it to build the part itself.
Extra Information for List Parts from Above Table
SoAppearanceKit, SoBaseKit, SoCameraKit, ...
table.getIntersecting(geometry).optArg("index", index) → selection<stream>
Get all documents where the given geometry object intersects the geometry object of the requested geospatial index.
The index optarg is mandatory. This command returns the same results as row -> row.g(index).intersects(geometry). The total number of results is limited to the array size limit, which defaults to 100,000 but can be changed with the array_limit option to run.
Example: Which of the locations in a list of parks intersect circle1?
import com.rethinkdb.gen.ast.Circle;

Circle circle1 = r.circle(r.array(-117.220406, 32.719464), 10)
                  .optArg("unit", "mi");
r.table("parks").getIntersecting(circle1).optArg("index", "area").run(conn);
© RethinkDB contributors
Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License. | https://docs.w3cub.com/rethinkdb~java/api/java/get_intersecting/index | CC-MAIN-2021-10 | en | refinedweb |
Background tasks¶
pretix provides the ability to run all longer-running tasks like generating ticket files or sending emails in a background thread instead of the web server process. We use the well-established Celery project to implement this. However, as celery requires running a task queue like RabbitMQ and a result storage such as Redis to work efficiently, we don’t like to depend on celery being available to make small-scale installations of pretix more straightforward. For this reason, the “background” in “background task” is always optional. If no celery broker is configured, celery will be configured to run tasks synchronously.
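The "background is optional" idea can be pictured with a tiny stand-in (purely illustrative — pretix and celery implement this in their own task classes):

```python
class Task:
    """Minimal sketch: run inline when no broker is configured."""
    def __init__(self, fn):
        self.fn = fn

    def apply_async(self, args=(), kwargs=None, broker=None):
        if broker is None:
            # No task queue available: fall back to a synchronous call.
            return self.fn(*args, **(kwargs or {}))
        raise NotImplementedError("would enqueue via celery here")

@Task
def add(a, b):
    return a + b

print(add.apply_async(args=(2, 3)))  # 5 (ran synchronously)
```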
Implementing a task¶
A common pattern for implementing asynchronous tasks can be seen a lot in pretix.base.services and looks like this:
from pretix.celery_app import app

@app.task
def my_task(argument1, argument2):
    # Important: All arguments and return values need to be serializable into JSON.
    # Do not use model instances, use their primary keys instead!
    pass  # do your work here

# Call the task like this:
# my_task.apply_async(args=(…,), kwargs={…})
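The comment about JSON-serializable arguments can be checked directly with the standard library — an integer primary key serializes, an arbitrary object does not (no celery required for this illustration):

```python
import json

class FakeModelInstance:
    """Stand-in for a Django model instance (illustrative only)."""

# Fine as task arguments: plain JSON types such as a primary key.
json.dumps([42, "pending"])

# Not fine: an object, which is why tasks take pks, not instances.
try:
    json.dumps([FakeModelInstance()])
    serializable = True
except TypeError:
    serializable = False
print(serializable)  # False
```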
Tasks in the request-response flow¶
If your user needs to wait for the response of the asynchronous task, there are helpers available in pretix.presale that will probably move to pretix.base at some point. They consist of the view mixin AsyncAction that allows you to easily write a view that kicks off and waits for an asynchronous task. AsyncAction will determine whether to run the task asynchronously or not and will do some magic to look nice for users with and without JavaScript support.
A usage example taken directly from the code is:
class OrderCancelDo(EventViewMixin, OrderDetailMixin, AsyncAction, View):
    """
    A view that executes a task asynchronously. A POST request will
    kick off the task into the background or run it in the foreground
    if celery is not installed. In the former case, subsequent GET
    calls can be used to determine the current status of the task.
    """
    task = cancel_order  # The task to be used, defined like above

    def get_success_url(self, value):
        """
        Returns the URL the user will be redirected to if the task succeeded.
        """
        return self.get_order_url()

    def get_error_url(self):
        """
        Returns the URL the user will be redirected to if the task failed.
        """
        return self.get_order_url()

    def post(self, request, *args, **kwargs):
        """
        Will be called while handling a POST request. This should process
        the request arguments in some way and call ``self.do`` with the
        task arguments to kick off the task.
        """
        if not self.order:
            raise Http404(_('Unknown order code or not authorized to access this order.'))
        return self.do(self.order.pk)

    def get_error_message(self, exception):
        """
        Returns the message that will be shown to the user if the task has failed.
        """
        if isinstance(exception, dict) and exception['exc_type'] == 'OrderError':
            return gettext(exception['exc_message'])
        elif isinstance(exception, OrderError):
            return str(exception)
        return super().get_error_message(exception)
On the client side, this can be used by simply adding a data-asynctask attribute to an HTML form. This will enable AJAX sending of the form and display a loading indicator:
<form method="post" data-asynctask>
    {% csrf_token %}
    ...
</form>
Permissions¶
pretix uses a fine-grained permission system to control who is allowed to control what parts of the system.
The central concept here is that of Teams. You can read more on configuring teams and permissions and the pretix.base.models.Team model in the respective parts of the documentation. The basic digest is:
An organizer account can have any number of teams, and any number of users can be part of a team. A team can be assigned a set of permissions and connected to some or all of the events of the organizer.
A second way to access pretix is via the REST API, which allows authentication via tokens that are bound to a team, but not to a user. You can read more at pretix.base.models.TeamAPIToken. This page will show you how to work with permissions in plugins and within the pretix code base.
Requiring permissions for a view¶
pretix provides a number of useful mixins and decorators that allow you to specify that a user needs a certain permission level to access a view:
from pretix.control.permissions import (
    OrganizerPermissionRequiredMixin, organizer_permission_required
)

class MyOrgaView(OrganizerPermissionRequiredMixin, View):
    # Only users with the permission ``can_change_organizer_settings`` on
    # this organizer can access this
    permission = 'can_change_organizer_settings'

class MyOtherOrgaView(OrganizerPermissionRequiredMixin, View):
    # Only users with *any* permission on this organizer can access this
    permission = None

@organizer_permission_required('can_change_organizer_settings')
def my_orga_view(request, organizer, **kwargs):
    # Only users with the permission ``can_change_organizer_settings`` on
    # this organizer can access this
    ...

@organizer_permission_required()
def my_other_orga_view(request, organizer, **kwargs):
    # Only users with *any* permission on this organizer can access this
    ...
Of course, the same is available on event level:
from pretix.control.permissions import (
    EventPermissionRequiredMixin, event_permission_required
)

class MyEventView(EventPermissionRequiredMixin, View):
    # Only users with the permission ``can_change_event_settings`` on
    # this event can access this
    permission = 'can_change_event_settings'

class MyOtherEventView(EventPermissionRequiredMixin, View):
    # Only users with *any* permission on this event can access this
    permission = None

@event_permission_required('can_change_event_settings')
def my_event_view(request, organizer, **kwargs):
    # Only users with the permission ``can_change_event_settings`` on
    # this event can access this
    ...

@event_permission_required()
def my_other_event_view(request, organizer, **kwargs):
    # Only users with *any* permission on this event can access this
    ...
You can also require that this view is only accessible by system administrators with an active “admin session” (see below for what this means):
from pretix.control.permissions import (
    AdministratorPermissionRequiredMixin, administrator_permission_required
)

class MyGlobalView(AdministratorPermissionRequiredMixin, View):
    # ...
    ...

@administrator_permission_required
def my_global_view(request, organizer, **kwargs):
    # ...
    ...
In rare cases it might also be useful to expose a feature only to people who have a staff account but do not necessarily have an active admin session:
from pretix.control.permissions import (
    StaffMemberRequiredMixin, staff_member_required
)

class MyGlobalView(StaffMemberRequiredMixin, View):
    # ...
    ...

@staff_member_required
def my_global_view(request, organizer, **kwargs):
    # ...
    ...
Requiring permissions in the REST API¶
When creating your own viewset using Django REST framework, you just need to set the permission attribute and pretix will check it automatically for you:
class MyModelViewSet(viewsets.ReadOnlyModelViewSet):
    permission = 'can_view_orders'
Checking permission in code¶
If you need to work with permissions manually, there are a couple of useful helper methods on the pretix.base.models.Event, pretix.base.models.User and pretix.base.models.TeamAPIToken classes. Here's a quick overview.
Return all users that are in any team that is connected to this event:
>>> event.get_users_with_any_permission()
<QuerySet: …>
Return all users that are in a team with a specific permission for this event:
>>> event.get_users_with_permission('can_change_event_settings')
<QuerySet: …>
Determine if a user has a certain permission for a specific event:
>>> user.has_event_permission(organizer, event, 'can_change_event_settings', request=request)
True
Determine if a user has any permission for a specific event:
>>> user.has_event_permission(organizer, event, request=request)
True
In the two previous commands, the request argument is optional, but required to support staff sessions (see below).
The same method exists for organizer-level permissions:
>>> user.has_organizer_permission(organizer, 'can_change_event_settings', request=request)
True
Sometimes, it might be more useful to get the set of permissions at once:
>>> user.get_event_permission_set(organizer, event)
{'can_change_event_settings', 'can_view_orders', 'can_change_orders'}
>>> user.get_organizer_permission_set(organizer, event)
{'can_change_organizer_settings', 'can_create_events'}
Within a view on the /control subpath, the results of these two methods are already available in the request.eventpermset and request.orgapermset properties. This makes it convenient to query them in templates:
{% if "can_change_orders" in request.eventpermset %} … {% endif %}
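Since request.eventpermset is just a set of permission-name strings, the template check above is ordinary set membership; a stand-alone sketch (the set literal is illustrative, not taken from a real request):

```python
# Stand-in for request.eventpermset, which pretix derives from the
# user's teams for the current event.
eventpermset = {"can_view_orders", "can_change_orders"}

print("can_change_orders" in eventpermset)           # True
print("can_change_event_settings" in eventpermset)   # False
```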
You can also do the reverse to get any events a user has access to:
>>> user.get_events_with_permission('can_change_event_settings', request=request)
<QuerySet: …>
>>> user.get_events_with_any_permission(request=request)
<QuerySet: …>
Most of these methods work identically on pretix.base.models.TeamAPIToken.
Staff sessions¶
Changed in version 1.14: In 1.14, the User.is_superuser attribute has been deprecated and statically set to return False. Staff sessions have been newly introduced.
System administrators of a pretix instance are identified by the is_staff attribute on the user model. By default, the regular permission rules apply for users with is_staff = True. The only difference is that such users can temporarily turn on "staff mode" via a button in the user interface that grants them all permissions as long as staff mode is active. You can check if a user is in staff mode using their session key:
>>> user.has_active_staff_session(request.session.session_key)
False
Staff mode has a hard time limit and during staff mode, a middleware will log all requests made by that user. Later, the user is able to also save a message to comment on what they did in their administrative session. This feature is intended to help compliance with data protection rules as imposed e.g. by GDPR. | https://docs.pretix.eu/en/latest/development/implementation/permissions.html | CC-MAIN-2020-34 | en | refinedweb |
This post is part of the C programming code series, which helps users understand and learn C by example. In this article, we will look at a C program to count the number of digits in an integer.
#include <stdio.h>

int main(){
    int num, count = 0;
    printf("Enter the number\n");
    scanf("%d", &num);
    while(num > 0){
        count++;
        num = num / 10;
    }
    printf("Total digits count = %d\n", count);
}
Enter the number
46837
Total digits count = 5
Now we will break the code into parts and try to understand what is going inside it.
Two integer variables, num and count, are declared. We read the input and store it in num.
The main logic of the C program to count the number of digits in an integer goes inside the while loop. The termination condition is num > 0. The variable count is incremented each time the loop runs.
The next line in the loop, num = num/10, is the most important part here. It removes the last digit of num on each iteration. The loop terminates when num becomes 0, which means every digit has been counted.
Let’s understand the example step by step when the input is 46837.
- First Iteration: Count=1, num=46837/10 which means num=4683
- Second Iteration: Count=2, num=4683/10 which means num=468
- Third Iteration: Count=3, num=468/10 which means num=46
- Fourth Iteration: Count=4, num=46/10 which means num=4
- Fifth Iteration: Count=5, num=4/10 which means num=0
- Loop Terminates now as num is 0.
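One caveat: with the loop above, an input of 0 never enters the loop (so count stays 0), and negative numbers are not handled at all. A small variant that covers both cases (a sketch — the helper function name is our own):

```c
/* Digit counter that also handles 0 and negative inputs. */
int count_digits(int num) {
    int count = 0;
    if (num == 0)
        return 1;        /* 0 is written with one digit */
    if (num < 0)
        num = -num;      /* drop the sign */
    while (num > 0) {
        count++;
        num = num / 10;  /* remove the last digit */
    }
    return count;
}
```

Here count_digits(46837) returns 5, count_digits(0) returns 1 and count_digits(-205) returns 3; it can replace the loop inside main if those inputs matter.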
table of contents
- NAME
- SYNOPSIS
- DESCRIPTION
- OPTIONS
- GIT COMMANDS
- HIGH-LEVEL COMMANDS (PORCELAIN)
- LOW-LEVEL COMMANDS (PLUMBING)
- CONFIGURATION MECHANISM
- IDENTIFIER TERMINOLOGY
- SYMBOLIC IDENTIFIERS
- FILE/DIRECTORY STRUCTURE
- TERMINOLOGY
- ENVIRONMENT VARIABLES
- DISCUSSION
- FURTHER DOCUMENTATION
- AUTHORS
- REPORTING BUGS
- SEE ALSO
- GIT
- NOTES
NAME¶
git - the stupid content tracker
SYNOPSIS¶
git [--version] [--help] [-C <path>] [-c <name>=<value>]
    [--exec-path[=<path>]] [--html-path] [--man-path] [--info-path]
    [-p|--paginate|-P|--no-pager] [--no-replace-objects] [--bare]
    [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
    [--super-prefix=<path>] <command> [<args>]
DESCRIPTION¶
Git is a fast, scalable, distributed revision control system with an unusually rich command set that provides both high-level operations and full access to internals.

OPTIONS¶
--version
--help
Other options are available to control how the manual page is displayed. See git-help(1) for more information, because git --help ... is converted internally into git help ....
-C <path>
-c <name>=<value>
--exec-path[=<path>]
--html-path
--man-path
--info-path
-p, --paginate
-P, --no-pager
--git-dir=<path>
Specifying the location of the ".git" directory using this option (or the GIT_DIR environment variable) turns off the repository discovery that tries to find a directory with a .git subdirectory.
If you just want to run git as if it was started in <path> then use git -C <path>.
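A quick sketch of these two spellings (the directory used here is only an example):

```shell
# Print the version of the git suite in use
git --version

# Run a command as if git had been started in another directory;
# rev-parse reports whether that directory is inside a work tree.
git -C /tmp rev-parse --is-inside-work-tree 2>/dev/null || echo "not a repository"
```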
--work-tree=<path>
--namespace=<path>
--super-prefix=<path>
--bare
--no-replace-objects
--literal-pathspecs
--glob-pathspecs
--noglob-pathspecs
--icase-pathspecs
--no-optional-locks
--list-cmds=group[,group...]
GIT COMMANDS¶
We divide Git into high level ("porcelain") commands and low level ("plumbing") commands.
HIGH-LEVEL COMMANDS (PORCELAIN)¶
We separate the porcelain commands into the main commands and some ancillary user utilities.
Main porcelain commands¶
git-add(1)
Ancillary Commands¶
Manipulators:
Interrogators:
Interacting with Others¶
These commands are to interact with foreign SCM and with other people via patch over e-mail.
git-archimport(1)
git-p4(1)
Reset, restore and revert¶
There are three commands with similar names: git reset, git restore and git revert.
git reset can also be used to restore the index, overlapping with git restore.
LOW-LEVEL COMMANDS (PLUMBING)¶
Although Git includes its own porcelain layer, its low-level commands are sufficient to support development of alternative porcelains.

Manipulation commands¶
git-apply(1)
Interrogation commands¶
git-cat-file(1)
In general, the interrogate commands do not touch the files in the working tree.
Syncing repositories¶
git-daemon(1)
git-update-server-info(1)
The following are helper commands used by the above; end users typically do not use them directly.
Internal helper commands¶
These are internal helper commands used by other commands; end users typically do not use them directly.
git-interpret-trailers(1)
CONFIGURATION MECHANISM¶
Git uses a simple text format to store customizations that are per repository and are per user.

IDENTIFIER TERMINOLOGY¶
<object>
<blob>
<tree>
<commit>
<tree-ish>
<commit-ish>
<type>
<file>
SYMBOLIC IDENTIFIERS¶
Any Git command accepting any <object> can also use the following symbolic notation:
HEAD
<tag>
<head>
For a more complete list of ways to spell object names, see "SPECIFYING REVISIONS" section in gitrevisions(7).
FILE/DIRECTORY STRUCTURE¶
Please see the gitrepository-layout(5) document.
Read githooks(5) for more details about each hook.
Higher level SCMs may provide and manage additional information in the $GIT_DIR.
TERMINOLOGY¶
Please see gitglossary(7).
ENVIRONMENT VARIABLES¶
Various Git commands use the following environment variables:
The Git Repository¶
GIT_DEFAULT_HASH
Git Commits¶
GIT_AUTHOR_NAME
GIT_AUTHOR_EMAIL
GIT_AUTHOR_DATE
GIT_COMMITTER_NAME
GIT_COMMITTER_EMAIL
GIT_COMMITTER_DATE
Git Diffs¶
GIT_DIFF_OPTS
GIT_EXTERNAL_DIFF
path old-file old-hex old-mode new-file new-hex new-mode
where:
<old|new>-file
<old|new>-hex
<old|new>-mode
GIT_DIFF_PATH_TOTAL
other¶
GIT_MERGE_VERBOSITY
GIT_PAGER
GIT_PROGRESS_DELAY
A number controlling how many seconds to delay before showing optional progress indicators. Defaults to 2.

GIT_TRACE
Enables general trace messages, e.g. alias expansion, built-in command execution and external command execution.
Unsetting the variable, or setting it to empty, "0" or "false" (case insensitive) disables trace messages.
See Trace2 documentation[2] for full details.
GIT_TRACE2_EVENT
GIT_TRACE2_PERF
GIT_TRACE_REDACT

DISCUSSION¶
More detail on the following is available from the Git concepts chapter of the user-manual[3] and gitcore-tutorial(7).

FURTHER DOCUMENTATION¶
See the references in the "description" section to get started using Git.
Users migrating from CVS may also want to read gitcvs-migration(7).
AUTHORS¶
Git was started by Linus Torvalds, and is currently maintained by Junio C Hamano. Numerous contributions have come from the Git mailing list.

REPORTING BUGS¶
Report bugs to the Git mailing list <git@vger.kernel.org[7]>.
SEE ALSO¶
gittutorial(7), gittutorial-2(7), giteveryday(7), gitcvs-migration(7), gitglossary(7), gitcore-tutorial(7), gitcli(7), The Git User's Manual[1], gitworkflows(7)
GIT¶
Part of the git(1) suite
NOTES¶
- 1. Git User's Manual
- 2. Trace2 documentation
- 3. Git concepts chapter of the user-manual
- 4. howto
- 5. Git API documentation
- 6. git@vger.kernel.org
- 7. git-security@googlegroups.com
Visualizing epoched data¶
This tutorial shows how to plot epoched data as time series, how to plot the spectral density of epoched data, how to plot epochs as an imagemap, and how to plot the sensor locations and projectors stored in Epochs objects.
Page contents
We’ll start by importing the modules we need, loading the continuous (raw) sample data, and cropping it to save memory:
import os
import mne

sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                                    'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False).crop(tmax=120)
To create the Epochs data structure, we'll extract the event IDs stored in the stim channel, map those integer event IDs to more descriptive condition labels using an event dictionary, and pass those to the Epochs constructor, along with the Raw data and the desired temporal limits of our epochs, tmin and tmax (for a detailed explanation of these steps, see The Epochs data structure: discontinuous data).
events = mne.find_events(raw, stim_channel='STI 014')
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
              'visual/right': 4, 'face': 5, 'buttonpress': 32}
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, event_id=event_dict,
                    preload=True)
del raw
Out:
176 events found
Event IDs: [ 1 2 3 4 5 32]
176 matching events found
Applying baseline correction (mode: mean)
Not setting metadata
Created an SSP operator (subspace dimension = 3)
3 projection items activated
Loading data for 176 events and 421 original time points ...
1 bad epochs dropped
Plotting Epochs as time series¶
To visualize epoched data as time series (one time series per channel), the mne.Epochs.plot() method is available. It creates an interactive window where you can scroll through epochs and channels, enable/disable any unapplied SSP projectors to see how they affect the signal, and even manually mark bad channels (by clicking the channel name) or bad epochs (by clicking the data) for later dropping. Channels marked "bad" will be shown in light grey color and will be added to epochs.info['bads']; epochs marked as bad will be indicated as 'USER' in epochs.drop_log.
Here we'll plot only the "catch" trials from the sample dataset, and pass in our events array so that the button press responses also get marked (we'll plot them in red, and plot the "face" events defining time zero for each epoch in blue). We also need to pass in our event_dict so that the plot() method will know what we mean by "buttonpress" — this is because subsetting the conditions by calling epochs['face'] automatically purges the dropped entries from epochs.event_id:
catch_trials_and_buttonpresses = mne.pick_events(events, include=[5, 32])
epochs['face'].plot(events=catch_trials_and_buttonpresses,
                    event_id=event_dict,
                    event_colors=dict(buttonpress='red', face='blue'))
Plotting projectors from an Epochs object¶
In the plot above we can see heartbeat artifacts in the magnetometer channels, so before we continue let’s load ECG projectors from disk and apply them to the data:
ecg_proj_file = os.path.join(sample_data_folder, 'MEG', 'sample',
                             'sample_audvis_ecg-proj.fif')
ecg_projs = mne.read_proj(ecg_proj_file)
epochs.add_proj(ecg_projs)
epochs.apply_proj()
Out:
Read a total of 6 projection items:
    ECG-planar-999--0.200-0.400-PCA-01 (1 x 203)  idle
    ECG-planar-999--0.200-0.400-PCA-02 (1 x 203)  idle
    ECG-axial-999--0.200-0.400-PCA-01 (1 x 102)  idle
    ECG-axial-999--0.200-0.400-PCA-02 (1 x 102)  idle
    ECG-eeg-999--0.200-0.400-PCA-01 (1 x 59)  idle
    ECG-eeg-999--0.200-0.400-PCA-02 (1 x 59)  idle
6 projection items deactivated
Created an SSP operator (subspace dimension = 9)
9 projection items activated
SSP projectors applied...
Just as we saw in the Plotting projectors from Raw objects section, we can plot the projectors present in an Epochs object using the same plot_projs_topomap() method. Since the original three empty-room magnetometer projectors were inherited from the Raw file, and we added two ECG projectors for each sensor type, we should see nine projector topomaps:
epochs.plot_projs_topomap(vlim='joint')
Note that these field maps illustrate aspects of the signal that have already been removed (because projectors in Raw data are applied by default when epoching, and because we called apply_proj() after adding additional ECG projectors from file). You can check this by examining the 'active' field of the projectors:
print(all(proj['active'] for proj in epochs.info['projs']))
Out:
True
Plotting sensor locations¶
Just like Raw objects, Epochs objects keep track of sensor locations, which can be visualized with the plot_sensors() method:
epochs.plot_sensors(kind='3d', ch_type='all')
epochs.plot_sensors(kind='topomap', ch_type='all')
Plotting the power spectrum of Epochs¶
Again, just like Raw objects, Epochs objects have a plot_psd() method for plotting the spectral density of the data.
Out:
Using multitaper spectrum estimation with 7 DPSS windows
Plotting Epochs as an image map¶
A convenient way to visualize many epochs simultaneously is to plot them as an image map, with each row of pixels in the image representing a single epoch, the horizontal axis representing time, and each pixel's color representing the signal value at that time sample for that epoch. Of course, this requires either a separate image map for each channel, or some way of combining information across channels. The latter is possible using the plot_image() method; the former can be achieved with the plot_image() method (one channel at a time) or with the plot_topo_image() method (all sensors at once).
By default, the image map generated by plot_image() will be accompanied by a scalebar indicating the range of the colormap, and a time series showing the average signal across epochs and a bootstrapped 95% confidence band around the mean. plot_image() is a highly customizable method with many parameters, including customization of the auxiliary colorbar and averaged time series subplots. See the docstrings of plot_image() and mne.viz.plot_compare_evokeds (which is used to plot the average time series) for full details. Here we'll show the mean across magnetometers for all epochs with an auditory stimulus:
Out:
81 matching events found
No baseline correction applied
Not setting metadata
0 projection items activated
0 bad epochs dropped
combining channels using "mean"
To plot image maps for individual sensors or a small group of sensors, use the picks parameter. Passing combine=None (the default) will yield separate plots for each sensor in picks; passing combine='gfp' will plot the global field power (useful for combining sensors that respond with opposite polarity).
Out:
81 matching events found
No baseline correction applied
Not setting metadata
0 projection items activated
0 bad epochs dropped
81 matching events found
No baseline correction applied
Not setting metadata
0 projection items activated
0 bad epochs dropped
81 matching events found
No baseline correction applied
Not setting metadata
0 projection items activated
0 bad epochs dropped
combining channels using "gfp"
To plot an image map for all sensors, use plot_topo_image(), which is optimized for plotting a large number of image maps simultaneously, and (in interactive sessions) allows you to click on each small image map to pop open a separate figure with the full-sized image plot (as if you had called plot_image() on just that sensor). At the small scale shown in this tutorial it's hard to see much useful detail in these plots; it's often best when plotting interactively to maximize the topo image plots to fullscreen. The default is a figure with black background, so here we specify a white background and black foreground text. By default plot_topo_image() will show magnetometers and gradiometers on the same plot (and hence not show a colorbar, since the sensors are on different scales) so we'll also pass a Layout restricting each plot to one channel type.
First, however, we'll also drop any epochs that have unusually high signal levels, because they can cause the colormap limits to be too extreme and therefore mask smaller signal fluctuations of interest.
reject_criteria = dict(mag=3000e-15,   # 3000 fT
                       grad=3000e-13,  # 3000 fT/cm
                       eeg=150e-6)     # 150 µV
epochs.drop_bad(reject=reject_criteria)

for ch_type, title in dict(mag='Magnetometers', grad='Gradiometers').items():
    layout = mne.channels.find_layout(epochs.info, ch_type=ch_type)
    epochs['auditory/left'].plot_topo_image(layout=layout, fig_facecolor='w',
                                            font_color='k', title=title)
Out:
Rejecting epoch based on EEG : ['EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006', 'EEG 007', 'EEG 015', 'EEG 016', 'EEG 023', 'EEG 039', 'EEG 041', ']
Rejecting epoch based on EEG : ['EEG 001', 'EEG 002', 'EEG 003', 'EEG 007', 'EEG 048', 'EEG 055']
Rejecting epoch based on EEG : ['EEG 007']
Rejecting epoch based on EEG : ['EEG 003', 'EEG 007']
Rejecting epoch based on MAG : ['MEG 1711']
Rejecting epoch based on EEG : ['EEG 001', 'EEG 002', 'EEG 003', 'EEG 007']
Rejecting epoch based on EEG : ['EEG 001', 'EEG 002', 'EEG 007']
Rejecting epoch based on MAG : ['MEG 1711']
8 bad epochs dropped
To plot image maps for all EEG sensors, pass an EEG layout as the layout parameter of plot_topo_image(). Note also here the use of the sigma parameter, which smooths each image map along the vertical dimension (across epochs), which can make it easier to see patterns across the small image maps (by smearing noisy epochs onto their neighbors, while reinforcing parts of the image where adjacent epochs are similar). However, sigma can also disguise epochs that have persistent extreme values and maybe should have been excluded, so it should be used with caution.
Total running time of the script: ( 0 minutes 23.433 seconds)
Estimated memory usage: 341 MB
Gallery generated by Sphinx-Gallery | https://mne.tools/stable/auto_tutorials/epochs/plot_20_visualize_epochs.html | CC-MAIN-2020-34 | en | refinedweb |
Microsoft ADO.NET is the latest data access API in the line of ODBC, OLE DB, and ADO. It is the preferred data access component for the Microsoft .NET Framework and allows you to access relational database systems.
The SQL Anywhere .NET Data Provider implements the Sap.Data.SQLAnywhere namespace and allows you to write programs in any of the .NET supported languages, such as Microsoft C# and Microsoft Visual Basic .NET, and access data from SQL Anywhere databases.
You can develop Internet and intranet applications using object-oriented languages, and then connect these applications to the database server using the SQL Anywhere .NET Data Provider.
I finished the part of finding the char at the position my homework required, but how do I delete the character I found from a String?
removeNchars takes a String, an int and a char and returns a String: the output string is the same as the input string except that the first n occurrences of the input char are removed from the string, where n represents the input integer. If there are not n occurrences of the input character, then all occurrences of the character are removed. Do not use arrays to solve this problem.
HW2.removeNchars("Hello there!", 2, 'e') "Hllo thre!"
HW2.removeNchars("Hello there!", 10, 'e') "Hllo thr!"
HW2.removeNchars("Hello there!", 1, 'h') "Hello tere!"
public class HW2{
    public static String removeNchars(String s, int a, char b){
        StringBuilder s = new StringBuilder();
        for(int i = 0; i<s.length(); i++){
            if(int i=a&& s.charAr(i)==b){

            }
        }
    }
}
There is more than one way to do it. I can see you used StringBuilder, so try this:
The idea is to create a StringBuilder from the input string:
StringBuilder sb = new StringBuilder(inputString);
use deleteCharAt() to delete the char you don't want (see the docs),
and convert it back to a string:
String resultString = sb.toString();
good luck
Edit:
You can also create a new StringBuilder (for example named output), then iterate the source string and append the chars you don't want to remove to the output string with .append(...);
something like this:
StringBuilder output = new StringBuilder();
output.append(s.charAt(i));
and return the output string after you finish iterating.
I don't know why you are using the StringBuilder class,but as you stated above i think this will help you with your homework,lets see if it works for you.
public static String removeNchars(String s, int a, char b) {
    String str = "";
    for (int i = 0; i < s.length(); i++) {
        if (a > 0 && s.charAt(i) == b) {
            a--;
        } else {
            str += s.charAt(i);
        }
    }
    return str;
}
You can use deleteCharAt() like this. Note two fixes to your attempt: don't shadow the parameter s with the StringBuilder, and don't advance the index after a deletion, because the following characters shift left:
public class HW2 {
    public static String removeNchars(String s, int a, char b) {
        StringBuilder sb = new StringBuilder(s);
        int i = 0;
        while (i < sb.length() && a > 0) {
            if (sb.charAt(i) == b) {
                sb.deleteCharAt(i); // don't advance: the next char moved into position i
                a--;
            } else {
                i++;
            }
        }
        return sb.toString();
    }
}
Talend Data Mapper provides you with the possibility to export your Structures as CSV files, Java classes or Avro files, to enable you to work with the exported files in external tools.
The procedure for all three types of output file is more or less the same. Any particularities based on the type of structure are specified below.
To export Structures as CSV files, Java classes or Avro files, do the following:
Click File > Export. In the Export window that opens, expand Data Mapper and select CSV Export, Java Export or Avro Export as appropriate, then click Next.
In the window that opens, expand your project in the left-hand pane and select the Structures checkbox, then select the checkboxes in the right-hand window that correspond to the Structures you want to export.
In the To directory field, select the directory to which you want to export by clicking Browse, browsing to the target directory or clicking Make New Folder if the directory does not already exist, and then clicking OK.
[For Java classes] In the Java package name field, enter a name for the package you are going to create.
[For Avro files] In the Avro namespace field, enter a name for the Avro namespace.
Click Finish to complete the process.
Hi Miles,
I thought the singular values are sorted from largest to smallest after "svd", but it seems they are not if we conserve parity. It is easy to sort them ourselves, but I'm not sure if you missed it for ConserveParity=true. Thanks.
#include "itensor/all.h"
using namespace itensor;
int
main()
{
int N = 8;
auto sites = SpinHalf(N,{"ConserveSz=",false, "ConserveParity",true});
auto ampo = AutoMPO(sites);
for(int j = 1; j < N; ++j)
{
ampo += 0.5,"S+",j,"S-",j+1;
ampo += 0.5,"S-",j,"S+",j+1;
ampo += "Sz",j,"Sz",j+1;
}
auto H = toMPO(ampo);
auto sweeps = Sweeps(20); //number of sweeps is 20
sweeps.maxdim() = 10,20,100,100,200;
sweeps.cutoff() = 1E-10;
auto state0 = InitState(sites);
for(int is = 1; is <= N; ++is)
{
state0.set(is,is%2==1 ? "Up" : "Dn");
//state0.set(is,"Dn");
}
auto psi0 = randomMPS(state0);
auto [energy,psi] = dmrg(H,psi0,sweeps,{"Quiet",true,"EnergyErrgoal=",1e-6,"EntropyErrgoal=",1e-5});
println("Ground State Energy = ",energy);
auto b = 4;
psi.position(b);
auto l = leftLinkIndex(psi,b);
auto s = siteIndex(psi,b);
auto [U,S,V] = svd(psi(b),{l,s});
auto u = commonIndex(U,S);
Real SvN = 0.;
for(auto n : range1(dim(u)))
{
    auto Sn = elt(S,n,n);
    auto p = sqr(Sn);
    if(n <= 4) println(n, " ", p);
    if(p > 1E-14) SvN += -p*log(p);
}
printfln("Across bond b=%d, SvN = %.10f",b,SvN);
return 0;
}
Jin
Hi Jin,
Thanks for the question. However, when I run the sample code you posted, I get the following output at the end:
Ground State Energy = -3.37493
1 0.893389
2 0.035519
3 1.64686e-05
4 4.13928e-06
Across bond b=4, SvN = 0.4569757731
which is indeed sorted. Are you seeing a very different output and can you post which singular values you are seeing?
Best,
Miles
Hi Jin,
Ok now I've figured it out. This will be the official answer (I had missed a detail about the code in my other answer).
The answer is that, no, the diagonal entries in the S tensor are in general not sorted. They are sorted within each block, but not across blocks. This is chosen so that U and V can retain a block-sparse structure when conserving quantum numbers.
So one way to get a sorted list of all of the singular values is to loop over them all first, put them into a std::vector, then call std::sort on that vector.
Another way is to call the older interface for the SVD that takes U, S, and V as (reference) arguments:
auto spec = svd(T,U,S,V);
This older interface returns a Spectrum object which can be used to access a sorted list of all of the density matrix eigenvalues (squares of singular values).
Best regards,
Miles
P.S. just for extra info, the behavior of the code in the presence of quantum numbers is not specifically programmed for each kind of quantum number, such as parity or otherwise. It is written in a very generic way, so it would likely either work for parity and all other types of quantum numbers, or fail for parity as well as all other quantum numbers.
How to develop functions-as-a-service with Apache OpenWhisk
Write your functions in popular languages and build components using containers.
Apache OpenWhisk is a serverless, open source cloud platform that allows you to execute code in response to events at any scale. Apache OpenWhisk offers developers a straightforward programming model based on four concepts: actions, packages, triggers, and rules.
Actions are stateless code snippets that run on the Apache OpenWhisk platform. You can develop an action (or function) in JavaScript, Swift, Python, PHP, Java, or any binary-compatible executable, including Go programs and custom executables packaged as containers.
Packages provide event feeds; anyone can create a new package for others to use.
Triggers associated with those feeds fire when an event occurs, and developers can map actions (or functions) to triggers using rules.
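To make the programming model concrete before diving into the Java example below, here is what a minimal JavaScript action looks like: an action is just a function named main that takes a params object and returns a JSON-serializable result (the direct call at the end is only for illustration; on the platform, OpenWhisk invokes main for you):

```javascript
// A minimal OpenWhisk action in JavaScript
function main(params) {
    const name = params.name || "OpenWhisk";
    return { greetings: "Hello! Welcome to " + name };
}

// OpenWhisk would normally invoke main; calling it directly just to show the shape:
console.log(main({}).greetings); // Hello! Welcome to OpenWhisk
```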
The following commands are used to create, update, delete, and list an action in Apache OpenWhisk:
Usage:
wsk action [command]
Available Commands:
create create a new action
update update an existing action, or create an action if it does not exist
invoke invoke action
get get action
delete delete action
list list all actions in a namespace or actions contained in a package
Set up OpenWhisk
Let’s explore how that works in action. First, download Minishift to create a single-node local OKD (community distribution of Kubernetes that powers Red Hat OpenShift) cluster on your workstation:
$ minishift start --vm-driver=virtualbox --openshift-version=v3.10.0
Once Minishift is up and running, you can log in with admin /admin and create a new project (namespace). The project OpenWhisk on OpenShift provides the OpenShift templates required to deploy Apache OpenWhisk:
$ eval $(minishift oc-env) && eval $(minishift docker-env)
$ oc login $(minishift ip):8443 -u admin -p admin
$ oc new-project faas
$ oc project -q
$ oc process -f | oc create -f -
Apache OpenWhisk is comprised of many components that must start up and sync with each other, and this process can take several minutes to stabilize. The following command will wait until the component pods are running:
$ while $(oc get pods -n faas controller-0 | grep 0/1 > /dev/null); do sleep 1; done
You can also watch the status with this:
$ while [ -z "`oc logs controller-0 -n faas 2>&1 | grep "invoker status changed"`" ]; do sleep 1; done
Develop a simple Java Action
Maven archetype is a Maven project templating toolkit. In order to create a sample Java action project, you won't use an archetype from Maven Central; instead, you first need to build and install the OpenWhisk Java action archetype locally, as below:
$ git clone
$ cd incubator-openwhisk-devtools/java-action-archetype
$ mvn -DskipTests clean install
$ cd $PROJECT_HOME
Let’s now create a simple Java Action, deploy it to OpenWhisk, and finally, invoke it to see the result. Create the Java Action as shown below:
$ mvn archetype:generate \
-DarchetypeGroupId=org.apache.openwhisk.java \
-DarchetypeArtifactId=java-action-archetype \
-DarchetypeVersion=1.0-SNAPSHOT \
-DgroupId=com.example \
-DartifactId=hello-openwhisk \
-Dversion=1.0-SNAPSHOT \
-DinteractiveMode=false
Next, build the Java application and deploy to OpenWhisk on Minishift locally:
$ cd hello-openwhisk
$ mvn clean package
$ wsk -i action create hello-openwhisk target/hello-openwhisk.jar --main com.example.FunctionApp
Having created the function
hello-openwhisk, verify the function by invoking it:
$ wsk -i action invoke hello-openwhisk --result
As all the OpenWhisk actions are asynchronous, you need to add
--result to get the result shown on the console. Successful execution of the command will show the following output:
{"greetings": "Hello! Welcome to OpenWhisk" }
Conclusion
With Apache OpenWhisk, you can write your functions in popular languages such as NodeJS, Swift, Java, Go, Scala, Python, PHP, and Ruby and build components using containers. It easily supports many deployment options, both locally and within cloud infrastructures such as Kubernetes and OpenShift. | https://opensource.com/article/18/11/developing-functions-service-apache-openwhisk | CC-MAIN-2020-34 | en | refinedweb |
Importance of Main Manifest Attribute in a Self-Executing JAR
1. Overview
Every executable Java class has to contain a main method. Simply put,
this method is a starting point of an application.
To run our main method from a self-executing JAR file, we have to create
a proper manifest file and pack it along with our code. This manifest
file has to have a main manifest attribute that defines the path to the
class containing our main method.
In this tutorial, we’ll show how to pack a simple Java class as a
self-executing JAR and demonstrate the importance of a main manifest
attribute for a successful execution.
2. Executing a JAR Without the Main Manifest Attribute
To get more practical, we’ll show an example of unsuccessful execution
without the proper manifest attribute.
Let’s write a simple Java class with a main method:
public class AppExample {
    public static void main(String[] args) {
        System.out.println("AppExample executed!");
    }
}
To pack our example class to a JAR archive, we have to go to the shell
of our operating system and compile it:
javac -d . AppExample.java
Then we can pack it into a JAR:
jar cvf example.jar com/baeldung/manifest/AppExample.class
Our example.jar will contain a default manifest file. We can now try
to execute the JAR:
java -jar example.jar
Execution will fail with an error:
no main manifest attribute, in example.jar
3. Executing a JAR With the Main Manifest Attribute
As we have seen, JVM couldn’t find our main manifest attribute. Because
of that, it couldn’t find our main class containing our main method.
Let’s include a proper manifest attribute into the JAR along with our
code. We’ll need to create a MANIFEST.MF file containing a single
line:
Main-Class: com.baeldung.manifest.AppExample
Our manifest now contains the classpath to our compiled
AppExample.class.
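One detail worth flagging: the JAR tooling requires the manifest text file to end with a newline, and the last line may be silently ignored if it does not. Creating the file with printf is one way to guarantee that (a sketch; any editor that adds a final newline works just as well):

```shell
# The trailing \n matters: without a final newline, the jar tool can
# silently ignore the last line of the manifest.
printf 'Main-Class: com.baeldung.manifest.AppExample\n' > MANIFEST.MF
cat MANIFEST.MF
```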
Since we already compiled our example class, there’s no need to do it
again.
We’ll just pack it together with our manifest file:
jar cvmf MANIFEST.MF example.jar com/baeldung/manifest/AppExample.class
This time JAR executes as expected and outputs:
AppExample executed!
4. Conclusion
In this quick article, we showed how to pack a simple Java class as a
self-executing JAR, and we demonstrated the importance of a main
manifest attribute on two simple examples.
The complete source code for the example is available
over
on GitHub. This is a Maven-based project, so it can be imported and
used as-is. | https://getdocs.org/java-jar-executable-manifest-main-class/ | CC-MAIN-2020-34 | en | refinedweb |
Sometimes a customer fails to pay on first billing and requires follow-up by physical mail.
In this tutorial, we will build a simple Rails app that acquires all invoices associated with your invoicing system and gives you the option to send the invoices by mail programmatically using the Lob API. Read on for the full tutorial or skip ahead to GitHub code.
Setting Up
This tutorial is written primarily for Rails, but the concepts can be translated across languages.
What you’ll need
- A free RapidAPI account to test API calls and export code snippets
- A Lob account to send the physical mail
- An invoicing system (we’ll use Freshbooks’ free 30 day trial)
What you’ll build
We’ll build this project in Rails and define two routes:
get_invoices, and
send_invoice. Using ERB and the Lob invoice template, we will dynamically generate an invoice with all of the Freshbooks invoice details filled out. We will call the Freshbooks and Lob APIs using RapidAPI. RapidAPI allows us to test the API calls in our browser and see a JSON response. It will also export the API call as a code snippet in the language of your choice.
What the final project will do
This tutorial will walk through how to programmatically send an invoice from Freshbooks via the US Postal Service with Lob. Note, we won’t be covering how to build a frontend.
When the app is visited, all of your Freshbooks’ invoices will rendered in a list along with a Send Mail button. When you click the Send Mail button, ERB will dynamically generate an invoice with all of the invoice information.
The resulting invoice document will look something like this, but you can customize it as you see fit.
Finally, the app will send the invoice through the United States Postal Service via Lob’s API. Pretty cool, right?
Building the Project
Let’s walk through the steps to build this application. If you reach a roadblock, you can compare your code to the finished project in this repo.
Step 1: Create an invoice controller
Before we mail the invoices out, we’ll need to retrieve them! The invoice controller will send all the invoices to the frontend and populate the app. Since we’re using Freshbooks as our invoicing system (thank you, 30 day free trial!), we’ll call our controller the Freshbooks controller. Here are the steps we took:
- Make a Freshbooks account
1a. Head to Freshbooks and create an account.
1b. Populate a new business with invoices or import your existing business invoices. We chose to make Dave’s Construction business because, hey, who wouldn’t want to build stuff all day?
- Call the Freshbooks API getInvoices endpoint on RapidAPI
2a. The first thing you will need to do is generate an access token. Head to the Freshbooks package on RapidAPI.
2b. After you generate the access token, you’ll be taken directly into the RapidAPI testing environment. Test out your new access token with the
getIdentityInfo endpoint!
2c. Look through the JSON response and find your Freshbooks accountID. (response -> roles -> accountId). Copy this ID onto your clipboard.
2d. Go to the getInvoices endpoint. Enter your access token and paste your account ID. Expand the optional parameters section and add the word “lines” to the “include” parameter. This addition will give us access to all the details about each item on an invoice.
2e. Click the TEST Function button to call the API!
2f. Sign in to generate a code snippet in Ruby. You can call an API without a RapidAPI account, but you’ll need one to access the code snippet.
3. Build a custom route and controller
3a. Start a new Rails project (
rails new [project_name]) or navigate to an existing project.
3b. Type
rails g controller freshbooks into your terminal.
3c. Define a custom route (
get 'freshbooks/get_invoices', :to => ‘freshbooks#get_invoices’) in your config/routes.rb file.
3d. Write
gem 'rapidapi', '~> 0.1.3' in your Gemfile. Then, run bundle install in Terminal.
3e. In your FreshbooksController, define a method called
get_invoices and copy and paste your RapidAPI
getInvoices code snippet that was saved from the previous step.
Note: Prefix each RapidAPI class with a double colon "::". The leading colons tell Ruby to resolve the RapidAPI constant from the top-level namespace instead of looking it up relative to the controller.
3f. Finally, write
render json: JSON.pretty_generate(root[‘payload’][‘response’]
[‘result’][‘invoices’]) as the last line in your controller.
3g. Start your server with
rails s and navigate to localhost:3000/freshbooks/get_invoices. You should see a JSON response of all the invoices within your Freshbooks account.
Ta-da!
Your invoice controller should look like this:
def get_invoices
  require 'rapidapisdk'
  ::RapidAPI.config(project: "LobFreshbooks", token: '############')
  root = ::RapidAPI.call('FreshbooksAPI', 'getInvoices', {
    'accessToken': '###################',
    'accountId': '#####',
    'include': 'lines'
  })
  render json: JSON.pretty_generate(root['payload']['response']['result']['invoices'])
end
Step 2: Make a Lob Controller
Now that the invoicing controller is all set, we can move onto the Lob controller. The Lob controller will actually call the Lob API and send the invoice as a physical piece of mail (or a virtual one if we’re in test mode). Let’s build it!
Since this process is slightly more complicated than the invoice controller, we’ve divided it into three sections: set up, connect to RapidAPI, generate invoice.
A) Set up your Lob account and Lob controller
To kick this process off, we’ll create a Lob account and skeleton Lob controller.
1.Create a Lob account and get your Test API Key
1a. Head to Lob to sign up for an API key:
1b. Copy the “Test API Key” onto your clipboard.
Why are we using the test API key? This key allows us to make calls and test them without sending out physical mail just yet. Another bonus? We won’t be charged for calls using a test API key.
2. Set up your Lob controller
2a. Run
rails g controller Lob in Terminal.
2b. Create another custom route in your routes.rb file to look like
get 'lob/send_invoice', :to => 'lob#send_invoice'.
2c. Define a method in your LobController called
send_invoice.
B) Connect your Lob controller to the Lob API with RapidAPI
Here’s where it gets interesting…We will be sending our Freshbooks invoice details along with our AJAX call to the send_invoice route.
1. Connect to RapidAPI to call Lob’s createLetter endpoint
1a. This step should feel familiar! Go to the createLetter endpoint on RapidAPI’s Lob package. Paste your Lob “Test API Key” into the apiKey form.
1b. Test the API call and see the JSON results. Generate the code snippet in Ruby. Copy and paste this code snippet directly into your send_invoice method.
Note: Again, prefix each RapidAPI class with a double colon “::”.
2. Fill out the API parameters
2a. Create
letterTo and
letterFrom address objects. The letterTo and letterFrom parameters in your Lob RapidAPI call take a JSON address object that looks like this:
'letterTo': JSON.generate({
  name: params['name'],
  address_line1: params['address_line_1'],
  address_city: params['city'],
  address_state: Lob.state[params['state'].to_sym],
  address_country: Lob.iso[params['country']],
  address_zip: params['zip_code']
})
Notice that each line of the letterTo address object contains an invoice parameter.
2b. Next, implement state and country conversions. Why do we need these? The Freshbooks JSON response returns full country and state names (e.g., "California" or "India"), but the Lob API takes state and country abbreviations.
Since the
address_country and
address_state parameters access a Lob model, we’ll need to generate a new Lob model (
rails g model Lob). Take a look at our Lob model to see how we built it and see the two country and state hash objects. Then, implement the two state and country hash objects into two methods respectively.
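Those two methods might look roughly like this (a sketch; the entries shown are a tiny hypothetical subset of the full lookup tables in the real Lob model):

```ruby
class Lob
  # Full-name-to-abbreviation tables; the real model covers every state/country.
  # state is keyed by symbols because the controller calls .to_sym on the name.
  def self.state
    { California: 'CA', Texas: 'TX' }
  end

  def self.iso
    { 'United States' => 'US', 'India' => 'IN' }
  end
end

# Matches how the controller uses them:
puts Lob.state['California'.to_sym] # => CA
puts Lob.iso['India']               # => IN
```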
2c. Fill in the letterFrom parameter with your company’s information:
'letterFrom': JSON.generate({
  name: "Dave's Construction",
  address_line1: "600 California St.",
  address_city: 'San Francisco',
  address_state: 'CA',
  address_country: 'US',
  address_zip: '94109'
})
C) Generate an invoice programmatically within your Lob controller
Finally, we’ll use ERB to generate an invoice within the Lob controller.
1. Create an .erb file
1a. If Rails hasn’t already created a lob/send_invoice.html.erb within your views folder, go ahead and do so now.
1b. Navigate to Lob’s invoice template and copy Lob’s invoice template into the send_invoice.html.erb file that you just created.
2. Alter the file to generate the invoice based on Freshbooks params
2a. Access to the
@params variable we defined in our LobController in the .erb file. Check out our finished .erb file here to get an idea of how you should modify this file.
3. Modify send_invoices method in LobController to generate the .erb file
3a. Bind the
@params variable to the .erb file to generate its result using this code snippet.
3b. Add these lines to your code above the RapidAPI.call function:
@params = params
Dir.chdir(File.dirname(__FILE__))
erb_string = File.read('../views/lob/send_invoice.html.erb')
renderer = ERB.new(erb_string)
result = renderer.result(binding)
The Dir.chdir line makes the controller file's directory our current directory, and then we read our .erb file. Once we have read the .erb file, we create a new instance of the ERB class, pass our template string as an argument, and render the result. The "binding" keyword is Ruby's way of passing all of our needed variables (in this case, the instance variable @params) to the template.
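Here is that binding mechanism in miniature, with a one-line template standing in for the invoice file (the name and amount are made up):

```ruby
require 'erb'

# Hypothetical stand-in for the Freshbooks invoice params
@params = { 'name' => 'Jane Doe', 'amount' => '125.00' }

# The template reads @params, just like send_invoice.html.erb does;
# passing `binding` hands the template our current variables.
erb_string = "Invoice for <%= @params['name'] %>: $<%= @params['amount'] %>"
renderer = ERB.new(erb_string)
result = renderer.result(binding)

puts result # => Invoice for Jane Doe: $125.00
```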
3c. Our RapidAPI.call method takes a file parameter. Go ahead and set the variable “result” as our file. The final line of your send_invoice method should say
render json: response.
Boom!
Once your are finished, your LobController#send_invoice method should look similar to this:
def send_invoice
  require 'rapidapisdk'
  ::RapidAPI.config(project: [YourRapidProject], token: [YourRapidProjectKey])
  @params = params
  Dir.chdir(File.dirname(__FILE__))
  erb_string = File.read('../views/lob/send_invoice.html.erb')
  renderer = ERB.new(erb_string)
  result = renderer.result(binding)
  response = ::RapidAPI.call('Lob', 'createLetter', {
    'apiKey': '###########################',
    'letterTo': JSON.generate({
      name: params['name'],
      address_line1: params['address_line_1'],
      address_city: params['city'],
      address_state: Lob.state[params['state'].to_sym],
      address_country: Lob.iso[params['country']],
      address_zip: params['zip_code']
    }),
    'letterFrom': JSON.generate({
      name: "Dave's Construction",
      address_line1: "600 California St.",
      address_city: 'San Francisco',
      address_state: 'CA',
      address_country: 'US',
      address_zip: '94109'
    }),
    'color': true,
    'file': result
  })
  render json: response
end
Step 3: Build a Frontend
We won’t go into too much detail about how your frontend should look, but just know that you should have two AJAX calls: one to the freshbooks/get_invoices route, and another to your lob/send_invoice route. This project populates the frontend with invoices on the initial loading of the app and displays buttons for each invoice to send by mail respectively. The frontend uses React.js. Feel free to use our sample frontend as a way to get this app up and running.
get_invoices:

$.get({ url: 'freshbooks/get_invoices' });

send_invoice:

$.get({
  url: 'lob/send_invoice',
  data: {
    name: invoice.fname + " " + invoice.lname,
    amount: invoice.amount,
    organization: invoice.organization,
    address_line_1: invoice.street,
    city: invoice.city,
    state: invoice.province,
    zip_code: invoice.code,
    country: invoice.country,
    invoice_number: invoice.invoice_number,
    customer_id: invoice.customerid,
    create_date: invoice.create_date,
    lines: invoice.lines
  }
}).done(function(res) {
  console.log(res);
});
Conclusion
While this tutorial focuses on Freshbooks, you can replicate the project with any invoicing system. RapidAPI has Shopify, SquareEcommerce and Stripe, or you could call the APIs directly.
Let us know what you build and feel free to raise an issue on GitHub with any questions or concerns. Happy invoicing!
Welcome to my first Cosmos tutorial (and, in fact, my first article). We will be looking at drawing a mouse in Cosmos today. I have found a few tutorials on the internet on how to do this, but none of them address the major problems. For example, if there is only one pixel and you add more, it lags. They also tend to leave stray pixels on the screen that interfere with other drawings. I have written a full-screen drawing manager that solves all these problems and allows you to easily draw other things, like a task bar or a window.
To truly understand what we are doing, we must first understand how Cosmos draws to the screen. I will start by giving a code example and then explaining it.
First we create a variable VGAScreen.
public static VGAScreen VScreen = new VGAScreen();
Now we will set it up.
Console.WriteLine("Vga Driver Booting");
VScreen.SetGraphicsMode(VGAScreen.ScreenSize320x200, VGAScreen.ColorDepth.BitDepth8);
VScreen.Clear(0);
Console.WriteLine("Vga Driver Booted");
First we print "Vga Driver Booting". Next, we change the graphics mode to 320X200 and the color depth to 8. Then we clear the screen with the color black, or 0. After that, we print that the driver booted. This is kind of ironic because after the graphics mode has been set the console will no longer display. Maybe I should have made it, "vga failed to boot".
Now we will write a pixel.
VScreen.SetPixel320x200x8((uint) x, (uint) y, (uint) c);
Now that we have a basic black screen with one pixel of another color, we can go into detail on how the operating system actually sets the pixel. Your code ends up as assembly at the end of the day, and that's what the CPU runs, so to fully understand we must go back to the raw hardware. The way VGA works is simple: the pixels on the screen are stored in memory, which is usually on board the display device. By changing the values in this memory, we can change the pixels on the screen. The only problem is that the bigger the resolution, the more space is needed. Since the memory stays the same size, bigger resolutions are hard to achieve; the solution is to use less space per pixel, which is why a bigger resolution comes with a smaller bit depth (hopefully that makes sense). If the screen size is 320 by 200, the color depth is 8. This means every pixel can be 1 of 256 colors (the size of a byte), which means the screen's memory is 64,000 bytes in size. It is important to remember this, because we will need it when we write the screen buffer.
Back to how the pixels are actually manipulated in memory, let’s think of the screen as a byte array like this.
public static byte[] SBuffer = new byte[64000];
If every byte is one pixel the first byte would be the pixel at x = 0 and y = 0. The second byte in the array would be x = 1 and y = 0 etc. So if we wanted to index the array using x and y coordinates we would use a little formula that looks like this:
Index = (Screen Width * y) + x
For our screen size it would be:
Index = (320 * y) + x
Let’s look at the how this works then I will give you the C# code. OK so we know one row on the screen is 320 pixels long. This means that if we wanted the first pixel in any row on the screen we just have to multiply the width by the row number, and by adding the x to this we will have the exact number or index of that pixel in the array. This is precisely how the computer does it too.
Here is the C# code:
public static void SetPixel(int x, int y, int color)
{
SBuffer[(y*320) + x] = (byte)color;
}
Let's look at why the Cosmos screen driver is so slow and how to solve the problem. It is slow because calculating the index takes a long time and the memory-write operation is costly. To make matters worse, the way the driver works makes it even slower. One operation that is not that costly, however, is reading memory, and this is where the key lies. We work around the problem by only drawing a pixel if it changes: it turns out that checking whether a pixel already has the right color before drawing is faster than just drawing over it. Anyway, here is the code, and then I will explain it.
public static void ReDraw()
{
// VScreen.Clear(0);
int c = 0;
for (int y = 0; y < 200; y++)
{
for (int x = 0; x < 320; x++)
{
uint cl = VScreen.GetPixel320x200x8((uint) x, (uint) y);
if (cl != (uint)SBuffer[c])
{
VScreen.SetPixel320x200x8((uint) x, (uint) y, SBuffer[c]);
}
c++;
}
}
for (int i = 0; i < 64000; i++)
{
SBuffer[i] = 0;
}
}
This is a bit more complex to understand, but I'm sure we will get it done. We have three loops in this method. Loop one is for the y axis and loop two is for the x axis. By looping row by row and then column by column, we still loop through all 64,000 pixels on the screen, but we have the x and y coordinates. The variable c keeps track of the index that we use on the buffer. Overall, it is not a bad idea to understand what's going on inside the loops. We have the if statement that just makes sure we don't do a costly set operation for no reason. After that, we have loop number three that just fills the buffer with black. Congrats! You now understand how to double buffer the screen. This will remove all flickering: the screen will redraw frame by frame, not pixel by pixel, which greatly improves the overall feel and performance. It also removes the need to manually dispose of the previous frame, as all the other blogs recommend. Now that we can draw properly to the screen, let's draw the mouse.
First we will create a new mouse
public static Mouse m = new Mouse();
Next, we need to start the mouse. You need to do this somewhere in your boot sequence.
public static class BootManager
{
public static void Boot()
{
Screen.Boot();
Desktop.m.Initialize(320, 200);
}
}
Now we will draw the mouse like this.
Screen.SetPixel(m.X, m.Y, 40);
Screen.SetPixel(m.X + 1, m.Y, 40);
Screen.SetPixel(m.X + 2, m.Y, 40);
Screen.SetPixel(m.X, m.Y + 1, 40);
Screen.SetPixel(m.X, m.Y + 2, 40);
Screen.SetPixel(m.X + 1, m.Y + 1, 40);
Screen.SetPixel(m.X + 2, m.Y + 2, 40);
Screen.SetPixel(m.X + 3, m.Y + 3, 40);
If we run this code we get this:
And there you have it, one sexy mouse in Cosmos. If you move the mouse it will move flawlessly, with no lag, and best of all it will never drop ghost pixels. Now let's add a "task bar" (you will need to add this method to your Screen class).
public static void DrawFilledRectangle(uint x0, uint y0, int Width, int Height, int color)
{
for (uint i = 0; i < Width; i++)
{
for (uint h = 0; h < Height; h++)
{
SetPixel((int)(x0 + i), (int)(y0 + h), color);
}
}
}
All we do now is add this to the code.
Screen.DrawFilledRectangle(0,0,320,25,50);
Remember to add it before you draw your mouse so it stays underneath. Here is the end result:
[Image 2: /KB/cs/842576/image013.png]
And that is it for this tutorial. I will add a download link down below to the cosmos project.
This article, along with any associated source code and files, is licensed under The Creative Commons Attribution-Share Alike 3.0 Unported License
using System;
using System.Collections.Generic;
using System.Text;
using Sys = Cosmos.System;
using Cosmos.Hardware;
namespace taskbar
{
public class Kernel : Sys.Kernel
{
VGAScreen screen;
Mouse mouse;
protected override void BeforeRun()
{
screen = new VGAScreen();
mouse = new Mouse();
screen.SetGraphicsMode(VGAScreen.ScreenSize.Size320x200, VGAScreen.ColorDepth.BitDepth8);
screen.Clear(1);
mouse.Initialize(320, 200);
}
protected override void Run()
{
screen.SetPixel320x200x8((uint)mouse.X, (uint)mouse.Y, 40);
screen.SetPixel320x200x8((uint)mouse.X + 1, (uint) mouse.Y, 40);
screen.SetPixel320x200x8((uint)mouse.X + 2, (uint)mouse.Y, 40);
screen.SetPixel320x200x8((uint)mouse.X, (uint)mouse.Y + 1, 40);
screen.SetPixel320x200x8((uint)mouse.X, (uint)mouse.Y + 2, 40);
screen.SetPixel320x200x8((uint)mouse.X + 1, (uint)mouse.Y + 1, 40);
screen.SetPixel320x200x8((uint)mouse.X + 2, (uint)mouse.Y + 2, 40);
screen.SetPixel320x200x8((uint)mouse.X + 3, (uint)mouse.Y + 3, 40);
}
}
}
VScreen.SetGraphicsMode(VGAScreen.ScreenSize320x200, VGAScreen.ColorDepth.BitDepth8);
1>D:\TKP\Mercury I\ OS\tkposi-mercuri-a00xx\TKPOS I Mercuri\VGA\VGABoot.cs(26,17,26,32): error CS1061: 'Cosmos.Hardware.VGAScreen' does not contain a definition for 'SetGraphicsMode' and no extension method 'SetGraphicsMode' accepting a first argument of type 'Cosmos.Hardware.VGAScreen' could be found (are you missing a using directive or an assembly reference?)
1>D:\TKP\Mercury I\ OS\tkposi-mercuri-a00xx\TKPOS I Mercuri\VGA\VGABoot.cs(26,43,26,53): error CS0117: 'Cosmos.Hardware.VGAScreen' does not contain a definition for 'ScreenSize'
1>D:\TKP\Mercury I\ OS\tkposi-mercuri-a00xx\TKPOS I Mercuri\VGA\VGABoot.cs(26,77,26,87): error CS0117: 'Cosmos.Hardware.VGAScreen' does not contain a definition for 'ColorDepth'
Build has been canceled.
| https://www.codeproject.com/Articles/842576/Cosmos-Large-Cursor-GUI-tutorial?display=Print | CC-MAIN-2020-34 | en | refinedweb |
Look and Feel
- 5 minutes to read
The look and feel of the ribbon, bars, and menus depends on the skin applied to the application.
To specify the application skin, use the DevExpress Project Settings window:
- in the Solution Explorer, right-click the project
- in the context menu, click DevExpress Project Settings
- use Skin Name to select a skin
- use Skin Palette to select a color palette (for vector-image-based skins only)
See the following help topic for more information about application settings: Project Settings Page.
Visual Elements and Appearances
Bar and dock controls (ribbons, bars, menus, etc.) consist of multiple visual elements — ribbon pages, application menus, backstage views, etc. The skin specifies each visual element's appearance settings — background and foreground colors, font face, size and style, etc.
To customize appearance settings for bar and dock controls, you can use controllers. The BarAndDockingController component is a controller that specifies appearance settings for the following controls/components:
Use the following components to access controllers:
DefaultBarAndDockingController — a toolbox component that allows you to access the default controller.
The default controller specifies appearance settings for all bar and dock controls/components in the application. You can utilize the controller in the Windows Forms Designer. Use the Controller property to access the default controller.
To access the default controller in code, use the static (Shared in VB) Default property.
BarAndDockingController — a controller that you can assign to a specific control/component. You can use this controller to customize appearance settings for the assigned control/component only.
To assign a controller to a control/component, use a dedicated property. For example, use the Controller property to assign a controller to a bar manager. The controller specifies appearance settings for all bars that belong to this manager.
You can specify appearance settings for the following visual elements:
- AppearancesBackstageView - Provides the default appearance settings applied to BackstageViewControl controls.
- AppearancesBar - Provides the default appearance settings of the Bars UI, implemented with the BarManager component.
- AppearancesDocking - Provides the default appearance settings for all dock panels, implemented with the DockManager component.
- AppearancesDocumentManager - Contains the default appearance settings applied to Views of the DocumentManager component.
- AppearancesRibbon - Provides the default appearance settings of the Ribbon Controls and Ribbon Forms.
- LookAndFeel - Provides Look And Feel and Skinning settings for the controls/components included in the XtraBars library.
- PaintStyleName - Gets or sets the name of the paint scheme applied to the Bars UI (BarManager), dock panels (DockManager) and MDI tabbed windows (XtraTabbedMdiManager).
- PropertiesBar - Contains the default customization settings of the Bars UI (BarManager).
- PropertiesDocking - Provides the default customization settings for the Application UI Manager (DocumentManager) and Dock Manager.
- PropertiesRibbon - Provides the default customization properties for the Ribbon UI elements.
NOTE
Skins use brushes or images to fill the background. The actual behavior depends on the skin. The background color is not in effect if the skin uses an image to fill the background.
See the following help topic for more information about skins and appearance settings: Application Appearance and Skin Colors.
Individual Visual Elements
You can also specify appearance settings for a particular visual element. For example, the Bar.Appearance property allows you to specify appearance settings for a particular bar.
using System.Drawing.Drawing2D;
bFile.Appearance.BackColor = Color.Gray;
bFile.Appearance.BackColor2 = Color.White;
bFile.Appearance.GradientMode = LinearGradientMode.Vertical;
bFile.Appearance.Options.UseBackColor = true;
The table below contains properties that you can use to customize bar and dock controls/components.
Skin Editor
You can also use Skin Editor to customize a specific visual element's appearance.
TIP
To identify a visual element, hold Ctrl and click it.
See the following help topic for more information on how to create a custom skin and apply it to an application: WinForms Skin Editor. | https://docs.devexpress.com/WindowsForms/5482/controls-and-libraries/ribbon-bars-and-menu/common-features/look-and-feel | CC-MAIN-2020-34 | en | refinedweb |
Difference between revisions of "TextRework"
Revision as of 18:31, 29 March 2012
Proposal for restructuring how Inkscape handles text.
This is a work in progress...
Motivation
Inkscape currently handles all text layout using code written for the never-adopted "flowed-text" SVG1.2 spec. There are several problems with this code:
- The code does not handle normal SVG text well.
- It is extremely complex. It mixes up line-breaking with layout code.
- Inkscape's "flowed-text" is not recognized by any other SVG renderer.
Strategy
The major goal is to reorganize the code to treat all text as normal SVG text. Three special cases would be indicated by inkscape namespace tags giving a total of four types of text:
Normal SVG text
Standard SVG text with all properties ("x", "y", "letter-spacing, etc.).
The only special text manipulating provided by Inkscape would be that a carriage return would create a new <tspan> starting directly below the first "y" value of the <text> object and a distance of "line-spacing" below the <text> or <tspan> just before the carriage return (this should work for both right-to-left text and left-to-right text).
One problem with the current Inkscape implementation is that text in a <text> object is automatically moved to a <tspan>. This can cause problems with animated SVG's.
A block of text should be contained in one <text> element. Multiple lines should be handled in <tspan>s. This allows multi-line text selection (for example, you may want to change an attribute on a string of text that spans parts of two lines).
Normal Inkscape text
Normal SVG text with the added property that lines are handled by Inkscape, allowing insertion of new lines in the middle of the text. This is equivalent to the current Inkscape manipulation of text.
New attributes:
- inkscape:text-property="manage_lines" : Indicates type
- inkscape:text-role="line" : Marks start of each new line (replaces sodipodi:role="line").
One routine would be responsible for manipulation of lines, outputting normal SVG which would then be rendered by the normal SVG text routines.
Inkscape flowed text
Text where line-breaking would be handled by Inkscape to fit in a rectangular box, equivalent to the current Inkscape flowed text. "letter-spacing" and "word-spacing" could be used to justify the text.
Hyphenation, indentation, etc. could be added here. Note that SVG2 may add some CSS text properties that could be useful here.
New attributes:
- inkscape:text-property="flowed_text"
- inkscape:text-align="justify"
- inkscape:text-box=href(#xxx) or <inkscape:flowregion>
One routine would be responsible for line-breaking, outputting normal SVG which would then be rendered by the normal SVG text routines. Line spacing is controlled only by the "line-spacing" property. This routine would be responsible for verifying new lines fit in the box.
Inkscape flowed into shape text
A more generalized case of normal Inkscape flowed text.
Switching between the four modes would be possible, preserving the rendering of the text as much as possible.
Note: SVG2 will include text flow into arbitrary shapes using CSS Regions and CSS Exclusions.
Implementation Steps
- Write new routines to handle plain SVG text. Check that it works for all imported text.
- Switch "normal" Inkscape text (with lines handled by Inkscape) to use the new code.
- Rewrite line-breaking code to output standard SVG. | https://wiki.inkscape.org/wiki/index.php?title=TextRework&diff=prev&oldid=80654&printable=yes | CC-MAIN-2020-34 | en | refinedweb |
How to convert string in pandas series to lower case
Suppose you have the series stored in series_1 object, then to convert the strings in this series to lowercase, you will have to do something like this:
series_1.str.lower()
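A runnable example (assuming pandas is installed):

```python
import pandas as pd

series_1 = pd.Series(["Hello", "WORLD", "PyThOn"])
lowered = series_1.str.lower()   # element-wise lowercase via the .str accessor
print(lowered.tolist())          # ['hello', 'world', 'python']
```

Note that the original Series is left unchanged; .str.lower() returns a new Series.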
| https://www.edureka.co/community/46158/how-to-convert-string-in-pandas-series-to-lower-case | CC-MAIN-2020-34 | en | refinedweb |
shapeshift-scala
An asynchronous / non-blocking Scala library for the ShapeShift API.
Artifacts
The latest release of the library is cross-compiled for Scala 2.11, 2.12 and 2.13, and supports only Gigahorse with OkHttp backend as HTTP provider to avoid AHC/Netty version conflicts.
Previous versions of the library support only Scala 2.11 and were cross-compiled with different versions of Dispatch-Reboot, Gigahorse and Play-WS to fit every scenario.
Choose the appropriate artifact based on your needs:
and then, if you're using SBT, add the following line to your build file:
libraryDependencies += "com.alexdupre.shapeshift" %% "<artifactId>" % "<version>"
Initialization
To initialize the library you just need to instantiate a client with the chosen provider.
For Gigahorse:
import com.alexdupre.shapeshift.ShapeShiftAPI
import com.alexdupre.shapeshift.provider.ShapeShiftGigahorseProvider

val client: ShapeShiftAPI = ShapeShiftGigahorseProvider.newClient()
For Dispatch:
import com.alexdupre.shapeshift.ShapeShiftAPI
import com.alexdupre.shapeshift.provider.ShapeShiftDispatchProvider

val client: ShapeShiftAPI = ShapeShiftDispatchProvider.newClient()
For Play:
import com.alexdupre.shapeshift.ShapeShiftAPI
import com.alexdupre.shapeshift.provider.ShapeShiftPlayProvider

val client: ShapeShiftAPI = ShapeShiftPlayProvider.newClient(WS.client)
Usage
The ShapeShiftAPI trait contains all the public methods that can be called on the client object.
E.g., if you would like to trade Bitcoin for Ether, you can obtain a deposit address with the following code:
import com.alexdupre.shapeshift.models._

val market = Market("BTC", "ETH")
val outputAddress = "0xB368D70EF5F3466c8Fbd5B66cebE384D4E4C3d27"
val order: Future[OpenOrder] = client.createOpenTransaction(market, outputAddress)
| https://index.scala-lang.org/alexdupre/shapeshift-scala/shapeshift-gigahorse/2.1?target=_2.12 | CC-MAIN-2020-34 | en | refinedweb |
Technical Support
On-Line Manuals
RL-ARM User's Guide (MDK v4)
#include <net_config.h>
U8* http_get_var (
U8* env, /* Pointer to a string of environment variables */
void* ansi, /* Buffer to store the environment variable value */
U16 maxlen ); /* Maximum length of environment variable value */
The http_get_var function processes the string env,
which contains the environment variables, and identifies where the
first variable ends. The function obtains and stores the first
variable and its value into the buffer pointed by ansi, in
ansi format.
The maxlen specifies the maximum length that can be stored
in the ansi buffer. If the decoded environment variable value
is longer than this limit, the function truncates it to maxlen
to fit it into the buffer.
The http_get_var function is a system function that is in
the RL-TCPnet library. The prototype is defined in net_config.h.
Note
The http_get_var function returns a pointer to the
remaining environment variables to process. It returns NULL if there
are no more environment variables to process.
cgi_process_data, cgi_process_var
void cgi_process_var (U8 *qs) {
U8 var[40];
do {
/* Loop through all the parameters. */
qs = http_get_var (qs, var, 40);
/* Check the returned string, 'qs' now points to the next. */
if (var[0] != 0) {
/* Returned string is non 0-length. */
if (str_scomp (var, "ip=") == __TRUE) {
/* My IP address parameter. */
sscanf (&var[3], "%bd.%bd.%bd.%bd",&LocM.IpAdr[0],&LocM.IpAdr[1],
&LocM.IpAdr[2],&LocM.IpAdr[3]);
}
else if (str_scomp (var, "msk=") == __TRUE) {
/* Net mask parameter. */
sscanf (&var[4], "%bd.%bd.%bd.%bd",&LocM.NetMask[0],&LocM.NetMask[1],
&LocM.NetMask[2],&LocM.NetMask[3]);
}
else if (str_scomp (var, "gw=") == __TRUE) {
..
}
}
} while (qs != NULL);
}
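The do/while above walks the environment string one name=value pair at a time until http_get_var returns NULL. The same walk, sketched in Python with a hypothetical get_var helper (not part of RL-TCPnet), just to show the control flow:

```python
def get_var(env):
    """Pop the first 'name=value' pair; return (pair, remainder or None)."""
    pair, sep, rest = env.partition("&")
    return pair, (rest if sep else None)

env = "ip=192.168.0.100&msk=255.255.255.0&gw=192.168.0.1"
pairs = []
while env is not None:
    var, env = get_var(env)
    if var:                          # non-empty pair, like the var[0] != 0 check above
        name, _, value = var.partition("=")
        pairs.append((name, value))
print(pairs)
```

Each iteration consumes one variable and advances the cursor, exactly as the returned pointer does in the C example.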
Entrails of animals. Meaning of dream and numbers.
Find out what it means to dream entrails of animals. The interpretations and numbers of the Neapolitan cabala.
ox entrails 56
Meaning of the dream: incomes
chicken entrails 28
Description: professional achievements
sheep entrails 39
Interpretation of the dream: blessing on his work
entrails (tripe) 48
Translation: good wishes
bell of animals 72
Dream description: new openings in the labor
appearance of animals 42
Meaning: wise resolutions
trampling of animals 4
Translation of the dream: affliction secret
landing of animals 5
Interpretation: new aspirations
burr of animals 55
Sense of the dream: period of weakness
row of animals 15
What does it mean: unexpected change
multitude of animals 35
Meaning of the dream: health recovery
Watering animals 41
Description: concerns
embalm animals 49
Interpretation of the dream: health improvement
horn animals 80
Translation: good deals
fear of animals 7
Dream description: astuteness
grouping of animals 54
Meaning: sizes future
Mating of animals 68
Translation of the dream: winnings
count animals 48
Interpretation: rapid conquest
emerge of animals 8
Sense of the dream: important days
breed animals 20
What does it mean: desire to accumulate
blind animals 74
Meaning of the dream: extra gain
defend animals 41
Description: even temperament
donate animals 16
Interpretation of the dream: good and lasting solutions
select animals 9
Translation: friendship dangerous
cross animals 74
Dream description: revenge and satisfaction
drowning animals 19
Meaning: extravagant actions
kissing animals 75
Translation of the dream: illusions to forget
burning animals 24
Interpretation: lack of organization
draw animals 3
Sense of the dream: good health
requisitioning animals 71
What does it mean: unbridled ambition
dissect animals 86
Meaning of the dream: life unsafe
hugs animals 18
Description: obstinacy
pair animals 28
Interpretation of the dream: new commitments
poisoning animals 82
Translation: happy omen only for the rich
shoeing animals 62
Dream description: tranquility and safety
beat animals 26
Meaning: violent passion
breastfeed animals 42
Translation of the dream: energy direct harm
embark animals 59
Interpretation: sacrifices rewarded
sling animals 27
Sense of the dream: manic depression
breath of animals 15
What does it mean: to avoid friction
agony of animals 16
Meaning of the dream: opposition in business
herd of various animals 20
Description: profitable activities
appease animals 52
Interpretation of the dream: dangerous distraction
ensure animals 41
Translation: unrealizable desires
crouch animals 50
Dream description: originality
mark animals 23
Meaning: triumph of envious people
tie animals 6
Translation of the dream: uncertainty and doubt
race of animals 55
Interpretation: love of a great lady or inheritance succession
rent animals 39
Sense of the dream: jealousy declared
tormenting animals 20
What does it mean: bad luck and losses
love animals 32
Meaning of the dream: ill health
nursery rhyme of animals 67
Description: mood changes
import of animals 38
Interpretation of the dream: state of tormented and restless soul
seize animals 42
Translation: hard work
washing up animals 17
Dream description: fatigue resistance
punish animals 76
Meaning: favors granted
kidnap animals 6
Translation of the dream: hopes dashed | https://www.lasmorfianapoletana.com/en/meaning-of-dreams/?src=entrails+of+animals | CC-MAIN-2020-34 | en | refinedweb |
The Arguments protocol has moved from the objectbase library to the defobj library. If you were subclassing from the Arguments class you will now need to subclass from Arguments_c and import the defobj.h header file.
The ListShuffler class has been moved from simtools to collections. You must now import collections.h. Alternatively you can use the new [beginPermuted: aZone] index creation method on a collection from the Collection protocol in place of the ListShuffler protocol.
The -setLabels:: and -setColors:: methods in the Histogram protocol now each must be given a new count: (unsigned)labelCount and count: (unsigned)colorCount argument, respectively. So, for example, the code formerly in MarketObserverSwarm.m in the market application was:
| http://download-mirror.savannah.gnu.org/releases/swarm/docs/set/x1399.html | CC-MAIN-2020-34 | en | refinedweb |
#include <rtl.h>
int accept (
int sock, /* Socket descriptor */
SOCKADDR *addr, /* Pointer to address structure */
int *addrlen); /* Length of address structure */
The accept function accepts a connection request queued for
a listening socket. If a connection request is pending, accept
removes the request from the queue, and a new socket is created for
the connection. The original listening socket remains open and
continues to queue new connection requests. The socket sock
must be a SOCK_STREAM type socket.
In blocking mode, which is enabled if the system detects an RTX
environment, this function waits for a connection request. In
non-blocking mode, you must call the accept function again if the
error code SCK_EWOULDBLOCK is returned.
The argument sock specifies a socket descriptor returned
from a previous call to socket.
The argument addr is a pointer to the SOCKADDR structure
that will receive the connecting node's IP address and port number.
The argument addrlen is a pointer to the address length. It
should initially contain the amount of space pointed to by
addr. On return it contains the actual length of the address
returned in bytes.
The accept function is in the RL-TCPnet library. The
prototype is defined in rtl.h.
Note
The accept function returns the following result:
connect, ioctlsocket, listen
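RL-TCPnet follows the BSD sockets model, so the calling pattern around accept is the familiar socket/bind/listen/accept sequence. For comparison, the same flow with standard BSD sockets in Python (loopback only, with both ends in one process):

```python
import socket

# Server side: create a stream socket, bind, and listen.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
srv.listen(1)                        # queue up to 1 pending connection
port = srv.getsockname()[1]

# Client side (normally another host): connect to the listener.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))

# accept removes the pending request and returns a NEW socket for the
# connection; the peer address plays the role of the SOCKADDR out-parameter.
conn, addr = srv.accept()
print(addr[0])
conn.close(); cli.close(); srv.close()
```

As in RL-TCPnet, the original listening socket stays open and keeps queuing new connection requests.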
(React) dynamic component conditional setstate
I'm trying to create a dynamic component that matches the data index with the URL parameter blogId I get from the router.
Here I have the router parameters and send the props to the component
<Route path='/blog/:blogId/:blogTitle' render={() => <BlogPost blogData={this.state.blogData} /> } />
then on the component I set the initial state and I try to render the data that matches the index of the data, but I get an error of the component repeatedly calling setState and infinite loops.
constructor(props) {
  super(props);
  this.state = {
    blogId: '',
    blogTitle: '',
    blogData: []
  }
}
render() {
  const { params: { blogId, blogTitle } } = this.props.match;
  // so i map here to get the index and set the conditional to set the new state
  // but I don't know where or how exactly
  this.props.blogData.map((val, idx) => (
    idx == blogId ? this.setState({ blogData: val }) : null
  ))
  return (
    <div>
      <BlogView title={this.state.blogData.title} />
    </div>
  )
}
You shouldn't call setState inside the render() function. The reason is that when you set state, the component needs to re-render to show the user the updated data, and then it re-renders again and again. Instead, do it in the componentDidMount lifecycle method so that it will run only once:
componentDidMount() {
  const { params: { blogId, blogTitle } } = this.props.match;
  this.props.blogData.map((val, idx) => (
    idx == blogId ? this.setState({ blogData: val }) : null
  ))
}
The reason you are getting an infinite loop is that you are calling setState in your render method, which causes a re-render, which causes a setState, which causes a re-render, and so on.
Try moving this part out of the render method.
this.props.blogData.map((val, idx) => (
idx == blogId ? this.setState({blogData:val }) : null ))
If you call setState in the render like this, you will cause infinite loops. You don't need to setState once you find the blog; just use its title in BlogView after a find:
render() {
  const { params: { blogId, blogTitle } } = this.props.match;
  const blog = this.props.blogData.find((val, idx) => idx === blogId);
  return (
    <div>
      <BlogView title={blog.title} />
    </div>
  );
}
Add a conditional block before calling setState inside render: if both values are the same, don't call setState.
render() {
  const { params: { blogId, blogTitle } } = this.props.match;
  this.props.blogData.map((val, idx) => (
    (idx == blogId && val != this.state.blogData) ? this.setState({ blogData: val }) : null
  ))
  return (
    <div>
      <BlogView title={this.state.blogData.title} />
    </div>
  )
}
Yes I finally did it,
I set the state in componentDidMount():
componentDidMount() {
  const { params: { blogId, blogTitle } } = this.props.match;
  this.setState({ blogId, blogTitle });
}
and the comparison in the render, and it worked perfectly:
const blogger = this.props.blogData.map((val, idx) => (
  idx == this.state.blogId
    ? <BlogView title={val.title} body={parse(val.body)} img={val.thumb} />
    : null
))
| http://thetopsites.net/article/58075403.shtml | CC-MAIN-2020-34 | en | refinedweb |
The following class is accepted by JustIce (BCEL 5.2), whereas Sun verifier (correctly) rejects it:
Compiled from "Test.java"
public class Test extends java.lang.Object{
public Test();
Code:
0: aload_0
1: invokespecial #9; //Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: aload_0
1: iconst_0
2: caload
3: return
}
In the "main" method we are trying to read a char from an array of Strings and this is of course type-incorrect.
My take on the solution is that in the org.apache.bcel.verifier.structurals.InstConstraintVisitor class, in the visitCALOAD method, there are only two checks being made: whether the index is of int type, and whether there is really an array on the stack. What is missing is a check of whether the array holds elements of 'char' type.
Created attachment 19195 [details]
the class illustrating the problem | https://bz.apache.org/bugzilla/show_bug.cgi?format=multiple&id=41069 | CC-MAIN-2020-34 | en | refinedweb |
I've been looking for days but couldn't find a way to change the tint color of a toolbar item on Android like on iOS
UINavigationBar.Appearance.TintColor = Color.FromHex("#FFFFFF").ToUIColor();
I found and applied lots of answers including defining your own style in xml but they just change back button color, title text color so on.
Is there a proper way to change toolbar items tint?
Any help would be appreciated.
Answers
@mchts
PCL CodeBehind:
using FormsToolkit;
using Xamarin.Forms.PlatformConfiguration;
using Xamarin.Forms.PlatformConfiguration.iOSSpecific;
using Models;
Create a new class in Models
Works with android, ios and uwp
@mchts
You can use the more general approach of using the NavigationPage method.
So if you are using a NavigationPage, then you can do something like this:
NavigationPage navPage = new NavigationPage {
BarBackgroundColor = Color.FromHex("#1FBED6"),
BarTextColor = Color.FromHex("#000000")
}
For more details:
@jezh I already tested this, it does not work with xamarin forms 2.5 +
@Anderson_Vieira
This falls into platform specific, you may have to mess about with the Android theme and it seems to depend on the Android API level, I don't think you'll achieve this via Forms.
You could try setting the opacity/color with platform specifics? could try setting the opacity level with BarSelectedItemColor
This works for iOS but didnt do the trick for android
This falls into platform specific, you may have to mess about with the Android theme.> @mchts said:
Did you try the other suggestions I mentioned for Android? you'll probably have to use Platform Specifics or the Android theme to achieve this.
@NMackay I tried platform specifics as @Anderson_Vieira suggested before but couldnt find a way to colorize icons (at least for android). The other two solutions you suggested are about navigation bar. Im trying to colorize icons
@NMackayI use the code I sent and it works perfectly, I did not have to work on the platforms individually.
@mchts That code is just to tinker with the title bar
Do we have to necessarily install the Forms Toolkit NuGet package? Also this is somewhat old. The latest version is at least 8 months old. I find Xamarin.Forms.MessageCenter. Can I substitute MessagingService with this MessageCenter? Or are they two completely different ones?
@ShantimohanElchuri
The two are completely different.
I recommend using version 1.1.18 because it works fine on android, uwp and ios to change the color of the bar, maybe another version of the package does not work on any of the platforms, but you can test and see the result you will get.
This worked for me:
Toolbar.axml:
values/styles.xml:
Images are saved as png files.
Can you help me?
I'm having a hard time understanding how sending a message will change the color?
Who is listening for the message?
@Anderson_Vieira
PCL CodeBehind:
using FormsToolkit;
using Xamarin.Forms.PlatformConfiguration;
using Xamarin.Forms.PlatformConfiguration.iOSSpecific;
using Models;
protected override void OnAppearing()
{
    base.OnAppearing();
    var navigationPage = Parent as Xamarin.Forms.NavigationPage;
    // The original post is truncated here; presumably it goes on to send
    // the ChangeToolbar message for a platform project to handle.
}
Create a new class in Models
namespace ProjetoTeste.Droid.Models
{
public class MessageKeys
{
public const string ChangeToolbar = "ChangeToolbar";
public const string ToolbarColor = "ToolbarColor";
}
}
Please help me: how will sending a message change the color?
Who is listening for the message?
I want to change the color of the toolbar icon in the PCL project, because we are sending the color from the backend. Please help me with this.
Source: https://forums.xamarin.com/discussion/comment/378070
#include <qgsofflineediting.h>
Definition at line 32 of file qgsofflineediting.h.
Definition at line 37 of file qgsofflineediting.h.
Definition at line 55 of file qgsofflineediting.cpp.
Definition at line 60 of file qgsofflineediting.cpp.
Convert the current project for offline editing.
Convert the current project to an offline project; returns the offline project file path.
Definition at line 68 of file qgsofflineediting.cpp.
Return true if the current project is an offline project.
Definition at line 190 of file qgsofflineediting.cpp.
Emit a signal that the next layer of numLayers has started processing.
Emit a signal that sets the mode for the progress of the current operation.
Emit a signal that processing has started.
Emit a signal that processing of all layers has finished.
Emit a signal with the progress of the current mode.
Synchronize to remote layers.
Definition at line 195 of file qgsofflineediting.cpp.
Emitted when a warning needs to be displayed.
Source: https://api.qgis.org/2.8/classQgsOfflineEditing.html
I don’t really understand the solution. The first section of code is the answer provided.
def x_length_words(sentence, x):
    words = sentence.split(" ")
    for word in words:
        if len(word) < x:
            return False
    return True
And this is my solution:
def x_length_words(sentence, x):
    new = sentence.split(" ")
    for i in new:
        if len(i) >= x:
            return True
    return False
print(x_length_words("i like apples", 2))
# should print False
print(x_length_words("he likes apples", 2))
# should print True
My code returned True and True, but it should return False and True.
Since the length of "i" is less than 2, it should return False, but it returned True.
Any help is appreciated. Thanks.
Source: https://discuss.codecademy.com/t/what-is-the-logic-behind-returning-true-or-false-first/459896/12
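The difference between the two versions is where the early return happens: the accepted answer only returns True after the loop has checked every word, while the second version returns True as soon as it finds one word that is long enough. The same "every word must be at least x long" logic can also be written with Python's built-in all(), which is a compact equivalent of the accepted answer:

```python
def x_length_words(sentence, x):
    # all() becomes False as soon as one word is shorter than x,
    # which matches returning False inside the loop in the accepted answer.
    return all(len(word) >= x for word in sentence.split(" "))

print(x_length_words("i like apples", 2))    # False: "i" has length 1
print(x_length_words("he likes apples", 2))  # True: every word is >= 2
```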
Using Energy Modes with Bluetooth Stack
Introduction
This document discusses different energy modes available on EFR32 devices and demonstrates when and how to switch between energy modes while using Silicon Labs Bluetooth Stack. It also provides guidelines to save the most energy while ensuring that the stack is working.
EFR32 devices are designed for energy saving. Energy Management Unit (EMU) ensures that only actively needed peripherals are running and consuming current. For example, while waiting for an external signal or for the result of an A/D conversion, all clocks can be stopped (except ultra-low frequency clock) and all clocked peripherals can be shut down to save energy, but the device still stays in a state from which it can be woken up quickly.
Silicon Labs Bluetooth stack is designed so that it efficiently uses EMU. If configured properly, the stack puts the device into Deep Sleep mode and wakes it up only when needed, such as in communications windows or when a task is pending. This ensures minimal current consumption.
However, applications built on top of the stack can have different needs regarding energy modes. For example, while waiting for a USART transaction, the processor can be shut down, but the device cannot be put into Deep Sleep mode. This is why it is important to understand how the stack handles energy modes.
Energy Modes of EFR32
EFR32 devices support a number of energy modes ranging from EM0 Active to EM4 Shutoff. EM0 Active mode provides the highest number of features, enabling the CPU, Radio, and peripherals with the highest clock frequency. EM4 Shutoff Mode provides the lowest power state, allowing the part to return to EM0 Active on a wakeup condition. In between modes different peripherals are enabled/disabled providing a highly scalable energy profiling possibility. Figure 1 shows different peripherals of an EFR32 device with the lowest energy modes in which they are active. For example, the Low Energy UART runs in EM0, EM1, and EM2 energy modes.
Note that the peripherals and their energy mode needs are highly dependent on the specific device.
Peripherals of an EFR32 Device
For more information about energy modes, see the Reference Manual of your device and AN0007 – Energy Modes.
Silicon Labs Bluetooth Stack needs the following peripherals to work:
- RAM memory – always needed for data retention
- RTCC – always needed for sleep timing
- LDMA – used for handling BGAPI commands in NCP mode
- UART – used for receiving/transmitting BGAPI commands/responses in NCP mode
- PROTIMER – used for protocol timing when receiving/transmitting packets
- RADIO – used for receiving/transmitting packets
RAM and RTCC are always needed while running the stack. RAM retains application data and RTCC ensures the working of soft timers and that the device will be woken up in time when a communication window opens and packets have to be received/transmitted. RAM needs EM3 or higher to work, while RTCC needs EM0/EM1/EM2 or EM4H to work from LFXO/LFRCO (ULFRCO is not supported by stack). This means that the device can be put, at most, into EM2 Deep Sleep mode to keep the stack working if higher energy modes are not needed.
EM0 Active mode is needed while tasks are running on the processor and/or radio is receiving/transmitting. EM1 Sleep mode is needed if there are no tasks running and no radio communication, but some peripheral is active that needs EM1 Sleep mode (e.g., USART, see Figure 1). In any other use case, the device can go into EM2 Deep Sleep mode.
The stack is designed to switch between EM0 Active and EM2 Deep Sleep modes if Deep Sleep is enabled and between EM0 Active and EM1 Sleep modes if Deep Sleep is disabled.
Enabling EM2 Deep Sleep in the Stack
Silicon Labs Bluetooth stack can be configured by either putting the device into EM1 Sleep mode or into EM2 Deep Sleep mode when EM0 Active mode is not needed. If sleep mode is not configured, EM1 is used by default.
To enable Deep Sleep in a C project, add this line to the gecko_configuration structure, which will be passed to gecko_init():
.sleep.flags = SLEEP_FLAGS_DEEP_SLEEP_ENABLE;
To disable Deep Sleep mode, set the sleep flags to 0:
.sleep.flags = 0;
Important: Bluetooth connections need at least 500 ppm timing accuracy. If LFRCO is used as a sleep clock (e.g., because a 32 kHz crystal is not connected), deep sleep mode has to be disabled because LFRCO cannot meet the accuracy. PLFRCO (available on some devices) may be used as sleep clock, as it is more accurate, but in this case the sleep_clock_accuracy has to be set to 500 ppm instead of the default 100 ppm, which can only be achieved with LFXO.
.bluetooth.sleep_clock_accuracy = 500,
In the SoC – Empty Software Examples provided with Silicon Labs Bluetooth SDK, Deep Sleep is enabled by default (if LFXO is present).
The stack puts the device into EM1 Sleep or into EM2 Deep Sleep mode every time the processor is not needed and starts a timer to wake it up when needed, for example when a communication window opens or a task is pending.
The default template of a Bluetooth stack based application looks like the following:
while (1) {
    evt = gecko_wait_event();
    switch (BGLIB_MSG_ID(evt->header)) {
        /* handling of different stack events */
    }
}
The function gecko_wait_event() automatically puts the device into EM1 Sleep / EM2 Deep Sleep mode after all tasks are done, and sets a timer to wake it up when needed. The function returns only when a stack event (e.g., connection established event) is raised. After this, the application can handle the event and perform other tasks.
To learn more about how to handle the main loop, see Scheduling application tasks while running BLE stack.
Wake Up from EM1/EM2 by Application
Sometimes the application needs to run even if there is no Bluetooth event, for example to periodically poll the state of some peripheral.
The device can be woken up by the application in two ways, as follows:
- Setting up a software timer in the stack with gecko_cmd_hardware_set_soft_timer(). This will wake up the device when the timer expires, generate a stack event (evt_hardware_soft_timer), and return from gecko_wait_event().
- Setting up any interrupt and triggering an external stack event from the interrupt handler with gecko_external_signal(signal). Now the interrupt wakes up the device and forces the stack to raise a new event (evt_system_external_event). A new event is raised and gecko_wait_event() returns.
For more information about the soft timer see the API Reference Manual. For more details about gecko_external_signal(), see Silicon Labs Bluetooth C Application Developer's Guide.
Temporarily Disable EM2 Deep Sleep Mode
The Deep Sleep enable bit in the stack configuration cannot be dynamically changed. In other words, after Deep Sleep is enabled/disabled, it cannot be withdrawn. However, there are two ways to temporarily disable going into EM2 Deep Sleep mode:
- in SoC mode you can use the sleep driver
- in NCP mode you can use the wake up pin.
With the sleep driver, EM2 Deep Sleep mode can be disabled (blocked) temporarily by using SLEEP_SleepBlockBegin(sleepEM2). To re-enable EM2 Deep Sleep mode, use SLEEP_SleepBlockEnd(sleepEM2). While EM2 is blocked, the stack will switch between EM0 and EM1 temporarily.
To access these functions, sleep.c and sleep.h have to be added to your project, and sleep.h has to be included in your source file. sleep.c and sleep.h are added to Bluetooth projects by default because they are needed by the stack (except if you are using an old stack version, where sleep.c is precompiled into the stack).
Note that multiple calls of SLEEP_SleepBlockBegin() require an equal number of calls of SLEEP_SleepBlockEnd(). Every SLEEP_SleepBlockBegin() increases the corresponding counter and every SLEEP_SleepBlockEnd() decreases it.
In NCP mode, a wake up pin can be defined to disable EM2 Deep Sleep mode temporarily. While driving this pin high (or low, depending on the configuration), going into EM2 mode will be blocked. For more information about configuring the wake up pin, see AN1042: Using the Silicon Labs Bluetooth® Stack in Network Co-Processor Mode.
Temporary blocking of EM2 Deep Sleep can be useful, for example, when USART is used for a limited time. Deep Sleep has to be disabled to get the USART controller work and to receive messages. However, when not needed any more, Deep Sleep can be re-enabled to save energy.
Putting Device into EM3 Stop Mode
The Bluetooth stack does not work in EM3 Stop mode. However, if there are no connections alive and no advertisement/scanning is needed for a while, the device can be put into EM3 stop mode to save energy.
Since EM3 mode is blocked by default, this can be done by calling SLEEP_SleepBlockEnd(sleepEM3). The next call of SLEEP_Sleep() in the stack will put the device into EM3 mode, or application can call SLEEP_Sleep() directly. The device can be woken up by any interrupt. Call SLEEP_SleepBlockBegin(sleepEM3) within the interrupt handler to let the stack work again normally.
Note, however, that while EM2 Deep Sleep mode means a huge energy saving compared to EM1 Sleep mode, in EM3 Stop mode the current consumption drops only by around a tenth of a microampere compared to EM2 Deep Sleep mode. See the data sheet of your device.
Putting Device into EM4 Hibernate / EM4 Shutoff Mode
If the application does not need to be operational for a while, the device can be put into EM4 mode, the lowest possible energy mode. In this mode, nearly everything is shut down and the current consumption is around a hundred nanoamperes. However, to wake up the device from this state, you need to reset the device. That is, no data is retained from the previous state and the stack is reinitialized.
EM4 Hibernate and EM4 Shutoff are two types of EM4 modes. The most important difference is that RTCC can run in EM4 Hibernate mode, while it cannot run in EM4 Shutoff mode. Also, there is a 128-byte RAM retention possibility in EM4 Hibernate mode. To switch between EM4 Hibernate and EM4 Shutoff, use the following initialization:
EMU_EM4Init_TypeDef init_EM4 = EMU_EM4INIT_DEFAULT;
init_EM4.em4State = emuEM4Hibernate;   /* or: init_EM4.em4State = emuEM4Shutoff; */
EMU_EM4Init(&init_EM4);
For detailed information about EM4 initialization, see.
To put the device into EM4 mode, use the function SLEEP_ForceSleepInEM4().
Be aware that, if the device goes into EM4 very soon after reset, it may be hard to attach to the target using the debugger, and you can easily lock yourself out. If you get locked out, start Simplicity Commander (C:\SiliconLabs\SimplicityStudio\v4\developer\adapter_packs\commander\commander.exe), connect to the adapter, select the Flash tab, and click "Unlock debug access".
Running RTCC in EM4 Hibernate Mode
In EM4 Hibernate mode, RTCC can run continuously.
- If RTCC is running from LFXO add the following to the EM4 initialization:
init_EM4.retainLfxo = 1;
- If RTCC is running from LFRCO add the following to the EM4 initialization:
init_EM4.retainLfrco = 1;
- If RTCC is running from ULFRCO add the following to the EM4 initialization:
init_EM4.retainUlfrco = 1;
To avoid the reset of the RTCC timer upon wake up from EM4, set the reset mode to LIMITED:
RMU_ResetControl(rmuResetSys, rmuResetModeLimited);
RMU_ResetControl(rmuResetPin, rmuResetModeLimited);
In normal operation, when the Bluetooth stack runs, the RTCC runs from LFXO or LFRCO.
Wake up from EM4 Hibernate / EM4 Shutoff Mode
The device can wake up from EM4 mode, as follows:
- Driving low the reset pin
- State change of some dedicated pins
- Cryotimer interrupt
Pins dedicated for EM4 wake up are listed in EFR32BG1 Blue Gecko Bluetooth Smart SoC Family Data Sheet. To enable EM4 wake up on these pins, use the following template:
GPIO_PinModeSet(gpioPortF, 7, gpioModeInputPullFilter, 1);
GPIO_EM4EnablePinWakeup(GPIO_EXTILEVEL_EM4WU1, _GPIO_EXTILEVEL_EM4WU1_DEFAULT);
GPIO_IntClear(_GPIO_IFC_EM4WU_MASK | _GPIO_IFC_EXT_MASK);
NVIC_ClearPendingIRQ(GPIO_ODD_IRQn);
NVIC_EnableIRQ(GPIO_ODD_IRQn);
NVIC_ClearPendingIRQ(GPIO_EVEN_IRQn);
NVIC_EnableIRQ(GPIO_EVEN_IRQn);
Cryotimer can also be used to wake up the device. For example, to wake up the device 4 seconds after putting it into EM4 mode, use the following initialization:
CRYOTIMER_Init_TypeDef cryoInit = CRYOTIMER_INIT_DEFAULT;
cryoInit.enable = false;
cryoInit.em4Wakeup = true;
cryoInit.osc = cryotimerOscLFXO;
cryoInit.period = cryotimerPeriod_128k;
CRYOTIMER_Init(&cryoInit);
CRYOTIMER_IntEnable(1);
CRYOTIMER_IntClear(1);
NVIC_ClearPendingIRQ(CRYOTIMER_IRQn);
NVIC_EnableIRQ(CRYOTIMER_IRQn);
/* ... */
CRYOTIMER_Enable(1);
SLEEP_ForceSleepInEM4();
In this example, the Cryotimer runs from LFXO (remember to set init_EM4.retainLfxo = 1). LFXO has a 32 kHz clock; consequently, a 128 k period (131072 cycles / 32768 Hz = 4 s) will result in overflow in 4 seconds. Remember to define the IRQ handler for the Cryotimer:
void CRYOTIMER_IRQHandler(void)
{
    CRYOTIMER_IntClear(1);
}
To differentiate EM4 wake up from other reset causes (e.g., power on reset, Watchdog reset) the following statement can be used:
#if defined(RMU_RSTCAUSE_EM4WURST)
if ((RMU_ResetCauseGet() & RMU_RSTCAUSE_EM4WURST) != 0)
#elif defined(RMU_RSTCAUSE_EM4RST)
if ((RMU_ResetCauseGet() & RMU_RSTCAUSE_EM4RST) != 0)
#endif
{
    /* EM4 wake up */
}
else
{
    /* other reset cause */
}
Example
This guide has a related code example here: Using energy modes with Bluetooth stack.
Source: https://docs.silabs.com/bluetooth/3.0/general/system-and-performance/using-energy-modes-with-bluetooth-stack
DMA_CfgChannel_TypeDef Struct Reference
Configuration structure for a channel.
#include <em_dma.h>
Configuration structure for a channel.
Field Documentation
◆ highPri
Select if channel priority is in the high or default priority group with respect to arbitration.
Within a priority group, lower numbered channels have higher priority than higher numbered channels.
◆ enableInt
Select if interrupt will be enabled for channel (triggering interrupt handler when dma_done signal is asserted).
It should normally be enabled if using the callback feature for a channel, and disabled if not using the callback feature.
◆ select

Channel control specifying the source of DMA signals.
If accessing peripherals, use one of the DMAREQ_nnn defines available for the peripheral. Set to 0 for memory-to-memory DMA cycles.
◆ cb
User definable callback handling configuration.
Refer to structure definition for details. The callback is invoked when specified DMA cycle is complete (when dma_done signal asserted). Callback is invoked in interrupt context, and should be efficient and non-blocking. Set to NULL to not use the callback feature.
- Note
- Referenced structure is used by the interrupt handler, and must be available until no longer used. Thus, in most cases it should not be located on the stack.
Source: https://docs.silabs.com/gecko-platform/latest/emlib/api/efm32zg/struct-d-m-a-cfg-channel-type-def
When Oracle announced Java's new release cadence - with a new feature version appearing every 6 months, the intent was to increase the pace of innovation within the platform.
Whilst most Java developers are sticking with the long-term support (LTS) releases - Java 8 and 11 - the 6-month cycle is allowing the cutting edge of Java to move faster than ever before.
Java's core value of backwards compatibility means that delivering new features is not straightforward. Any new feature must not cause problems for existing code, and must interact cleanly with the established language semantics and syntax. The maturity and age of Java means that there are many touchpoints that any new feature may have to consider.
The new release cadence makes this job a little easier - if a feature is not quite ready for a particular release then it can be retargeted at the following release and ship 6 months later.
Oracle has also introduced Incubating Features - that allow features to be shipped in a preliminary state. These are typically delivered as a module - in a namespace that makes it clear that the API is not final and may change. On the language syntax and semantics front, there are also Preview Features, which allow developers to try out a proposed feature in detail and provide feedback before the feature is finalized.
In this eMag we want to showcase some of the smaller features that have been delivered and reached their final form in recent releases. Language evolution comes in both large and small packages (and sometimes the smaller ones are really stepping stones that unlock bigger changes).
In the coming pages, we will cover a range of topics including an overview of what's changed between Java 8 and 12, the expanding capabilities of Local Variable Type Inference, the ability to execute single-file Java programs as scripts and Graal - the new JIT compiler for the JVM.

The articles in this eMag include:
- Running Single-file Programs without Compiling in Java 11 - Starting with Java SE 11, and for the first time in the programming language’s history, you can execute a script containing Java code directly without compilation. The Java 11 source execution feature makes it possible to write scripts in Java and execute them directly from the *inx command line.
- Java Feature Spotlight: Local Variable Type Inference - In Java Futures at QCon New York, Java Language Architect Brian Goetz took us on a whirlwind tour of some recent and future features in the Java Language. In this article, he dives into Local Variable Type Inference.
- Getting to Know Graal, the New Java JIT Compiler - Oracle have released Graal, a new JIT compiler for Java. For Java developers, Graal can be thought of as several separate but connected projects - it is a new JIT compiler for HotSpot, and also a new polyglot virtual machine, GraalVM. The initial release includes support for JVM bytecode and JavaScript with LLVM, Ruby and R in beta.
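The Local Variable Type Inference feature highlighted above (the var keyword, standard since Java 10) can be shown in a few lines. The class name VarDemo is just for illustration:

```java
import java.util.ArrayList;

public class VarDemo {
    public static void main(String[] args) {
        // The compiler infers ArrayList<String> from the initializer.
        var names = new ArrayList<String>();
        names.add("duke");
        names.add("graal");

        var total = 0;                 // inferred as int
        for (var name : names) {       // inferred as String
            total += name.length();
        }
        System.out.println(total);     // prints 9 (4 + 5)
    }
}
```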
InfoQ eMags are professionally designed, downloadable collections of popular InfoQ content - articles, interviews, presentations, and research - covering the latest software development technologies, trends, and topics.
Source: https://www.infoq.com/minibooks/java-platform-innovations/
Angular Event Binding Example
Here, I will show you how event binding works in Angular, including how to bind a click event. If you want to see an example of event binding in Angular, you are in the right place.

The example below uses Angular 8. Alright, let's dive into the steps.

You can easily use event binding in Angular 6, Angular 7, Angular 8 and Angular 9 applications.

In this post, I will give you a simple example of click event binding with a button and change event binding with a select box.

So, let's see the simple example below with a demo and output.
src/app/app.component.ts
import { Component } from '@angular/core';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: [ './app.component.css' ]
})
export class AppComponent {
  name = 'Angular Event Binding Example - ItSolutionStuff.com';

  types = [
    'User',
    'Admin',
    'Super Admin'
  ];

  chaneType(event) {
    console.log('Call on change event.');
    console.log(event);
  }

  buttonClick(event) {
    console.log('Call on click event.');
    console.log(event);
  }
}
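In the chaneType handler above, the event parameter is the raw DOM change event, and the selected option's text can be read from event.target.value. Here is a minimal, framework-free sketch of that extraction; the structural types are simplified stand-ins for the DOM's Event and HTMLSelectElement interfaces:

```typescript
// Simplified structural types standing in for the DOM interfaces:
interface SelectTarget { value: string; }
interface SelectChangeEvent { target: SelectTarget; }

// Pull the chosen option out of a change event, as chaneType could do:
function selectedValue(event: SelectChangeEvent): string {
    return event.target.value;
}

// Simulated event object, since there is no real DOM here:
const fakeEvent: SelectChangeEvent = { target: { value: "Admin" } };
console.log(selectedValue(fakeEvent)); // prints "Admin"
```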
src/app/app.component.html
<h1>{{ name }}</h1>
<div> Type :
  <select (change)="chaneType($event)">
    <option *ngFor="let i of types">{{ i }}</option>
  </select>
</div>
<button (click)="buttonClick($event)">Click Me!</button>
You can see the preview below:
You can see the output below:
Call on change event.
preview-98637320847d9dfba629b.js:1 Event {isTrusted: true, type: "change", target: select, currentTarget: select, eventPhase: 2, …}
preview-98637320847d9dfba629b.js:1 Call on click event.
preview-98637320847d9dfba629b.js:1 MouseEvent {isTrusted: true, screenX: 1095, screenY: 288, clientX: 33, clientY: 106, …}.
Source: https://www.itsolutionstuff.com/post/angular-event-binding-exampleexample.html
Technical Support
On-Line Manuals
RL-ARM User's Guide (MDK v4)
#include <rtl.h>
void *_calloc_box (
void* box_mem ); /* Start address of the memory pool */
The _calloc_box function allocates a block of memory from the memory pool that begins at the address box_mem and initializes the entire memory block to 0.

The _calloc_box function is in the RL-RTX library. The prototype is defined in rtl.h.
Note
The _calloc_box function returns a pointer to the allocated block if a block was available. If there was no available block, it returns a NULL pointer.
_alloc_box, _free_box, _init_box
#include <rtl.h>
/* Reserve memory for 32 blocks of 20 bytes. */
U32 mpool[32*5 + 3];

void membox_test (void) {
  U8 *box;
  U8 *cbox;

  _init_box (mpool, sizeof (mpool), 20);
  box = _alloc_box (mpool);
  /* This block is initialized to 0. */
  cbox = _calloc_box (mpool);
  ..
}
Source: https://www.keil.com/support/man/docs/rlarm/rlarm__calloc_box.htm
Data structure for complexe enumeration.
Project description
Catalog is a data structure for storing complex enumeration. It provides a clean definition pattern and several options for member lookup.
Supports Python 2.7, 3.3+
Install
pip install pycatalog
Usage
from catalog import Catalog

class Color(Catalog):
    _attrs = 'value', 'label', 'other'
    red = 1, 'Red', 'stuff'
    blue = 2, 'Blue', 'things'

# Access values as attributes
> Color.red.value
1
> Color.red.label
'Red'

# Call to look up members by attribute value
> Color('Blue', 'label')
Color.blue

# Calling without attribute specified assumes first attribute defined in `_attrs`
> Color(1)
Color.red
Attributes
_attrs: Defines names of attributes of members. (default: ['value'])
_member_class: Override the class used to create members. Create a custom member class by extending CatalogMember.
Methods
_zip: Return all members as a tuple. If attrs are provided as positional arguments, only those attributes will be included, and in that order. Otherwise all attributes are included followed by the member name.
> Color._zip()
(('red', 1, 'Red', 'stuff'), ('blue', 2, 'Blue', 'things'))
> Color._zip('value', 'label')
((1, 'Red'), (2, 'Blue'))
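A similar lookup-by-attribute pattern can be built on the standard library's Enum. This is a hypothetical stand-in sketch, not how pycatalog itself is implemented, and the lookup classmethod name is invented here:

```python
from enum import Enum

class Color(Enum):
    red = (1, "Red", "stuff")
    blue = (2, "Blue", "things")

    def __init__(self, num, label, other):
        # Each member's tuple value is unpacked into named attributes.
        self.num = num
        self.label = label
        self.other = other

    @classmethod
    def lookup(cls, value, attr="num"):
        # Mimics Catalog's call syntax: Color('Blue', 'label') -> Color.blue
        for member in cls:
            if getattr(member, attr) == value:
                return member
        raise ValueError(f"no member with {attr}={value!r}")

print(Color.lookup("Blue", "label"))  # Color.blue
print(Color.lookup(1).label)          # Red
```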
Changelog
1.2.0 - Add support for Python 2. (Wrong direction. I know)
1.1.1 - Add _zip method
1.0.0 - Initial build and packaging
Source: https://pypi.org/project/pycatalog/1.2.0/