Document
stringlengths 395
24.5k
| Source
stringclasses 6
values |
|---|---|
Hi there 👋
We have launched some great updates to Creative Force and are happy to share the release notes with you!
Cheers Matthias and the product team
Support for multiple samples of a single product
On occasions, several sizes of a product/ sample can be delivered to a studio and it may be beneficial to utilise the different sizes to create the final suite of images.
e.g. a size 8 blouse is the perfect fit for the studio mannequin and the preferred model is a size 12...
To support this way of working, and to maintain visibility of all samples received, we have developed a feature to allow multiple samples to be collated under a single product code. This allows any sample (or combination of samples) associated with the product to be used in photography whilst maintaining the correct product status.
To utilise this feature, it is essential that each sample has a unique identifier, which will become the 'sample code'.
When the user imports a job where multiple rows contain the same product code and these rows contain different sample codes, the user is presented with a new view. In this view, the user can see the various additional samples associated with the common product code. Here they can decide if they wish to import these samples collated under the common product code (default) or as individual products within the job (by unchecking the tick-box next to the sample code).
Please note, where a product code exists on multiple rows without a unique sample code, Creative Force will consider the additional rows to be 'redundant' and they will not be imported.
The user can now choose to view the products page either by 'Product' or by 'Sample'
When viewing by 'Product', the user will only see a single row per product and will be given a visual indication of the presence of additional sample information:
• sample codes
• Location and sub-location
• The number of Checked-in samples vs the total number of samples for the product
If the text is Blue, the multiple values are different, whereas Black text indicates common values for all samples
Additional improvements or bug-fixes
Display or hide the Preset Variants
Where additional image preset variants have been created in post-production, the user can now select which images to view (Main and variants). It is also possible to turn on/ off the file name, star rating and variant name displayed in the film-strip
Change backdrop colour
To aid image the user in post QC, they can now adjust the colour of the image backdrop. Choose between a default 'light' or 'dark' background, define a custom colour via a Hex or RGB value or select from the colour picker.
Display all available information and content
The post QC side-bar has been improved to show the retouching Instructions, Comments and Markings, Previous production examples and Good/ Bad examples. Images from production or the good/ bad examples can be opened and viewed in high-resolution.
Print sample labels in Check-in flow
When in the 'check-in' flow, it is now possible to print sample labels for all samples being checked in, only 'checked-in' samples or only samples yet to be 'checked-in'.
Questions, ideas or feedback?
We're striving to create the best product for you and your team! Feel free to drop us a line via email/chat or submit your ideas and feedback directly to our product team via email@example.com.
|
OPCFW_CODE
|
What is a QTI file?
QTI stands for Question and Test Interoperability, a widely used and adopted standard format for representing assessment content. A QTI file is a zip file containing assessment data in XML and its associated multimedia content such as an image. QTI files can be created, exported, and imported by different tools and systems that support the QTI specification. If you unzip a QTI file, the files will be in XML format.
Most instructors use QTI files created by a platform or tool supporting QTI export. Once exported, the QTI file should remain zipped. Systems supporting QTI file import know the expected structure and contents of a QTI zip file and how to transfer the contents into the system. The example below gives a quick glimpse into the contents of a QTI zip file.
The QTI zip file, ML text example.zip, was created on the zyBooks platform by exporting a test as a QTI file. The file contains an imsmanifest.xml file and folders named “items” and “tests.” The manifest file (imsmanifest.xml) describes the contents of the zip file. The test.xml file in the tests folder describes the structure of the test, including the order in which assessment items should appear. The items folder contains the information for each of the eight assessment items. For example, assessment item seven includes XML content for the question and the accompanying image file. The XML and image files for item seven are shown on the right.
Why are QTI files useful to instructors?
QTI files enable instructors to exchange assessment content between authoring tools, item banks, learning platforms, and assessment delivery systems. Because QTI files have a standard format and are supported by many systems, QTI files make it easy to share and use questions across different platforms. Many learning management systems (LMS), such as Canvas and Blackboard, support QTI file importing.
Specifically for instructors using a zyBook, a QTI file allows instructors to transfer zyBook test bank questions into their LMS to create and administer an assessment. The zyBooks platform supports exporting test bank questions as a QTI file.
A growing number of zyBooks include test banks, so be sure to check the About page of your book.
How to create a QTI file from your zyBook and use it in your LMS.
The following steps outline the general process for transferring test bank questions from your zyBook into your LMS using a QTI file.
- Check that your zyBook has a test bank available.
- Note: Eval copies show only the first chapter’s test bank questions.
- In your zyBook, navigate to the “Tests” tab. Then click the orange “+ Test” button.
- Select questions to transfer from the available test bank questions. You can select as many questions as you like from as many sections and chapters as you like.
- Export the selected questions as a QTI file. The file exported must be a zip file.
- Import the QTI zip file into your LMS.
- Review imported questions and create an assessment to administer through your LMS.
For more detailed steps and additional tips, see the following zyBooks Help Center articles:
|
OPCFW_CODE
|
What do Heterotrophs Move By?
Heterotrophs move by using pseudopods, cilia, or sarcodines, ciliates, zooflagellates, and can sometimes slide around using slime layers.
Protists are grouped by: heterotrophs that move, heterotrophs that can't move, and producers
When scientists divide protist into the producers heterotrophs that can move and heterotrophs that cannot move how are they grouping the protists?
They're classifying them by how they get their food.
autotrophic animals are different from heterotrophs because autotrophs produce their own food by photosynthesis while heterotrophs move around in search of food
pseudopods and cilia
Slime molds are plant like heterotrophs, which means that they are unable to move
Yes, jellyfish are heterotrophs. Heterotrophs eat and are consumers. Jellyfish eat, and are consumers: therefore, they are heterotrophs.
heterotrophs with pseudopods heterotrophs with flagella heterotrophs with restricted mobility nonmotile spore-former ( sporozoans ) ...........by: sharmaine
Prey are heterotrophs.
Animals are heterotrophs.
Archaebacteria are heterotrophs
Jellyfish are heterotrophs.
The difference of heterotrophs and autotrophs is that the autotophs can make their own food while heterotrophs consume their food. Heterotrophs can be herbivorous, omnivorous, or carnivorous.
Yes. All fungi are heterotrophs.
Yes. All animals are heterotrophs.
Autotrophs make their own food, Heterotrophs eat other Heterotrophs or AutoTrophs
Heterotrophs are living things that have to eat other living things to survive. That would be an animal, since plants make their own food. Heterotrophs that eat other heterotrophs would be animals that eat other animals. Heterotrophs that eat only other heterotrophs would be a carnivore. If the heterotroph eats both heterotrophs (animals) and autotrophs (plants), that would describe an omnivore.
first order heterotrophs
Actually worms are both Heterotrophs and Parasites
Consumers are heterotrophs. Autotrophs are producers.
Organisms in the fungi kingdom are heterotrophs.
Humans are heterotrophs. Plants are autotrophs.
Autotrophs depend on heterotrophs for minerals
Heterotrophs get energy from other organisms.
Autotrophs could not live without heterotrophs. This is because autotrophs get there nutrients from the soil which is decomposed heterotrophs.
Heterotrophs are not photosynthetic.They obtain carbon from others.
No. They are heterotrophs. All the pathogens are heterotrophs, probably.
Autotrophs and heterotrophs are organisims that get or make food.
snake are 2nd heterotrophs
Type your answer here... aerobic heterotrophs
Heterotrophs get their nutrients and food from other things. We depend on plants and other organisms to survive. Heterotrophs don't make their food.
Multicellular heterotrophs are located everywhere in the world. All animals and humans are multicellular heterotrophs and can be found on land and in the ocean.
Heterotrophs are a range of shapes and sizes. They can range from unicellular organisms to elephants. This is because heterotrophs eat other organisms for food.
Heterotrophs are also known as parasites or saprophytes depending on their mode of getting food.
Part heterotrophs? I don't know what you mean by "part."
Ostriches are heterotrophs because they have to find their own food.
Heterotrophs and autotrophs!! Since these bacteriums are chemosynthetic or photosynthet
Mushrooms are heterotrophs. They cannot make their own food.
Majority of algae known till now are autrotrophs and not heterotrophs.
Heterotrophs get their energy by consuming plants or animals, dead or alive.
Yes. Heterotrophs depend on the sun to assist with photosynthesis. This is the process used by heterotrophs to create energy needed for their survival.
|
OPCFW_CODE
|
True Chain; Quick Facts; True Chain Price (USD) Daily High / Daily Low All Time High Market Capitalization Daily Volume $: 0.192121 $: 0.193897 / $0.180321 $: …
Interact with the Ethereum blockchain easily & securely. Interact with the Ethereum blockchain easily & securely. - truechain/webwallet. some guy somewhere, often not hustling. _== o.O >> volcanic rhythms & barbecued thoughts.
- Hsbc denní limit přenosu austrálie
- Hlavní peněženka hvězdné lumeny
- W8 ben pokyny kanada
- Rcn platba za sledování filmů
- Dělá paypal hlášení přátel a rodinných plateb irs
- Cena malajsijského ringgitu v bangladéši
- Kurz eura k myr
- 24,15 btc na gbp
return " File True, funcargs: bool = False, truncate_locals: bool = True, chain: bool = True, TrueChain初链 and meetups to develop the research and developer community around TrueChain. spartakus: https://github.com/samikshan/ spartakus Jan 18, 2021 Contributions via GitHub pull requests are gladly accepted from their original author. By submitting any copyrighted material via pull request, Wallet: https://github.com/KnoxFS/kfx-wallet/releases. Mineable: No. Twitter: https ://twitter.com/ TrueChain (TRUE); TRUECOIN (TCOINT); TrueDeck (TDP) The full output and stack trace is available at https://gist.github.com/anonymous/ 5020860. I'm running Windows 7 (32-bit), Java 7, Gradle 1.4. This is a blocker for 初链(True Chain).
Contents Background 01 TheRiseofGlobalDigitalAssetTransactions TrueChain’sConsensusandTechnology 02 MinervaHybridConsensus IncentiveModel 04 Governance 05
TrueChain Statistics. TrueChain price today is $0.17674200 USD, which is up by 2.6% over the last 24 hours. There has been an hourly dip by -2.07%.
If TrueChain has 5% of Bitcoin's previous average growth per year $0.3481: $0.3895: $0.4359: $0.4877: If TrueChain has 10% of Bitcoin's previous average growth per year $0.3851: $0.4768: $0.5902: $0.7307: If TrueChain has 20% of Bitcoin's previous average growth per year $0.4592: $0.6777: $1.00: $1.48: If TrueChain has 50% of Bitcoin's previous
Mineable: No. Twitter: https ://twitter.com/ TrueChain (TRUE); TRUECOIN (TCOINT); TrueDeck (TDP) The full output and stack trace is available at https://gist.github.com/anonymous/ 5020860. I'm running Windows 7 (32-bit), Java 7, Gradle 1.4.
About TrueChain. The live TrueChain price today is . $0.204707 USD with a 24-hour trading volume of $32,439,391 USD. TrueChain is up 6.66% in the last 24 hours. The current CoinMarketCap ranking is #659, with a live market cap of $16,289,700 USD. It has a circulating supply of 79,575,543 TRUE coins and the max. supply is not available.
It has reportedly received investments from the likes of ZB capital, crypto capital, and UB.VC. The TrueChain focuses on building a free, open, safe, efficient and easy-to-use blockchain technology infrastructure, and building a blockchain economy operating system commercial infrastructure.It is the demand of the times and the dream of TrueChain to create a permissionless blockchain that will carry the future commercial decentralized Dec 22, 2020 · TrueChain (TRUE) is the world’s first public chain to achieve a fPoW + DPoS hybrid consensus, providing a high-performance and high-security public chain infrastructure for decentralized applications… TrueChain is committed to be the next generation of blockchain infrastructure. It is the world's first public chain adopting the fPow+ PBFT hybrid consensus that has a strong global open source developer community supporting it. THEKEY OFFICIAL. Read PINNED Message upon entering! THEKEY is a blockchain based identity verification technology (IDV) being developed to create secure digital identities for the future TrueChain [TRUE] is a cryptocurrency with its own blockchain.
truechain. verify github account ownership 0 0 0 0 Updated Mar 16, 2020. com.github.truechain » trueWeb3j Apache. TrueWeb3j Last Release on May 18, 2020 Indexed Repositories (1306) Central. Sonatype.
The platform has addressed the issue of decentralization and efficiency using a hybrid consensus combining PBFT and PoW. TrueChain is a truly fast, permissionless, secure and scalable PBFT-fPOW blockchain with global developer community. Website: Truechain.pro/en GitHub: GitHub.com Seele. Welcome to Seele's official Telegram group! Seele is a next-generation blockchain with which our dedicated team hopes to fundamentally transform the human experience. Get the TrueChain price live now - TRUE price is up by 11.16% today.
commit.id. https://github.com/KhronosGroup/OpenXR-SDK-Source/ “The Quest For The One True Chain” was written to embody the themes of governance that Dogecon 10)) ) chain = Chain(loop=True) chain.add_sequences( (my_first_sprite, sequence1), (my_second_sprite, sequence2) ) start_scene.animations.fire(None, Jul 2, 2019 TrueChain Partner and Investors Exchanges and Team; 15. The repositories tab within GitHub shows the improvements and innovations that Mar 15, 2019 different from the true chain. This is mitigated in Source: https://github.com/neo- project/docs/blob/master/en-us/node/consensus.md.krypto linux
ako prenesiem svoje mobilné dáta do svojho nového iphone
pruh pre ďateliny
éterové zlato wiki
9,9 miliárd dolárov v rupiách
koľko je hodín v číne teraz
prevod izraelských peňazí na americké doláre
- Zahájit podnikání v kryptoměně
- Těžba v new yorku
- Udělejte rotce kadetů nadřazené
- Jak obnovit coinbase peněženku
- Doom atmosfear
- Wallstreetbets reddit
- Stažení aplikace samsung market
奋斗的菲人★推广交流群★ ★★★★★★★★★★★★ 免费发布:t.me/bcfb888 曝光频道:t.me/pzbg888 招聘频道:t.me/hwjob365 鉴黄频道:t.me/zfjjh 担保频道:t.me/bddb365 招商频道:t.me/bcdlzs 群组导航:t.me/qzdh888 友情联盟: @coderzh
TPAY is the world's most secure coin. Join our community for breaking news, info, access and updates. Contents Background 01 TheRiseofGlobalDigitalAssetTransactions TrueChain’sConsensusandTechnology 02 MinervaHybridConsensus IncentiveModel 04 … TRUE_NETWORK_CONF corresponds to /etc/truechain/hosts, Default's under this project's config/ This file is populated with repetitive 5-6 lines containing loopback IP address 127.0.0.1. TRUE_SIMULATION is as follows: Nov 22, 2018 - python prototype for hybrid consensus. Contribute to truechain/py-trueconsensus development by creating an account on GitHub. TrueChain (TRUE) is a cryptocurrency and operates on the Binance Coin platform.
|
OPCFW_CODE
|
Tip: You may have one certificate of authenticity for Windows and another for Office. I doubt that there's actually a tool from Microsoft that can look at a key and identify what product it was for, was hoping that there was and someone here knew where to look for it. If you need help installing Office 2010, see and. The product keys they provide to students, teachers, and employees are known as volume license keys. If you find your computer listed, it means that the license is linked.
Normally you have several resources to find your Retail Product Key. Unfortunately Windows is stuck loading at the login page. No soliciting of any kind. Product keys supplied as part of your Visual Studio subscription do not allow unlimited activations of a product. If there is a reliable tool to do this I would be very grateful.
Select it from the View menu and you'll see the Product Key List appear in a browser window, as shown in Figure C. The keys that were claimed are both retail keys, and are displayed on the page. For example, the product key may have been mistyped or a product key for a different product might have been used. The underlying mechanics of Windows 8 are essentially the same as Windows 7 if you ignore the Metro User Interface. For example, you can open the file as a read-only workbook in Excel.
In support of this commitment, Microsoft has implemented daily key claim limits for Visual Studio subscriptions. If you're having trouble reading the characters in your product key, here's an example of what the letters and numbers look like: Tip: If you bought Office from an online retailer and received a confirmation email, try copying and pasting the product key from this email instead of typing it. I copied the config folder to a usb after booting into Ubuntu and ran Produkey on the folder in my current laptop. Individual product keys are found by selecting the blue Get Key link for a particular product on the page as shown below. Once ProduKey finds the product keys, it allows you to save them so that you can find them whenever you need to reinstall that software.
If you have any questions, please visit the. . They respond so fast to my purchase request. Other keys must be claimed by selecting the Get Key link for the product. Another very common situation is that you've got a bunch of computers, and you can't remember which product keys correspond to which computers.
I guess you cannot get any more legit than that. Try to research your issue before posting, don't be vague. Note that this product key won't match the product key shown in My Office Account. Please be as specific as possible. It reported my Office 2003 2007 2010 product keys correctly. I got all my files off the hard drive using Kaspersky recuse disk and plan on reinstalling Windows 10. DreamSpark Lab Keys are intended for use in university computer lab scenarios.
Notice that you can record a brief note about claimed keys in the Notes column. There is nothing unique about the disks, just the product key. However, it's important to remember that this process will change your account type from local to a Microsoft account. I've tried refreshing and resetting but it won't work. Starting with the , your product key is no longer only attached to your hardware — you can also link it to your Microsoft account. However, the characters shown uniquely identify your product key. I'll take a look there, but the problem is that I'm not confident that these keys were actually used and can be identified through Spiceworks.
It reported my Office 2003 2007 2010 product keys correctly. Like I said, probably a long shot, but hoping that someone could give me some direction so that I can get everything on track here. If you need additional keys, you can submit a request through Visual Studio Subscription and it will be considered for approval on a case-by-case basis. To begin with, from the View menu you can add gridlines and add shading to every other row, as shown in Figure B. You can use Enter to force newlines, which will be preserved. Select how you got Office from the options below. There are several reasons why you might get an error after entering a product key.
|
OPCFW_CODE
|
How to design CMOS bridge rectifier?
I designed the bridge rectifier circuit.
I wnat to make a full-wave rectification. so, I made a CMOS circuit.
I refered to other papers. Figure2.
"An ultra-low-voltage self-powered energy harvesting rectifier with digital switch control"
Link:
I think this circuit is full-wave rectification. But the result is a half-wave rectifier circuit. Is the result of this graph correct?
And my theme is Energy Harvesting. So input in mV units. I had input 0.3V but Simulations output is 1V. Why does this result?
Thank you for reading.
can you add the link in the comment. any one will add it to the question for you.
https://www.jstage.jst.go.jp/article/elex/12/3/12_12.20140921/_article
Have you tried a higher input voltage?
I tried 3V input. But still operation 1V input.
You can't ground the voltage source to the same ground as the output. You should remove the bottom ground and you will also need a load resistance.
Here is a schematic and simulation of a circuit that functions as a full wave bridge rectifier. This was done in LTSpice.
I removed 1 gnd. Also, I added 10Kohm load resistance. But it is still a half-wave rectifier circuit.
Please update the schematic and probe the other side of the AC source.
I updated the picture as you wish. And my theme is Energy Harvesting. So input in mV units.
You still have the unwanted ground in your schematic and you haven't probed both sides of the AC source. Your Virtuoso schematic does not match the simplified one.
I modified the circuit.
In my opinion, this circuit is full wave rectification. right?
Yes - it looks like it should work however the AC source MUST be floating. Neither of its terminals can connect to ground. I'm not convinced you have the Virtuoso schematic (or whatever you created the schematic in) has been modified correctly. The simplified circuit and the detailed one are NOT the same.
Thank you. I will look for other circuits and simulate them.
Your simplified circuit should work. I have added a circuit and simulation to my answer.
Thank you. Which paper did you refer to the circuit you uploaded? I also have to add Somoothing Circuit and DC-DC convter. I have tried various experiments and I will ask again.
I didn't refer to any paper - this is the same circuit as you have. I left off the MP3 and MN3 as they are not part of the rectification process.
|
STACK_EXCHANGE
|
Out of VRAM for --vram_0 with 128x128 resolution.
Description
Hi,
I tried running dreamfusion with higher resolution, as suggested in the README:
python main.py --text "a hamburger" --workspace trial -O --vram_O --w 300 --h 300
However, I get a CUDA out of memory error on my RTX 2080 Ti with 11GB of RAM.
OutOfMemoryError: CUDA out of memory. Tried to allocate 444.00 MiB (GPU 0; 10.76 GiB total capacity; 8.75 GiB already allocated; 431.00 MiB free; 9.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I found that this error happens even with:
python main.py --text "a hamburger" --workspace trial -O --vram_O --w 128 --h 128
I can only run it successfully with 64x64 images.
I tested both PyTorch 1.13.1 and PyTorch 2.0 with CUDA 11.7, but got the same problem.
I cloned the repo today (4/17) so perhaps a recent change broke this?
P.S. Thanks for the great repo!
Steps to Reproduce
python main.py --text "a hamburger" --workspace trial -O --vram_O --w 300 --h 300
Expected Behavior
python main.py --text "a hamburger" --workspace trial -O --vram_O --w 300 --h 300 should not run out of VRAM per README.
Environment
Ubuntu 20.04, PyTorch 1.13.1, CUDA 11.7
Ubuntu 20.04, PyTorch 2.0, CUDA 11.7
(I installed bash scripts/install_ext.sh)
Same thing happens in colab with Tesla T4:
main.py -O --vram_O --text 'a DSLR photo of a mug' --workspace trial4 --iters 5000 --lr 0.001 --w 300 --h 300 --seed 0 --lambda_entropy 0.0001 --ckpt latest --save_mesh --max_steps 512
CUDA out of memory. Tried to allocate 774.00 MiB (GPU 0; 14.75 GiB total capacity; 12.29 GiB
already allocated; 498.81 MiB free; 12.97 GiB reserved in total by PyTorch) If reserved memory is >> allocated
memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and
PYTORCH_CUDA_ALLOC_CONF
@ondrejbiza Hi, I have observed this too. I guess it's the removal of normal_net and using of finite difference normal that causes the increased memory usage.
I'm still looking for a better solution, but for now you can train NeRF at 64x64 and finetune DMTet at 512x512, which still gives high resolution results.
|
GITHUB_ARCHIVE
|
Cisco Mobility Express is a small to medium sized Wi-Fi solution which can be deployed in just under 20 minutes. In this episode, I talk about my what Cisco Mobility Express entails and how I configured a couple of Cisco 1800 series access points.
Other access points that can be controllers with Cisco Mobility Express include the 2800 and 3800 series access points. This is a special image and not the lightweight images we typically use with the larger controller based models. What’s so special with Cisco Mobility Express is there is a built-in controller. This AP can serve wireless clients and function as a controller to manage up to 25 access points and 500 clients.
Deploying a Cisco Mobility Express controller can be completed in under 20 minutes. After completing the boot up process, a new SSID, CiscoAirProvision, will be enabled. It can be joined using your desktop/laptop computer or with an app, CiscoWireless.
For testing purposes I used the app on my iPhone which was surprisingly simple.
It’s only 5 steps:
- Configure an admin account
- Setup the controller – System name, management IP address, etc.
- Configure wireless networks
- Set up RF Parameter Optimization
- Confirm and Reboot
Reminder: Configure your switch port properly! If you’re tagging multiple VLANs for your wireless networks, be sure to configure trunk ports to the access point.
A controller can function as one single controller but for redundancy, each Cisco Mobility Express AP (1800,2800,3800 series) can be redundant to each other. But if you want to statically configure a primary and secondary controller, you can do so using the CLI.
The election of a controller happens in one of three ways:
- User defined
- Least client load
- Lowest MAC address
All of your advanced troubleshooting will be done using the CLI as well.
Within the web interface, to manage the controller, you have the ability to modify the configuration such as radio policies for your SSID, VLAN tags for an SSID and advanced settings such as channels, channel widths, and transmit power.
Monitoring will yield statistics on access points and individual wireless clients.
You can view access point statistics such as:
- Channel utilization
- Configured data rates
- Current transmit power
Client statistics collected include:
- MAC address
- Current SSID connected to
- Signal strength
- Basic client capabilities
In addition to the statistics above, you can view the top applications used by each client and on the network.
To get to ap level from controller:
To get back to controller cli from ap cisco shell:
Troubleshooting AP join issues from controller:
debug capwap events enable
debug capwap detail enable
debug capwap errors enable
What you can configure via the AP:
Set static IP address:
capwap ap ip <ip-address> <subnet mask> <default-gateway>
Configure static controller IP:
capwap ap primary-base <controller-name> <ip-address>
Setup a primary and secondary AP for controller:
config ap priority 4 <ap> config ap priority 3 <ap>
Links and Resources
15 Wi-Fi Blogs To Read via Network Computing
Are there any other blogs missing from this list? One I can think of is http://www.mikealbano.com/
Interference sources on the Wi-Fi Network via Netscout
Cisco to dismiss up to 5500 employees or 7% of their workforce via Arstechnica
How To Deploy Cisco Mobility Express via Packet6
Troubleshoot AP Joining Issues via Packet6
Cisco Mobility Express Deployment Guide via Cisco
|
OPCFW_CODE
|
Reading analog signals from external sensors with MCP3008 is well known and wide-spread,. I was also working with this chip on my first experiments with photoresistors, but figured out, that 10 bit resolution would not suffice my needs.
So I went on for the next higher ADC model, the MCP3208, which provides 8 channels at 12 bit. Naturally, I was hoping for a "plug-and-play" upgrade, having just to replace the chip and start measuring in ~0.0122 V steps (5V reference).
Well, there were a couple of things in the way....
I took the Python code from and added code for a stepper motor control (more on that in another project).
The values looked garbled, and the reason is clear: the SPI interface messages must be adapted as discussed in , in depth in .
You may want to try more parameters, e.g. spi.xfer2([ 6 | (channel&4) >> 2, (channel&3)<<6, 0], 500000,1000). Check for their meaning.
adc = spi.xfer2([ 6 | (channel&4) >> 2, (channel&3)<<6, 0])
data = ((adc&15) << 8) + adc
After fixing this, it still did not show the expected results, values were erratic, what else needs to be done? Would I have to add an amplifier, as mentioned in and ?
Equation 4.1 in , shows how the output value derives from input voltage and reference voltage. There is discussion about using 4095 vs. 4096, as mentioned in . I use 4095, as this is the maximum output value read from the device:
value(out) = 4095 x V(sample)/V(reference)
While trying to establish the measurement setup, V(sample) is kept equal to V(reference), as I am using a simple voltage divider curcuit, with large resistance ~2MOhms to Ground vs. 5.1kOhms to channel 0 of the MCP3208.
"Basic SAR ADC Operation" describes how an analog voltage is transformed into a digital value by means of a capacitive array. For 500kHz, the default SPI clock speed, it takes ~3µs to sample 12 bit . So if power oscillates within this time the values will not be stable. From figure 4 we take, that we should not run at much higher clock speeds, because the higher the sample resistance is, the more time it takes to charge the capacitors. Giving it too little time for charging and reading to fast would yield bigger errors.
Attaching a power source unit at 5 V (or 3.3V, see below!) replacing RPi's GPIO 2 +5V out, yields much better results!
Monitoring RPi's GPIO 5V out with an oscilloscope revealed many jumps in voltage, mostly < 0.05 V. This would explain why driving the MCP3008 with RPi power suffices most needs.
Still, there were occasional single jumps of +- 5 counts, how about them? The countermeasure is a 0.1µF ceramic capacitor between VDD and Ground, also mentioned in .
After these modifications I read reproducible values, yielding smooth curves.
I used this in my Polarimeter project on instructables. The complete wiring diagram for this is available as a public project in easyeda .
If you are looking for higher resolution or the I2C protocol, checkout the ADS1115 page .
A reader from Mexico (Thanks, Pablo!) pointed out correctly, that in this setting D-out (Pin 12) connected to SPI-0 MISO (Pin 21, BCM 9) is driven with 5V, while GPIOs are specified for 3.3V only. The other connections are not affected as they are driven by the Pi. I never had an issue with my 2B, but to be safe, you can switch to 3.3V, use a level shifter or add a small resistor (330 Ohms) from D-out to Ground which reduces voltage to ~3.2V.
|
OPCFW_CODE
|
This work was supported in part by Award KUS-C1-016-04 made by King Abdullah University of Science and Technology (KAUST). Mikyoung Jun's research was also partially supported by NSF Grants DMS-0906532 and DMS-1208421. Istvan Szunyogh acknowledges the support from ONR Grant N000141210785.
Burgers, G., P. J. van Leeuwen, and G. Evensen, 1998: Analysis scheme in the ensemble Kalman filter. Mon. Wea. Rev., 126, 1719–1724.
Calvet, L. E., V. Czellar, and E. Ronchetti, cited 2012: Robust filtering. [Available online at http://ssrn.com/abstract=2123477.]
Daley, R., 1991: Atmospheric Data Analysis. Cambridge Atmospheric and Space Science Series, Cambridge University Press, 457 pp.
Fahrmeir, L., and H. Kaufmann, 1991: On Kalman filtering, posterior mode estimation and Fisher scoring in dynamic exponential family regression. Metrika, 38, 37–60.
Genton, M. G., 2003: Breakdown-point for spatially and temporally correlated observations. Developments in Robust Statistics, R. Dutter et al., Eds., Springer, 148–159.
Genton, M. G., and A. Lucas, 2003: Comprehensive definitions of breakdown-points for independent and dependent observations. J. Roy. Stat. Soc., B65, 81–94.
Hampel, F. R., 1968: Contributions to the theory of robust estimation. Ph.D. thesis, University of California.
Harlim, J., and B. R. Hunt, 2007: A non-Gaussian ensemble filter for assimilating infrequent noisy observations. Tellus, 59A, 225–237.
Houtekamer, P. L., and H. L. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev., 126, 796–811.
Huber, P. J., 1981: Robust Statistics. Wiley, 308 pp.
Ingleby, N. B., and A. C. Lorenc, 1993: Bayesian quality control using multivariate normal distributions. Quart. J. Roy. Meteor. Soc., 119, 1195–1225.
Lorenz, E. N., and K. A. Emanuel, 1998: Optimal sites for supplementary weather observations: Simulation with a small model. J. Atmos. Sci., 55, 399–414.
Luo, X., and I. Hoteit, 2011: Robust ensemble filtering and its relation to covariance inflation in the ensemble Kalman filter. Mon. Wea. Rev., 139, 3938–3953.
Maronna, A., R. D. Martin, and V. J. Yohai, 2006: Robust Statistics: Theory and Methods. Wiley, 436 pp.
Ruckdeschel, P., 2010: Optimally robust Kalman filtering. Berichte des Fraunhofer ITWM 185, 53 pp.
Schick, I. C., and S. K. Mitter, 1994: Robust recursive estimation in the presence of heavy-tailed observation noise. Ann. Stat., 22, 1045–1080.
Schlee, F. H., C. J. Standish, and N. F. Toda, 1967: Divergence in the Kalman filter. Amer. Inst. Aeronaut. Astronaut. J., 5, 1114–1120.
Szunyogh, I., E. J. Kostelich, G. Gyarmati, E. Kalnay, B. R. Hunt, E. Ott, E. Satterfield, and J. A. Yorke, 2008: A local ensemble transform Kalman filter data assimilation system for the NCEP global model. Tellus, 60, 113–130.
Tavolato, C., and L. Isaksen, 2010: Huber norm quality control in the IFS. ECMWF Newsletter, No. 122, ECMWF, Reading, United Kingdom, 27–31.
Tukey, J. W., 1970: Exploratory Data Analysis. Vol. 1. Addison-Wesley, 688 pp.
West, M., 1983: Generalized linear models: Scale parameters, outlier accommodation and prior distributions. Bayesian Stat., 2, 531–558.
Whitaker, J. S., and T. M. Hamill, 2002: Ensemble data assimilation without perturbed observations. Mon. Wea. Rev., 130, 1913–1924.
|
OPCFW_CODE
|
Beholders suck, they're stupid, this is dumb.
Look at this stat block, just look at it. There isn't a proper, easy to roll hit dice number, they give you range of 45-75. How the hell am I supposed to generate that?? 44+1d30??
And the armor class, this is just ridiculous, it has 3 different ACs. Judging by the description, you have to roll percentile dice to determine where you hit it before you can roll to attack just so you can figure out what armor class you're rolling against.
And how does it attack? Well it has a bite attack for 2d4 damage, and generally then you have to roll 1d4 to determine how many magical attacks it gets, through which it is implied you have to roll a d10 that many times to determine what different magical attacks it uses. Oh, and also it has an anti-magic ray, because screw you if you're a magic-user and you wanted to have a fun time.
Also I'm just realizing you have to keep INDIVIDUAL TRACK OF ALL 10 EYE STALKS' HP???? OF WHICH THEY HAVE 8-12 EACH?? EACH FLIMSY EYE STALK HAS MORE HEALTH THAN THE AVERAGE PERSON.
Okay, lets say you do get into combat with this thing and everything is set up and you can get past all that. Now, in order to find out what its magical attacks do you have to OPEN ANOTHER BOOK TO CHECK WHAT THE SPELLS DO!!!!!!
Oh, and what is the alignment of a creature for whom running combat is an absolute chaotic nightmare?
We can fix this.
ATK 1 bite + 1d6 magical attacks (automatically hit)
Magical Attack Table:
1. Hypnotize: Target must make a saving throw or obey a command made by the beholder
2. Laser Beam: Target must make a saving throw or take 3d6 damage
3. Paralyze: Target must make a saving throw or be unable to move for 1d6 rounds
4. Push: Target must make a saving throw or be flung 1d6x10 feet away, taking 1d6 damage in the process
5. Blind: Target must make a saving throw or be blinded for 1d6 rounds
6. Terrify: Target must make a saving throw or be affected with abject, supernatural terror, attempting to flee the battle and trying to kill anything that gets in their way. This lasts for 1d6 rounds.
Every time a beholder takes 6 damage or more, the beholder takes a -1 penalty to determine how many magical attacks they get to make, representing an eye being cut off.
There, no more stupid to-hit percentile rolls, no more page flipping, no more anti-magic ray, no more variable amount of eye rays which can hit given the angle which someone attacks it at, no more stupid eye stalk HP.
|
OPCFW_CODE
|
What is the use of a sharepoint crawl?
I know a crawl is used to update the index in order to do a search on SharePoint quickly. But what I do not understand why one needs a crawl in the first place!
Whenever a page is updated, added or changed, why isn't the index updated in that very instance? This would mean the index is up-to-date immediately, and you don't have to run a 'crawl' ever. Wouldn't that be much easier?
Maybe I am missing the big picture here, so any insights would be great.
You can enable that feature in from SharePoint Server 2013 and forward, and you get the functionality you ask for. However, this feature "continuous crawl" is resource intensive and most organisations chose not to use it. You may need to double the memory from 16GB to 32GB on all your application servers.
See Manage continuous crawls in SharePoint Server 2013 for more on the topic.
But you can't avoid crawling and index itself. They need to be there to get Search to work.
I understand indexing is essential to make sure you can do a search in the first place. But why a crawl? A crawl looks to me like a waste of time and resources...
@Alex What goes into the index, and what permission does every item have? Should it be in the index or not? And what about metadata, do we need everything or just some? That's the crawl components job, which you can configure. You need it, SharePoint need it and Google need to crawl to make it possible to search. There may be other ways, but then we're talking Computer Science research and not business applications. Interesting question though :)
@Alex But I'm generalizing too much. Actually you have six components in SSA: 1) Analytics processing component 2) Content processing component 3) Crawl component 4) Search administration component 5) Query processing component and the 6) Index component. Keep reading: https://technet.microsoft.com/en-us/library/jj862354.aspx and https://technet.microsoft.com/en-us/library/jj862355.aspx
Maybe you could provide a more general insight in searching, indexing and the like, sharepoint independent? I cannot understand why this has been so very complicated? Is the creation of a search index so complicated in general? Or just because of an 'organical growth' of how sharepoint handles things? Or, asked differently, could sharepoint have been created to function much easier and faster?
Index can be auto updated if we configure the continuous crawling option as others mentioned.
But it can hit the performance of the SharePoint, Because from crawling to indexing is not a simple task. Below are the steps which search performed to index a single file.
New document added
Crawl Check the File Type and Path.
Now check the Crawl rule, if file type is exclude from crawling or this path is exclude from crawling.
After passing this step,Now it check the piece of software / Ifilter to read the content of file and all associated properties
During this process, it will skip the words which dont want to index.
Crawler store the information about the item in the Crawl Database i.e last crawl time, the last crawl ID, and the type of update during the last crawl .
Now hand over information to Content Processing components.
Content Processing the Process these items and add into the Index. ( Content Processing parsing and extracting document properties and various other tasks such as linguistic processing, property mapping etc).
this is how one file added into the Index.
I think this will explain how its work and why it is resource intensive.
Apart from that just think, while crawling something happen to the document and crawler should go back and start from 0.
So keeping all these kind of situation in mind they design this in way which cause less impact.
SharePoint crawl is the tool that creates the index to be searched against. An index is simply "a set of items each of which specifies one of the records of a file and contains information about its address" (Google definition).
The content is all there as soon as you click "Save", but the crawl is the thing that looks at new (or modified) content, parses the metadata and keywords, and creates connections with other existing data.
The reason to do interval crawls instead of a continuous crawl is for efficiency and server-load reasons. As you mentioned in our conversation, the following scenario would not be an efficient use of resources:
A user makes a change to a page, then again and then again. A crawl at, say, every 30 minutes would find maybe the final form (and index the page once) while with my suggestion the page would have indexed after each iteration (so n times).
To add to that, if it were to occur in the middle of the day in a large company, the servers may already be under load from other user interaction, and to crawl each and every modification (possibly tens of thousands per minute in a large enough company) would place significant additional load on the servers.
HowStuffWorks has a great breakdown of how/why search engines work, including the purposes and functions of crawlers. The same principles would apply to SharePoint as well.
http://computer.howstuffworks.com/internet/basics/search-engine1.htm
Thanks for this link, but it explains web searches. Here you need those spiders to search through content, because you do not know when something is changing on a web-page in the internet. But sharepoint is one site! Its a collection of servers in one place practically, and I see that you can use the same approach as working with 'spiders' to create a search index. But why make it so complicated? Why not indexing a sharepoint-site the moment it is created or modified? As a sharepoint site is created and/or modified, sharepoint know - it must store this information somewhere in the database.
Why not indexing it right away? Avoiding a later process, it appears right away in the search etc. Why this cumbersome approach?
Benny mentioned above that a continuous crawl would perform like you're suggesting, with an instantaneous indexing of new content. However, it's resource intensive for the server to be constantly searching for new content to add to the index, so the alternative is the more common approach, to crawl at regular intervals.
Basically, something has to add the content to the index...that's the crawl's job. Whether you run it at an interval or continuously (providing instant search) is an administrative choice.
Why is it resource intensive? I do not understand. If you do not change any content, no indexing is done. It is only then done, if you CNANGE something on the webpage. Sharepoint must notice when someone adds a new document, or clicks the save button. Only then this particular piece of data/content is being indexed. Maybe there is still a misunderstanding? What I want to say: If a user 'clicks the save button', then and only then the new content is being indexed. No need to index all the time and use resources... I hope it is clearer now what I mean...
In your words I guess that something is the "click on the save-button"...
I'll admit the theories are a bit above my head still, but my understanding is the crawl is the tool that creates the index, whereas an index is "a set of items each of which specifies one of the records of a file and contains information about its address" (Google definition). The content is all there as soon as you click "Save", but the crawl is the thing that looks at new content, parses the metadata and keywords, and creates connections with other existing data. It's resource-intensive because it has to look at all of the information around (and sometimes in) the content (depending on...
your crawl settings. If you do a continuous crawl so content is immediately available, your servers are constantly examining each change immediately, looking at its metadata and contents, and trying to create relationships with other data. "Resource-intensive" does not necessarily mean performance-degrading if there are small amounts of information changed at any given time. In a large company though, tens of thousands of changes could be made any given minute, and the servers would have to handle that plus whatever else users are asking them to do.
Let us continue this discussion in chat.
|
STACK_EXCHANGE
|
Zephyr for JIRA Importer Utility
To know more about ZFJ Importer release, see the Release Notes
Latest Release 0.38 Download
This guide will cover how to import tests into Zephyr for JIRA (both Server/DataCenter and Cloud versions) using the Importer Utility; this includes:
- Downloading the Importer Utility
- Launching the Importer Utility
- Configuring fields necessary for importing
- JIRA results of an import
Java – Please make sure you have the latest version of Java installed. You can acquire it from http://java.com if you do not have it already. API Access & Secret Keys (only for Zephyr for JIRA Cloud) – you’ll need your Zephyr for JIRA API access and secret key in order to make a connection. You can find them by logging into your JIRA Cloud instance, browse to Tests (top menu bar) > Importer > API Keys.
Selecting "API Keys", the Zephyr API Key view will be displayed which contains an Access Key and Secret Key unique to your JIRA user. Also on this view are three options in the top right hand corner:
- Copy to clipboard which copies both Keys to your system clipboard
- Delete which removes the current generated Key pair
- Regenerate which regenerates the Key pair. Note that once the Keys are regenerated the previous Keys can no longer be used with the Importer Utility.
Use the Copy to Clipboard option to save this Key pair to your system clipboard. You will require it before starting the import process in the Importer Utility.
Note the following:
- The use of the Access and Secret Keys is required only for Zephyr for JIRA Cloud users. This option in the Test menu is not available in Zephyr for JIRA Server/DataCenter.
- For security purposes, do not share this Key pair. Each user that needs to use the Importer Utility should use the steps above to generate their unique Key pair.
- This Key pair does not expire so users can use their unique Key pair every time they use the Importer Utility.
- Copying to the clipboard copies both Access Key and Secret Key in below format. Paste it to a notepad. Copy each key individually and paste it in its appropriate box in the importer.
- “XXXXXXX” in Access Key (First box) and “YYYYYYY” in Secret Key (Second box) of the importer.
Downloading the Importer
The Zephyr for JIRA Importer Utility is available on BitBucket. The direct link for it is https://bitbucket.org/zfjdeveloper/zfj-importer/downloads. This location provides download of the Importer Utility software. Download the Utility software to the system from which imports to Zephyr for JIRA will be run.
Using the Importer
Launching the Importer
To launch the utility double click on the downloaded jar file or run through the command prompt as: java -jar <importer file> which will result in opening a window as shown below.
Note: By default, the Excel tab will be selected in the importer.
Importing from Excel
- Enter the URL for your JIRA Server or Cloud instance. If importing to Zephyr for JIRA Cloud, make sure the Cloud checkbox is checked. Enter your JIRA username and password. Zephyr for JIRA Server version users, proceed to Step 3.
- For Zephyr for JIRA Cloud users only: The "ZFJ URL" field is a read-only field and is pre-entered. Enter your API Access and Secret keys (view the "Requirements" section of this document to see how to access this Key pair).
Select the project and issue type desired (supported: Test, Bug, Improvement, Task, New Feature)
- Select a Discriminator and enter Starting Row value.
The Discriminator field is used by the Importer Utility to determine the end of one test case and the beginning of the next test case. It allows one of four options for selection:
- By Sheet: use this when a test case is listed per excel sheet
- By Empty Row: use this when an empty row separates consecutive test cases
- By ID change: use this when a unique test ID exists for each test case to be imported
- By Test Case Name change: use this when each test case has a unique test case name
Note: to properly determine the Starting row value, reference the excel file to be imported. The Starting row will be the first row in the excel spreadsheet containing a test case. For example, if the first two rows of the excel sheet are headers and the third row is where the first test case is listed, then the Starting row value should be set to 3.
- Map your Excel spreadsheet columns to your JIRA and Zephyr for JIRA fields. Use the column letters instead of any headers you might have to reference the data. For example: if test case name is in Column B of the excel sheet, enter "B" against the JIRA field “Name”
If your Excel file has multiple sheets and you wish to import them all, check Import All Sheets.
Select the Excel file to be imported by choosing either Pick Import File or Pick Import Folder.
Click Start Import. The results of the import will appear in the log window below. Any errors that occur will be displayed here.
|
OPCFW_CODE
|
Restarting a Node app running with pm2-runtime causes the container to disappear but app still runs
My NodeJs App was successfully deployed inside a Docker Container. I put this command in Dockerfile:
CMD ["pm2-runtime", "app.js"]
After restarting the app from the PM2 key metrics panel, the container disappeared (docker ps shows nothing) but strangely the app kept running. After a couple of minutes I figured out that pm2 was running it globally in the server. pm2 ls shows the app process.
Is this behaviour considered normal? Can this be prevented?
This is not the expected behvour, there might be the case the same application is running on host with same keys, so by issue restart command can restart both process, but pm2 dashboard able to recognise the process even if it running against same keys.
What I assume, That after the restart from the dashboard there were some errors that kill the container and you were not able to see the container when you run docker ps, as pm2-runtime retires 3 times if there is any error occurs.
One way to double-check is to verify, run docker ps -a and grab the container id and check docker logs stopeed_container_id, you will probely see somethign like
2019-1-02T20:53:03: PM2 log: 0 application online, retry = 3
2019-1-02T20:53:05: PM2 log: 0 application online, retry = 2
2019-1-02T20:53:07: PM2 log: 0 application online, retry = 1
2019-1-02T20:53:09: PM2 log: 0 application online, retry = 0
2019-1-02T20:53:09: PM2 log: Stopping app:www id:0
2019-1-02T20:53:09: PM2 error: app=www id=0 does not have a pid
2019-1-02T20:53:09: PM2 log: PM2 successfully stopped
So overide this behour to do not stopped container even if there is error, I will not recomend this approch but You can run with --no-auto-exit flag.
--no-auto-exit
do not exit if all processes are errored/stopped or 0 apps launched
pm2-runtime option
CMD pm2-runtime --no-auto-exit app.js
|
STACK_EXCHANGE
|
I don't know how to ask this question exactly but we are closing our company at the end of April. We have 14 Virtual Servers running on ESX 5.5 host. I need to get 3 of those machines (Domain Controller, Application Server & SQL Server) to a tower server with the ability to spin them up at any point in the future. This is required so that he can complete some remaining transactions from his home and connect to the domain. He does not have the space or desire to bring a 42U Rack to his house and really only needs the 3 VM's to finish out the remaining transactions and then do his taxes at the end of 2019.
Looking for suggestions on how to do this as simple as possible.
You could easily buy a laptop with adequate memory and cpu power and install ESXi on it and move the VMs to it.
Esxi has a free license limiting the cpu and ram on the vms and no backup api. It can be installed on most computers. Only issue will be network adapters. While not perfect can move to a newer esxi box. The latest versions of 6 and above have a web interface to make config easier. Would take some work to move over and would need Hd space but that’s one option
Another option: Hyper-V is in Windows 10 Pro. You could migrate the VMs to Hyper-V and then he could just spin them up on his Windows 10 PC. (new hardware or if his existing could handle or be upgraded?) Either way, make sure a backup process is also put in place. Would hate for a PC to die and take his VMs with it right before he needed them for tax purposes.
Here is the link to the free VMware vSphere hypervisor
open your 5.5 client
click configuration then storage
rightclick the datastore and choose browse datastore
from there you can upload/download files
where you can copy the files to another place (at very least you need the vmx and vmdk files, copy the whole folder is best)
you can then upload to a new host server and add a new virtual server and point to the vmx file and it is transferred across.
Welcome to the community!
Are they all on a single host right now? I would just bring to the house an existing host with the VMs. If you currently using a SAN or other external storage, migrate the VMs to internal storage on the server.
My thoughts exactly. Why buy more equipment when you have everything you need now?
Running ESXi on a laptop can be done but is difficult because the hardware doesn't comply with the HCL requirements. The better way is to backup the VMs and restore to Hyper-V VMs instead. I've done what you want to do using many different methods (including using vSphere), but the Hyper-V method has always been the easiest.
|
OPCFW_CODE
|
What does usage at a non-uniform instantiation mean?
I am unable to compile the following code:
open Genotype
open Genome
type IAgent =
abstract member CrossoverA: Genome<'T> -> unit
type internal AgentMessage<'T> =
| GetEnergy of AsyncReplyChannel<int>
| CrossoverMessage of Genome<'T>
| CompareMessage of Genome<'T>
type Agent<'T>(initialLifeEnergy : int, genotype : IGenotype<'T>) =
let LifeEnergy = initialLifeEnergy
let mailbox = new MailboxProcessor<AgentMessage<'T>>(fun inbox ->
let rec loop =
async {
let! (msg) = inbox.Receive()
printfn "Message received: %O" msg
match msg with
| GetEnergy reply ->
reply.Reply(LifeEnergy)
| CrossoverMessage genome->
printfn "crossover"
| CompareMessage fenome ->
printfn "compare"
}
loop )
do
mailbox.Start()
member this.CrossoverA(genomeIn: Genome<'T>) = (this :> IAgent).CrossoverA(genomeIn: Genome<'T>)
interface IAgent with
member this.CrossoverA(genomeIn: Genome<'T>) =
printfn "Crossover"
mailbox.Post(CrossoverMessage genomeIn)
There is an error in line member this.CrossoverA(genomeIn: Genome<'T>):
Error 1 The generic member 'CrossoverA' has been used at a non-uniform instantiation prior to this program point. Consider reordering the members so this member occurs first. Alternatively, specify the full type of the member explicitly, including argument types, return type and any additional generic parameters and constraints.
Error 2 One or more of the explicit class or function type variables for this binding could not be generalized, because they were constrained to other types
and also in line mailbox.Post(CrossoverMessage genomeIn):
Error 3 The type ''T' does not match the type ''a'
I an not using the variable ''a' anywhere in the project. Also, the name CrossoverA is used only in this file. I feel puzzled, other classes in the project were created with similar typing patterns and work well.
It likely can't infer the return type of CrossoverA due to the forward call to your interface implementation. Type inference can only use type information available prior to the current point. Ideally, the interface would forward calls to the class and not the other way around. That would fix the inference issue.
EDIT - Another issue seems to be the use of the type arg 'T in IAgent.CrossoverA, which isn't defined on the type.
Not at all. Without forwarding to interface Errors 1 and 3 are gone, but the Error 2 is still there. By the way, how to implement it the way that the interface would forward calls to the class?
Ah, you're right. I think the problem is that IAgent.CrossoverA uses a type arg that isn't defined on the type. Add the type arg to the interface definition (type IAgent<'T>) and again at the implementation point and it should work.
|
STACK_EXCHANGE
|
Why is it not allowed to recomend a specific product?
For example what is wrong with the question "which is the best Anti-Virus"? If the problem is there is no best then how can the question be rephrased so that the asker can make an informed decision as to which is best for him?
What's the problem you need fixing? Asking for a new product isn't a problem, it's the step before you have one
For example my computer freezes randomly and someone suggested to check the internal temperature and recommended SpeedFan. I red that SpeedFan is notoriously inaccurate and from the readings it gives me I think it is.
My comment here is not an answer for you (see my answer below). This a commentary on the question you ask, and I think the title of this question could be improved. How can I improve a shopping question?
You have self-answered your question:
There is no best.
The focus here lies on best. What does this adjective mean?
best /best/
Of the most excellent, effective, or desirable type or quality: "the best pitcher in the league".
— Google - define:best
But what is excellent, effective and desirable for me isn't necessarily excellent, effective and desirable for another person. For instance, I like Process Monitor to get rid of viruses as I don't like to have something waste resources in the background (or consume power, etc...).
Yet another person is very much likely going to a disagree with me, Process Monitor has such a steep learning curve because just disabling some entries can render your computer unbootable. And it might not be easy to spot the viruses in the list at first. Oh, and it's something you have to do manually, so you can't schedule it and forget about it.
So fine for me, so useless for another. My best is not his / her best...
The keyword best is extremely subjective; and if you want more detail, one of the Stack Exchange community managers has written a full blog post on this subject.
random has asked a good question as well:
What's the problem that you need fixing?
Let's look at your example question:
For example my computer freezes randomly and someone suggested to check the internal temperature and recommended SpeedFan. I red that SpeedFan is notoriously inaccurate and from the readings it gives me I think it is.
I ain't seeing a problem here. Why not? Because it shows no indication of research, which makes it more like homework than a problem. I would expect such question to be closed with a simple:
Which of at least 22 alternatives have you tried? Why didn't they work?
And only if you after that still haven't found what you were looking for, you have a problem for which you can actually explain what the problem is and why you can't get it solved. This is essentially in the FAQ.
Why ask a question if you can do a search instead and directly try some solutions?
The question is "How can I rephrase it?".
Do some homework
Include links or search results in your question that lead to directly conflicting answers.
Use quotes from the links a lot, because links go away sometimes.
Pose your real question: Which authority is more reliable (in this very specific aspect) here?
Even the commentary on a question that gets closed can be enlightening. Keep in mind that you probably won't be worrying about questions being closed, if you show that you did your homework.
|
STACK_EXCHANGE
|
Helm Fails silently when /.config isn't owned appropriately
Output of helm version: version.BuildInfo{Version:"v3.3.0", GitCommit:"8a4aeec08d67a7b84472007529e8097ec3742105", GitTreeState:"dirty", GoVersion:"go1.14.7"}
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-16T14:19:25Z", GoVersion:"go1.13.13", Compiler:"gc", Platform:"linux/amd64"}
Cloud Provider/Platform (AKS, GKE, Minikube etc.): ???
helm # produces no output
echo $? # produces 1
At least the rc status was 1.
It turns out the issue is that my ~/.config/ directory was owned by root:root rather than myself.
As a work around:
sudo chown -R $USER:$USER ~/.config/
helm # produces help
I'm going to mark this as a bug. If there is a permission issue the Helm client can provide a better error message.
@addyess You can use:
export HELM_DEBUG=1
It will show the error meessage.
try setting permissions on /home/chris/.config to 700 (only the owner can read/write)
this works
drwx------ 3 addyess addyess 4096 Aug 13 19:29 .config
this fails
drwx------ 3 root root 4096 Aug 13 19:29 .config
I also cann't repoduce this case
# apple @ liuming-dev in ~/.kube [16:52:14] C:1
$ ll
total 144
-rwx------ 1 daemon daemon 49K 9 2 15:09 config
# apple @ liuming-dev in ~/.kube [16:52:37]
$ ll /usr/local/bin/helm
lrwxr-xr-x 1 apple admin 29B 9 2 09:56 /usr/local/bin/helm -> ../Cellar/helm/3.3.1/bin/helm
# apple @ liuming-dev in ~/.kube [16:52:58]
$ helm list --kubeconfig ./config
Error: Kubernetes cluster unreachable: error loading config file "./config": open ./config: permission denied
I also cann't repoduce this case
# apple @ liuming-dev in ~/.kube [16:52:14] C:1
$ ll
total 144
-rwx------ 1 daemon daemon 49K 9 2 15:09 config
# apple @ liuming-dev in ~/.kube [16:52:37]
$ ll /usr/local/bin/helm
lrwxr-xr-x 1 apple admin 29B 9 2 09:56 /usr/local/bin/helm -> ../Cellar/helm/3.3.1/bin/helm
# apple @ liuming-dev in ~/.kube [16:52:58]
$ helm list --kubeconfig ./config
Error: Kubernetes cluster unreachable: error loading config file "./config": open ./config: permission denied
Your file is named config, it should be named .config
|
GITHUB_ARCHIVE
|
The best applications have the best validation and the least occurrences of errors. When dealing with the Internet we have all run into the situation where want to go to a URL or access a Web Service within code and assume that we have an Internet connection. But unbeknownst to us the router needs to be rebooted again and we are hung out to dry until the application times out.
In this example we will show you how to test for an Internet connection before you make that all important call to the Web Service so you can capture the fact that the Internet is down and display the appropriate message or branch off to another process.
Server Intellect assists companies of all sizes with their hosting needs by offering fully configured server solutions coupled with proactive server management services. Server Intellect specializes in providing complete internet-ready server solutions backed by their expert 24/365 proactive support team.
To implement this example we will need to add a reference to an external dynamic link library and use the functionality within that library to accomplish our task.
This namespace is required in order to access the external library and must be added to your project as one of your “includes”. Failure to add this namespace will result in an error being thrown while you are trying to build the application. Once you add this library you will be able to add any external library, not just the one that we are dealing with in this example.
The external library is added to your project via the following syntax:
static extern bool InternetGetConnectedState(ref StateOfConnection lpdwFlags, int dwReserved);
What we have said is that we want to access the functionality within the “wininet.dll” and declared that library for use. There are two parameters that are required to use this library. They are:
lpdwFlags - Pointer to a variable that receives the connection description. This parameter can be one or more of the Connection State values. Please refer to the section “Definition of Connection States” in Fig. 1 further down in this article.
dwReserved - Must be zero
This external method returns a bool value that denotes whether there is an Internet connection or not.
- TRUE if there is an Internet connection
- FALSE if there is not an Internet connection
In our example we will be passing an instance of the enumeration to the external library as a ref parameter as well as passing a zero as the second ref parameter. The zero is a hard coded value and is not derived from any process and is not meant to denote anything in your application other than a required value for the external library.
The following is how we declare the enumeration that describes the connection condition and is passed to the external library by reference.
If you're looking for a really good web host, try Server Intellect - we found the setup procedure and control panel, very easy to adapt to and their IT team is awesome!
public enum StateOfConnection
Modem = 0x1,
Lan = 0x2,
Proxy = 0x4,
Installed = 0x10,
OffLine = 0x20,
Configured = 0x40
In the above example we used the “Flags” attribute. What we are saying here is that you have an enumeration, that usually denotes an integer, but adding the “Flags” attribute to the enumeration declaration you are actually saying “Treat this enumeration as a bit field, that is, a set of flags.”
In the next article we will continue to build this example, adding the remaining lines of code and bringing everything mentioned in the article together so we can test our Internet connection.
We used over 10 web hosting companies before we found Server Intellect. Their dedicated servers and add-ons were setup swiftly, in less than 24 hours. We were able to confirm our order over the phone. They respond to our inquiries within an hour. Server Intellect's customer support and assistance are the best we've ever experienced.
What have we learned?
- That CSharp has the ability to import external libraries easily
- That we can test an Internet Connection before we try to access it
- How to use an enumeration
- That an enumeration can be treated as a bit field
|
OPCFW_CODE
|
Intro to Web Scraping by Dan Nguyen
Meet Your Web Inspector by Dan Nguyen
How to Import Web Data into Google Docs by Amit Agarwal
Amit Agarwal holds an Engineering degree in Computer Science from I.I.T. and has previously worked at ADP Inc. for clients like Goldman Sachs and Merrill Lynch. In 2004, Amit quit his job to become India’s first and only Professional Blogger. Amit authors the hugely popular and award-winning Digital Inspiration blog where he writes how-to guides around consumer software and mobile apps. He has developed several popular web apps and Google Add-ons including Mail Merge for Gmail. You can read his interview on Lifehacker and YourStory.
HTML ELEMENT REFERENCE –
Samantha Sunne and Matt Wynne:
Getting data from the web: From the quick grab to the intricate scrape
Samantha Sunne and Matt Wynn IRE 2015 Philadelphia “Scraping” is a term for catching, collecting or coaxing data off the web. If, for example, you have a list of campaign donors, but you want it in a spreadsheet, you’re going to want to scrape it.
WEB TRICKS AND SECRETS These techniques may or may not qualify as “scraping,” but they do let you to gather data using just a web browser in a way you wouldn’t usually use it.
● Use a web inspector or page source to take a look at the data or sources that your web page is accessing. Try Samantha Sunne’s tutorial or Dan Nguyen’s guide to the web inspector.
● Some page source code or database results come in a format called JSON. It’s not very pretty to look at, but you can: ○ Convert it to a csv (a type of spreadsheet) http://sunlightfoundation.com/blog/2014/03/11/makingjsonassimpleasaspre adsheet/ ○ make it look nicer: http://codebeautify.org/jsonviewer
● Obtain metadata (data that describes other data, such as a timestamp on a photo) on things like websites, photos and webpages POINTANDCLICK TOOLS These are some scraping tools that may make your life easier. Or they might make your life harder. Try to figure out which it is, early on, so you don’t waste too much time. Extremely simple tools like Google Scraper are either going to work or they’re just not.
● Scraper ○ A very simple Google Chrome extension that scrapes text and tables that you select. Almost equivalent to copyandpaste. ● DownThemAll ○ A Firefox addon that downloads all the links, images or text you select on a webpage in one fell swoop.
● Kimono ○ A website and browser extension that uses the same pointandclick technology to create APIs (really, just scrapers that will run automatically whenever you want them to).
● import.io ○ A free app that will scrape web pages and sites manually or automatically. Has a slightly larger learning curve.
● Outwit hub ○ A similar app that identifies all the assets (text, images, etc.) in a web page or site and enables you to download them all.
● Samantha’s other recommendations: https://delicious.com/samanthasunne/scraping
● Scott Klein and Michelle Minkoff’s recommendations: datagrab
● If you’re looking for something specific, try searching Github. Lots of people have made scrapers they just put up for public use. For example, “scrape linkedin”
SCRAPING WITH SCRIPTS
Ok, so there are lots of ways you can scrape data off the web without ever having to crack open a terminal. Why would you ever want to level up?
● Services come and go, but your code will stay the same.
● The web may evolve to leave some services in the dirt. But learn to code, and you’ll have a toolbox that can always get the job done (even if you do need to pick up new tools from time to time).
● Programming in general isn’t going to get less useful. Scraping is a fantastic way to back into learning a skill that will make you better at what you do. SO…. HOW? Any language can scrape from websites. PHP, Perl and Ruby, to name a few, all have tons of supported libraries and great documentation. I’m a Python guy, though, so we’ll stick to some useful libraries in that language.
● BeautifulSoup. Beautiful Soup understands websites and other formats (XML, specifically) for what they are. Rather than seeing the source of a page as a big blob of text, BeautifulSoup understands that tags represent different types of elements, that styles and classes are used to represent different levels of information, and so on.
● Mechanize. Mechanize can automate much of the interaction with a website we take for granted as we’re clicking around in our daily lives. It can fill out forms, cli ck links, navigate to urls and the like.
● Requests. Requests is a simple, modern library that basically accesses sites.
● Selenium. More and more often, sites are working outside of HTML, which really throws a monkeywrench into everything. Selenium is a library that literally cracks open a browser and cranks through commands. The learning curve is a little steeper, but it’s a good trick to know about when the time comes.
|
OPCFW_CODE
|
About GPU Programming
Over the last few decades, General Purpose GPU (GPGPU) computing has moved from fad to necessity for modern compute intensive applications. GPUs offer a considerably higher theoretical peak performance than CPUs that can potentially reduce the time to solution and the energy footprint of applications.. Historically, to implement algorithms on GPUs, programmers had to reframe their algorithms in terms of graphics operations which was time consuming and error-prone. The continued interest and success of GPGPU computing led to the introduction of new languages and specifications, like CUDA, HIP, and OpenCL in addition to directive based programming models, like OpenACC, and a zoo of GPU accelerated libraries. Currently, the GPGPU ecosystem is rapidly evolving as GPU acceleration brings value to all scientific domains.
GPU Acceleration Challenges
Distinct Memory Space
Modern GPU accelerated platforms employ one or more dedicated GPUs on each compute node. A dedicated GPU has its own memory space, distinct from the host system memory. Data and instructions are shared with the GPU across a bandwidth limited PCI bus. Data transactions between the CPU and GPU most often result in overall application slow-downs during the early porting stages. Developer teams must plan to minimize or hide these data transactions to achieve optimal performance
With any programming language, API, or library, GPU software developers must decide how they will handle GPU and CPU memory spaces.
In a broad sense, GPUs are distinct from CPUs in two ways
GPUs have lower clock speeds than CPUs
GPUs can process 1000's of threads of execution per clock cycle, while CPUs process 10's of threads per clock cycle
GPU architecture offers significantly higher performance peaks over CPUs. However, only some applications can out-perform their CPU implementations after porting to GPUs. Performance gains over existing CPU code depend on the amount of parallelism and the compute intensity of an application's algorithms and how well the developer exposes the parallelism on the target GPU hardware.
Profiling and Debugging
Traditional HPC profilers and debuggers are unable to report details of kernels running on GPUs. Profiling and debugging GPU accelerated applications requires experience with tools such as nvprof and cuda-gdb. Score-P, Vampir, and commercial tools, like ARM-Forge (formerly Allinea-Forge), are a necessity when developing multi-GPU applications.
Planning to accelerate your HPC application with GPUs
When considering a transition to GPUs for your application, it is necessary to be aware of and familiar with characteristics of the GPGPU ecosystem :
Basics of GPU accelerated platforms and the types of applications that perform well on GPUs
Programming language (Fortran, C/C++, and Python) and compiler support for GPU programming languages, specifications, and APIs
Profiler and debugger support for GPU and multi-GPU applications
GPU API difficulty, maintainability, code readability, and performance trade-offs
Community engagement and activities
GPU hardware diversity
Cloud provider support and pricing
On-premise infrastructure and maintenance costs and considerations
Awareness of the the GPGPU ecosystem can help you make informed decisions when planning to accelerate your applications with GPUs.
GPU Acceleration Support
Fluid Numerics offers services ranging from training and education to hands-on development alongside your teams! Training and education is delivered through one-on-one training, team training, or larger scale GPU hackathons.
GPU Programming Curriculum
Fluid Numerics is developing GPU programming curriculum that we can offer at no-cost. We believe in empowering teams to learn new skills in an open and inclusive environment. Simply sign up for access!
|
OPCFW_CODE
|
- If your membership includes it, there are exercise files that you can download to follow along with this course. Here in my system, I've already downloaded the exercise files.zip to my desktop. And then unzipped it to this folder right here, called exercise files. If I open that up there are two sub-folders inside of the exercise files folder itself. The first one is called color for video editors.dra. What's a D-R-A? Well DRA is a DaVinci Resolve archive. Throughout this course we'll be using DaVinci Resolve as the main tool I use to show practical examples.
Why DaVinici Resolve? Well first, DaVinci Resolve is pretty much the most popular color correction tool in the world. But over the past few years, it's also become a robust editorial, audio, and compositing tool. And best of all, it's free. It's a free download from the Black Magic website. Now if you're using another NLE, such as Adobe Premier Pro or Apple's Phonica Pro. Don't worry, in this XML folder I provided two XMLs of the main timelines that we'll be using in this title.
You can import these XMLs into your own NLE and then re-link to the media that's used in this course. Where's the media? Well that's really simple. If I step back a level and go into the DRA folder. There's a folder called media files right here and this folder contains all of the media used in the practical sections of this course. If you are using DaVinci Resolve though, getting this DRA imported is very simple. I'm simply gonna switch over to DaVinci Resolve and here in my project manager, I'm gonna right click in the gray area and choose restore I don't wanna choose import, but rather restore is the proper command for restoring an archive.
So I'll click restore. I'll navigate to my desktop, exercise files, and simply select color for video editors.dra and choose open. In just a moment the archive will be restored into my current database. Don't worry about this media offline thumbnail, if you double-click to go into the project, all of the media should be reconnected. And throughout the practical chapters where we're in DaVinci Resolve in this title. I'll direct you to the proper clip and the proper timeline. Just remember that all the media used in the course are for educational purposes only.
Please don't distribute or use the clips for commercial purposes. Now if you're new to DaVinci Resolve, don't worry. The online training library contains many titles that can quickly get you up to speed with this powerful tool.
- How people see
- Creatively and technically evaluating a project
- Interpreting your client's direction for a project
- Estimating how long a project will take
- Six stages that happen in a color correction workflow
- Timeline level grading
- Building a correction and look toolkit
|
OPCFW_CODE
|
Improved New Project and Import project experience
Hopefully this will be a better and simpler users.
edit: i made some improvements
now you dont have the set default folder in the dialog, maybe it would be better to add a button to open the editor settings from the manager.
you can create a project folder right from the dialog
also the non error messages are hidden by default, the warnings can be viewed by pressing the status icon.
if you cancel the creation, or change the path the manager will get rid of the created folder, so you can create a new one.
and this is how the errors look.
Minor nitpick but I think the messages should say "will be created in" instead of "will be created on".
I like the changes! I haven't looked at the commit itself in depth though.
I like the changes too, but IMO those messages are a little bit too intrusive.
For me the ideal solution would be to have an icon next to the path field or the import button, and the message is displayed when hovering over the icon. We could have two icons, one for warning message and one for errors. This would be like the Eclipse IDE does.
Maybe there just could be. One checlbox which allows to choose if godot ahould create a dedicated folder...
This would make it as obvious without forcing the user to read everything.
[checkbox] Create folder for project.
Checking this box updates the path, so it is obvious, that the project is now created inside the subfolder.
Path before:
Users/name/Documents/godotProjects/
Path when the box is checked:
Users/name/Documents/godotProjects/New Game Project
Maybe we could wven display the .project (or however it is called atm.
Path before:
Users/name/Documents/godotProjects/New Game Project.project
Path when the box is checked:
Users/name/Documents/godotProjects/New Game Project/NewGameProjwct.project
I think that would make it as obvious than the messages + brings the advantage that ppl who dont know how godot behaves can choose a folder and can adapt there the folder dependent on what they thought godot would do.
If they created a subfolder -> they deactivate create folder for project if they didnt vice versa...
Why would it by default not create a directory for a new project?
@Sslaxx whatever you are used to and which software you use.
But you could be right, that creating a folder for the project is the better standard.
(but there definitlt should be the option... I remeber when i was using windows that each installer did it differently and i always ended up with some rpograms on the top lvl and some programs one lvl too deep ;) )
Maybe adding a button next to the Project name to create a folder, also im not that sure about automatic folder creation, i dont know the filesystem restrictions with the characters and since the project name can be anything, im not sure about this (of course we could sanitize the name but it might be confusing).
also icons instead of message would be good too, but i dont know if it is better to be explicit (specially for newcomers)
@cryptonaut yeah will change that
What's the state here? Do you still want to make changes?
@akien-mga yeah i will made some changes
Ping.
@akien-mga still planning to work on this, however the rename PR introduced some conflicts, and the code that was already messy got worse, im thinking about rewriting that dialog.
@akien-mga i updated this, i think it can be merged now.
Not sure what happened between this commit and current release, but this behavior is not present in v3.1.1.stable.official. It continues to demand an empty but present directory.
Not sure what happened between this commit and current release, but this behavior is not present in v3.1.1.stable.official. It continues to demand an empty but present directory.
See https://github.com/godotengine/godot/pull/15835 (disallow in 3.x) and https://github.com/godotengine/godot/pull/42526 (allow in 4.x). If you want this feature in 3.x, the latter PR must be backported or cherry-picked.
|
GITHUB_ARCHIVE
|
RareBERT: Transformer Architecture for Rare Disease Patient Identification using Administrative Claims
Keywords:Healthcare, Medicine & Wellness, (Deep) Neural Network Algorithms
AbstractA rare disease is any disease that affects a very small percentage (1 in 1,500) of population. It is estimated that there are nearly 7,000 rare disease affecting 30 million patients in the U. S. alone. Most of the patients suffering from rare diseases experience multiple misdiagnoses and may never be diagnosed correctly. This is largely driven by the low prevalence of the disease that results in a lack of awareness among healthcare providers. There have been efforts from machine learning researchers to develop predictive models to help diagnose patients using healthcare datasets such as electronic health records and administrative claims. Most recently, transformer models have been applied to predict diseases BEHRT, G-BERT and Med-BERT. However, these have been developed specifically for electronic health records (EHR) and have not been designed to address rare disease challenges such as class imbalance, partial longitudinal data capture, and noisy labels. As a result, they deliver poor performance in predicting rare diseases compared with baselines. Besides, EHR datasets are generally confined to the hospital systems using them and do not capture a wider sample of patients thus limiting the availability of sufficient rare dis-ease patients in the dataset. To address these challenges, we introduced an extension of the BERT model tailored for rare disease diagnosis called RareBERT which has been trained on administrative claims datasets. RareBERT extends Med-BERT by including context embedding and temporal reference embedding. Moreover, we introduced a novel adaptive loss function to handle the class imbal-ance. In this paper, we show our experiments on diagnosing X-Linked Hypophosphatemia (XLH), a genetic rare disease. While RareBERT performs significantly better than the baseline models (79.9% AUPRC versus 30% AUPRC for Med-BERT), owing to the transformer architecture, it also shows its robustness in partial longitudinal data capture caused by poor capture of claims with a drop in performance of only 1.35% AUPRC, compared with 12% for Med-BERT and 33.0% for LSTM and 67.4% for boosting trees based baseline.
How to Cite
Prakash, P., Chilukuri, S., Ranade, N., & Viswanathan, S. (2021). RareBERT: Transformer Architecture for Rare Disease Patient Identification using Administrative Claims. Proceedings of the AAAI Conference on Artificial Intelligence, 35(1), 453-460. https://doi.org/10.1609/aaai.v35i1.16122
AAAI Technical Track on Application Domains
|
OPCFW_CODE
|
|From:||Lars Jahr Røine|
|Subject:||Fwd: Implementation and Benchmarking of plotting function|
|Date:||Mon, 27 Feb 2012 18:20:52 +0100|
Hello,I asked a question about this on #Octave and wad advised to post my question here.
I've implemented a plotting function for plotting unstructured grids as an addition to a open source matlab toolbox which is currently being ported to Octave. My code is in C++ using OpenGL. I've now managed (I think) to move the functionality into the Octave C++ source code, but I run into problems when I try to benchmark.
What I've done so far is to implement a class Grid in grid.hpp and included this in gl-render.h. I'm also using a simple fps counter which I have included (FpsCounter.hpp). I have then made a new function in the opengl_renderer class called draw_unstructured_grid(const grid::properties &props), where grid::properties is a class I've implemented in the file graphics.h.in. In addition I've made a script file in /octave/scripts/plot/ called plot_grid.m which checks the input before sending it to the C++ code. I have made some modifications in the makefiles in /octave and in /octave/src (Which probably should be done in a different way, since I've now come to understand that the Makefiles shouldn't be modified directly). Here I have added -lGLEW a couple of places.
At first I had both a grid.hpp and a grid.cpp file, but then I got compile errors because the compiler couldn't find the files that where declared in grid.hpp but implemented in grid.cpp. I now assume this is because I didn't do anything to link my own code into octave. I noticed however that I was able to call functions that where fully implemented in the header file, grid.hpp. I therefore moved the entire implementation into grid.hpp and got my code to work. I am positive it works since I'm able to call my new plot_grid function from Octave, and I get a correct plot up and running. Debug print that I have in the methods in the grid class are also printed when running the Octave command. The fps-counter prints out results (incorrectly, but still it prints). How is this possible when I haven't linked my own code with the rest of the Octave code? (I guess this is more of a general C++ question, but still I hope I could get some help). And more importantly, how should I attack the problem of using both header and source files in Octave. Where do I make modifications so that my code is linked into the Octave code?
I originally thought I had gotten around the linkage problem by implementing all the code in the header-files until I tried to do some benchmarks. My idea was to make a large/infinite loop inside the draw function in /octave/src/DLD-FUNCTIONS/__init_fltk__.cc and then count the frames drawn inside the function I have implemented in the opengl_renderer class (draw_unsctructured_grid). When I did this the plots did not however appear (only the fltk window) until after the loop had finished. I got a tip from #Octave to call __fltk_redraw__() each time in the loop. To do this I had to forward declare __fltk_redraw__ because it is found in a later stage in the source file than the draw function. Octave compiles just fine, but I get an error when I call graphics_toolkit("fltk"):
error: feval: /octave/src/DLD-FUNCTIONS/__init_fltk__.oct: failed to load: /octave/src/DLD-FUNCTIONS/__init_fltk__.oct: undefined symbol: _ZN11OpenGL_fltk15__fltk_redraw__Ev
error: called from:error: /home/lars/master/master_repo/larsjr-master/octave-dev/octave/scripts/plot/graphics_toolkit.m at line 56, column 5
I would appreciate all advice on how to solve these problems, and explanations on what I'm doing wrong :) It would be very interesting to learn how to change Octave's build system in a correct way.
|[Prev in Thread]||Current Thread||[Next in Thread]|
|
OPCFW_CODE
|
M: IPhone App Developers Gripe About Payment Delays and Dismal Customer Service - coglethorpe
http://www.techcrunch.com/2009/03/24/iphone-app-developers-gripe-about-payment-delays-and-dismal-customer-service/
R: electromagnetic
Why is Apple dealing with developers with customer service, they should deal
with their developers like you would any business partner.
Edit: There's people who've made millions through their Apps, and likely made
Apple millions in the process too, so are these guys treated poorly too?
I always thought Apple had a good reputation for customer service, at least
they always have been when I've had to contact them. But that could just be
because, unlike most companies, I actually get to talk to someone in the same
country as me.
R: pz
its about time this started getting some attention. i am currently selling
just enough to get by (living in SF, with a taste for good whiskey) and its a
pain not knowing when the money will come in. it took them over 2 months to
deliver my december US earnings. and when i contacted them i just got back
excerpts from some template. since its in violation of their contract, i
wonder if its grounds for a class action suit? seems a bit aggressive, but
they don't seem too inclined to correct things
R: patio11
Maybe you need to send them to collections.
Stop laughing. You're in business now. Businesses issue invoices, businesses
pay invoices, if businesses don't pay invoices they get sent about one letter
as a polite reminder and then things get escalated from there.
If they fob your collections office off with a form letter then you sue them,
get a judgment, and send over the sheriff's department to enforce it.
R: tomjen
The problem is that Apple can behave as they want since they can prevent him
from ever making a sale again if they want.
R: sounddust
Welcome to running a business, it sucks sometimes. It's often hard to get paid
when you're not a W2 employee. The worst stories here are still better than
dealing with most affiliate/ad agencies, who constantly try to shift their
terms to net-60, net-90, net-120.. and pay late anyway. It seems like this is
the default strategy of all B2B commerce.
The good news is that unlike most people who owe businesses money, Apple is
too big to get away with sticking it to their developers for too long, and
things will probably improve soon. If not, there's always the class-action
lawsuit option..
R: briansmith
Wow, that DocStoc embed actually looks decent and improved the page. Usually I
hate that kind of thing.
R: TweedHeads
Techcrunch attacking apple? why i am not surprised?
|
HACKER_NEWS
|
How to freeze header and first column of a Table data grid listing with unlimited rows and columns?
I have an asp.NET grid listing with unlimited columns and rows. This is showing as a result of search (some kind of work history data). Depending on the search criteria, the no. of columns and no. of rows will increase.
I need to fix/freeze first row (header portion) and 3 columns on the left (that 3 columns need to show all the time and rest of the contents can scroll).
In the code page this much content is visible:
<div style="height:500px; overflow:auto">
<asp:GridView ID="someid" runat="server">
</asp:GridView>
</div>
The header columns are dynamically coming and 'n' no. of heading will come (like April 2016, May 1026 and so on..), so cannot apply 'id' for each heading. also the same for the first 3 left columns. Any solution for this?
possible duplicate of Scrollable HTML Table with fixed header and fixed column
I've 'n' no. of rows and columns, that is the issue, anyway thanks for the direction.
Take a look at this gridviewscroll pluggin, here you find a jquery pluggin to accomplish what you want
and after you add the proper css and js files the code you should use is this:
$(document).ready(function () {
gridviewScroll();
});
function gridviewScroll() {
$('#<%=someid.ClientID%>').gridviewScroll({
width: 660, //change this two values by
height: 200, //your real width and height
freezesize: 3
});
}
I will try for this one. thanks for the link, it showing absolutely what i want.
@ Enrique Zavaleta: Here is a problem, the header columns are dynamically coming and 'n' no. of heading will come, so we cannot apply ID for that. also the same for the first 3 left columns. Any solution for this?
I don't know why we cannot apply the same ID depending on the num of columns. The GridView's ID is the same no matter how many columns it has. And to freeze the first 3 columns yo don't need their ID, just with freezesize: 3 is fine and it must work
@ Enrique , this is the issue <Columns> <asp:BoundField HeaderText="ProductID" DataField="ProductID" /> </Columns>
these are Static, but in my case all are dynamic.
You could edit your question with the issue, with more details of what is your problem, I mean, writing why is not working, what is the code you have tried and what is what you are seeing
|
STACK_EXCHANGE
|
The Music of Machines
links<! -- ========================== GROUP PEOPLE ========================== -> <! -- ========================== GROUP PAGES/TABS ========================== ->
The Music of Machines - overview<! -- ========================== PAGE CONTENT ========================== ->
Janani Mukundan: Teaching a machine to compose music
Inside IBM Research | Where great science and social innovation meet
Listen to The Music of Machines on Soundcloud
IBM Watson, the cognitive computer that defeated the best Jeopardy! champions in the world, has moved on. Since 2011, Watson has revolutionized the way human beings can see patterns in “unstructured” data — research papers, blogs, reports, photos — and influence decisions in healthcare, finance, retail, travel and services.
In this episode of Inside IBM Research, Janani Mukundan (pictured), a machine learning researcher at the IBM Austin Lab, talks about how a stack of Restricted Boltzmann Machines is now building off Watson to “learn” how to compose music.
The cool thing about using music — using machine learning to learn music — is first thing is, it can be expressed mathematically. So, you can tell what pitch is being played, and what is the time signature, what is the key signature, etc. using the input files.
When we started out, the rules of music, we thought it was simpler than actual natural language, and the semantic and grammatical rules were simpler. So we thought it would be good to learn all of these things.
So, what we’re trying to learn here is, I guess pitch variations and rhythm variations in the song, and the input that we use basically is a MIDI file, which already has all this information. You have to give it a digital representation. You can’t just give it an mp3. You have to give it a MIDI.
So, we’re trying to extract features of the song and express it in a different way.
A Restricted Boltzmann Machine is basically a stochastic neural network. It has a layer of visible units or neurons. And it also has another layer of hidden units. And the idea behind a Restricted Boltzmann Machine, especially for our project is, we provide the information on the pitch, the note that’s being played at any given point, and the visible layer captures this information, and we train the model in a way that the hidden layer tries to capture or extract features of the visible layer by, you know, updating the weights and so on. And the hope is that once the hidden layer of neurons captures the essence of this visible layer, which is basically the music, we should be able to recreate this input with just the extracted features and the weights.
So, in an ideal world, what would happen is, if we don’t perturb or disturb the model, we should be able to exactly reconstruct the input that was provided to it. But now we don’t want to do that. We want to be able to create new music.
So, in order to create new music, we bias the model, or we perturb the model. And we add creativity genes, or neurons, to this model, so that, when you try to extract the features, you’re not only extracting the essence of the actual song, but you’re also adding subtle nuances that were not existing in the original song. So, when you recreate the music, you get a song that is familiar, but is also yet different from what was given to the model.
You can actually mix multiple songs to come up with one song. So, mixing “Mary Had a Little Lamb” and “Oh Susannah” to come up with a new version of the song that sounds similar to both songs basically.
A neural network basically consists of a visible layer of neurons. And when I say “visible layer,” it means that the input is actually provided to those neurons. What is an input for us? An input is basically — given a particular instance of time, what pitch is being played?
So, I take an input song and I divide it into one-eighth of notes, or one-quarter of notes or half notes and so on. And then every neuron represents a half note or a quarter note. And then the value of that neuron is basically the note of the pitch that is being played.
And this is the most simplest [sic] song. We can play around with other things. Like, we can add dynamics to the neuron. We can say that, “I want this neuron to be a quarter note as opposed to a half note. Those things are, like, additions that you can do it.
But the basic form is having a neuron and it represents a note or a pitch that is being played at any given point in time.
So that’s what basically those two layers mean.
And the algorithm that we use so that the hidden layer captures, or extracts, the features of the input is called contrastive divergence. It’s just an algorithm that’s been used for a really long time.
So, when I give an input to my model, this model has no knowledge of what good music sounds like. So, it’s completely unsupervised. It doesn’t know what sounds good, what sounds bad. So, you just give it a piece of music, and then it comes out with a new piece of music.
Now, we as human beings already know what sounds good and what sounds bad. We know certain notes go well with certain other notes. We know certain key signatures are better when played differently, and so on.
So, all of this information is provided to the WolframTones model. So, it’s kind of supervised, as in, you already know what sounds good and you want to extract something else from what sounds good to come up with new music. That’s pretty much what WolframTones does.
So, what I noticed from my training experiences: The harder the song, the better it learns. Classical music, it was able to learn really well. because classical music is really hard. Some pieces of classical music are easy, but most pieces are really hard. And it was able to learn it better. It was able to add more subtlety and more nuances to it to make it sound different.
When the music is really simple, like, you know, pop music, it was much simpler [sic] when compared to classical music. The output wasn’t very creative, in my opinion, because the input that was given to it was already not so complex. So the output that came out of it was not complex as well. So, that was one drawback. It couldn’t come up with something very creative when you gave it a simpler song as opposed to a more difficult piece.
Like, I trained De Angelus Gloria, which took about, I think, fifteen minutes on my laptop. I trained one of Adele’s songs, and that took about, like, five minutes to train because it was much simpler to train.
I actually tried to learn Dire Straits, one of my favorite bands. It was really hard to learn psychedelic rock, or any other kind of rock music because it didn’t have grammatical rules like classical music, or even like pop, which is very simple. So that was really hard to learn, and I’m still trying to learn it.
How can we make use of this? The possibilities are endless.
You can think of a cognitive music composer Pandora station. You’re tired of listening to all sorts of songs that you already listened to before. You let the computer create your own music for you.
And I can think of composers using it. You know, they want to tweak certain aspects of their song, make it sound different. They can just plug this piece of information into the model. It comes up with an endless number of alternatives that it can use.
I can also think of a cognitive cloud service offering. You have cloud and mobile platforms that you can have streaming, composition, licensing, etc. for music. All sorts of things can be done with it.
And the whole idea of basically being able to pick music from different genres and mix them together to come up with new music is something new that doesn’t exist right now.
So just like how we can listen to music and learn from it, this can be applied to natural language as well. The models are going to be much more complicated and the grammar’s going to be different. The semantic searches are going to be different. But in essence, music is a language, and, you know, natural language can also be applied to the same model. You feed pieces of information — books, or whatever it is — to the model. It tries to extract features out of it. You can classify books based on this. It’s pretty much the same idea for music as a language and natural languages for language as well.
Were you able to hear how the “perturbed model” resulted in one piece that sounds like a cross between “Mary Had a Little Lamb” and “Oh Susannah” and another that sounds like De Angelus Gloria?
You can hear the complete musical pieces that Janani's cognitive model was able to learn here:
Mary Had a Little Lamb & Oh Susannah
|De Angelus Gloria|
You’ve been listening to Inside IBM Research. I’m your host, Barbara Finkelstein. Our producer is Chris Nay at our IBM Austin Lab. Our music is “Happy Alley” by Kevin MacLeod (a human composer). Share this episode with colleagues and friends — and keep an ear out for our next episode.
Last updated on September 30, 2015
|
OPCFW_CODE
|
How to secure your server: 5 tips for the best server protection
Server protection is one of the biggest concerns for security teams nowadays. Weak protection can open the door for attackers to gain unauthorized access to your server through several types of malware. Today cybercriminals are more aggressive than ever. Set up your server protection with these basic steps to keep attackers away.
SSH keys: a must for server protection
Also known as Secure Shell, the SSH keys are a cryptographic network protocol. The SSH keys provide a higher level of security than a conventional password.
This is because the SSH keys can resist a brute force attack much better. Why? because it is almost impossible to decipher. On the contrary, an ordinary password can be cracked at any time.
When SSH keys are generated, two types of keys are obtained: a private one and a public one. The private one is saved by the administrator while the public one can be shared with other users.
Unlike traditional passwords on servers, the SSH keys have a long string of bits or characters. To crack them, the attacker would take some time trying to decrypt access trying different combinations. This happens because the keys (public and private) must match to unlock the system.
Set up a Firewall
Having a Firewall is one of the basic measures to guarantee server protection. A Firewall is necessary because it controls incoming and outgoing traffic based on a series of security parameters.
These security parameters are applied according to the type of Firewall you use. There are three types of Firewall according to their technology: packet filter firewall, proxy filter, and a stateful firewall. Each of these services offers a different way to access the server.
For instance, a filtered Firewall is one of the simplest mechanisms for server protection. It basically checks the IP address, the port source, destination IP address, destination port, and the type of protocol: IP, TCP, UDP, ICMP. Then compares this information with the specified access parameters and if they match, access to the server is allowed.
A proxy filter is placed as an intermediary between two communicating parties. As an example, we can think of a client computer that requests access to a website. The client must create a session with the proxy server to authenticate and validate the user’s access to the internet before creating a second session to access the website.
Regarding a stateful Firewall, it combines the technology of a proxy and a packet filter. In fact, it is the Firewall most used for server protection since it allows to apply security rules using UFC firewall, nftables, and CSF firewall.
In conclusion, using a Firewall as a server protection tool would help you defend the content, validate access, and control incoming and outgoing traffic through pre-established security parameters.
Establish a VPN
Setting up a VPN (a virtual private network) is essential to access the information of remote servers under the security parameters of a private network.In basic terms, a VPN behaves like a virtual cable between a computer and a server.
This virtual cable creates a tunnel through which encrypted information travels. In this way, the information exchanged between the server and the authorized computer is protected from any intrusion.
In conclusion, a VPN offers security protocols that protect the information that passes through the server and create secure connections through the encryption of data.
Encryption using SSL and TLS
SSL and TSL encryption are an alternative if you do not want to use a VPN tunnel. The SSL (Secure Sockets Layer) is a digital certificate to protect the information transfer.
On the other hand, TSL (Transport Layer Security) is the second generation that follows SSL. The TLS establishes a secure environment between the user and the server to exchange information. It does this by using HTTP, POP3, IMAP, SMTP, NNTP and SSH protocols.
When using SSL and TSL through a KPI (Public Key Infrastructure), you can create, manage, and validate certificates. You can also identify systems with specific users to encrypt communication.
In other words, when you establish authorization certificates, you can trace the identity of each user connected to your private network and encrypt their traffic to prevent the communication from being hacked and strength your server protection.
Don't forget the Isolated Execution!
An Isolated Execution is a way to protect your private network from massive malware infection. Isolated Execution works by creating an isolated environment for the execution of unknown or untrusted applications.
This way you can open suspicious files from unknown sources without compromising the rest of the infrastructure. To imagine this, think about what happens in science fiction movies.
A group of scientists is examining an extraterrestrial creature inside a sealed chamber. This measure prevents the spread of unknown viruses that can kill humans. Maybe our example is exaggerated but it is exactly what Isolated Execution does to protect your infrastructure.
The Isolated Execution acts through “sandboxes”, as the controlled environments are called. You just need to select the file and send it to an isolated execution using the option “Send to Sandbox VM”.
If the file is infected, it will only affect the environment of the Sandbox and it will be deleted after the Sandbox is restarted.
Some advantages of an Isolated Execution are better detection of threats, delay an attack by limiting the speed of propagation and distribution of exploits, and avoid a human error when executing files in vulnerable environments.
|
OPCFW_CODE
|
I’m pleased to announce that I’m finally ready to make my first fully-fledged commercial Mac OS X application available to the world!
SourceTree is a user-friendly Mac OS X front-end for Mercurial and Git, the two most popular distributed version control systems used today. The goal was to create a single tool which could deal with both systems efficiently, and to give a developer quick and intuitive access to the things (s)he needs to just get on with building software.
I thought I’d answer a few background questions on this that I get asked on occasion:
Why Mercurial AND Git?
Other apps tend to concentrate on just one version control system, so why am I supporting two? Well, as a developer I’m regularly coming across projects from both sides of the fence, and in practice I find I need to use both fairly regularly. I personally chose Mercurial for my own projects (and discussed why here), but I still use Git when dealing with other projects, and spend a fair amount of time hopping between the two. It struck me that even though they have their differences, they are both based on the same distributed principles, so having to use two separate tools was just unnecessary. I wanted a single tool which provided a common interface where that made sense, while still exposing the things they do differently where that was useful too. SourceTree 1.0 is my first attempt at that.
Why only Mac OS X?
There were actually multiple reasons for this choice:
- I wanted to learn Objective-C and Cocoa on a real project
- I know from experience that designing for multiple platforms can be a distraction, with more time spent on compatibility issues, and less on functionality - and that’s before you even consider the compromises you have to make, particularly on UI conventions which are far from uniform across platforms. I’ve been a multi-platform developer for more than 10 years, and for a change I just wanted to focus on the end user results and nothing else. I’m aware that schedules slip very easily when you overcomplicate, and I’m already supporting multiple DVCS systems (something I consider to be an important feature point), so I deliberately chose to keep this element simple.
- Mac OS X has become my own platform of choice for most things now. The combination of stability, user-friendliness, Unix underpinnings and well designed hardware match my current needs perfectly. I’m done with the ‘some assembly required’ PCs that I loved tinkering with over the past 15 years
What about Subversion?
A few people have asked me if I plan to add Subversion support too. I actually did intend to originally, until I realised how much time it was going to take to just do a decent job on Mercurial and Git. Within the time constraints, I focussed on the subject areas that I felt I could contribute most to - there are already quite a few Subversion tools out there for Mac OS X, but Mercurial and Git are much less well served, so that’s where I focussed my efforts.
I still have Subversion support tentatively on my work plan, but it’s not top of the list. I think it’s better to do your most important features well before diversifying. Plus, there are problems with Subversion - it’s very, very slow compared to Mercurial and Git, so to match the performance in SourceTree of things like the interactive searches and dynamic refreshing / log population I’d probably have to do a ton of extra caching just so the user wasn’t sat tapping their fingers.
Edit: I made my decision on this: I don’t plan to support local Subversion, but to support operating with Subversion servers with Mercurial and Git locally via hgsvn and git-svn.
Why didn’t you make it open source?
Sorry folks, while I love contributing to open source (I’ve done a bit on SourceTree too, sending a patch back to BWToolkit), making it work as a business is very hard indeed. I half-killed myself trying to combine being an open source project leader and doing other commercial activities at the same time, so now I’m trying a more traditional approach. One thing I learned in the last few years is that there are some sectors & application types where being an open source maintainer is very compatible with also running a business based on that project, and there are others where you can really only do one or the other simultaneously without flaming out. Sucks, but there it is 😉
I have a public, official roadmap for SourceTree and encourage users to suggest things they think should be on there, via the support system. I learned from running an open source project for 10 years that being open about your plans can be a big benefit - users like to know where things are likely to be going, and often have better ideas than the developer on what could do with a bit more spit and polish. They can also tell you what’s important to them, which is crucial for prioritising - as developers we tend to get carried away with things we want to work on, but in the end, it’s scratching the customer’s itch that matters most.
And while I’m really quite proud of SourceTree 1.0, there are plenty of features I’d like to continue to add, and definitely more room for some totally unnecessary beautification which I didn’t have time for in the first release. Hey, this is OS X 😉
SourceTree is available now on a 21-day trial license. Go get it already 😀
|
OPCFW_CODE
|
Below is a post in reply to some very good questions posed by my DP regarding the thinking behind choosing ipads for our school. The below article brings up some great points of consideration for all schools. We are particularly interested in points 4 and 5 especially the "compelling answer to Why Ipads?"
I also like the idea of drawing all ideas back to the 4Cs - see this link for a summary of the thinking behind Creativity Critical Thinking Collaboration and Communication as 21st Century Skills.
See this link for ideas on how to integrate the 4Cs into planing and curriculum implementation.
The 4Cs in Education
The idea of keeping it as a single user device is interesting as I have read conflicting views on this. On the one hand yes it is designed for the apple user experience, on the other hand our students are good at (and used to) collaborating together to solve problems and don't have the adult baggage of 'owning' the device.
BYOD offers a different perspective on this as students WILL have their own device rather than a class pod. I am not to worried about using it as a multi user device at the moment, even if we think 1:1 is the best way eventually, for now we can only develop our thinking with experience. I guess when/if we have 1:1 then collaboration will occur in different ways, so two people talking as they use and then the whole idea of sharing output through blog/portfolio/websites (which we can develop ahead of 1:1).
I find it difficult to predict best practice in these terms with such a new device. As long as we stick to our Key Competencies and our 21st C aspirations of Critical Thinking, Communication, Creativity and Collaboration (The 4Cs) we will be on the right track. One point is, are we confident that the old fashioned laptops are being used for best practice? I don;t remember too many conversations about this in schools but I would link it to the same issues with any device.
Yes, always a good one, how do you justify to the unconverted? This is a different conversation than to the 'converted'. For me the solution to this is could be three fold
- one provide a succinct vision statement for stakeholders (linked to Key Compentencies)
- two provide regular relevant and excellent examples of learning linked to Key Competencies (use blog, class page and classroom displays)
- three to create up to date info/research/classroom studies based on blogs,twitter links etc that stakeholders can easily access if they want to - if only some people read this then we help connect the stakeholders with the pedagogy, one discussion ata coffee group could spread the word.
Oh and there should be a fourth, teachers need to understand the whole point of it so they can easily talk to any stakeholder and confidently enthuse about their learning decisions on the classroom - this can refer back to the succinct statement.
Sounds easy! I will work on the statement...
Update post - Scott McLeod has already done it! See this excellent YouTube summarising the answer to 'Why IT?" or for my purposes 'Why ipads?"
Also, here is a great clip using Sir Ken Robinson's talk on shifting paradigms in education, I love the use of the visual to illustrate the speech.
|
OPCFW_CODE
|
5.8V Electrical Short = half baked Pi (or, questions about damaged but still working Pi)
I just shorted 5.8V+ (4 half-done AA batteries) through my Pi... at most for 3 seconds... it went through somewhere near the ethernet port... whilst connected to the ground of the circuit, also connected to the batteries...
Having got my Raspberry Pi in a little robot to be made to finally work from my Android phone, I may have go a little overexcited and driven down some stairs...
Hence when I picked the remains up at the bottom, the Pi fell out of its holder, which disconnected it from the 5V power supply, but touched some point near the ethernet port against an exposed contact on the battery pack... with the Pi's GPIOs still connected to the motor circuit, with both the grounds connected....
Here is a picture of the damage:
After being left for at most half a day after the incident, it still works...
My questions are:
A) Have I voided the warranty? I can't find anything on this... if so I might as well solder on a reset P6 header...
B) Having scraped through this, what impact would this have on my Pi? The GPIOs, the Graphics, the USB ports all seem to work. Looking at where it has melted through the writing on top of the packaged processor and RAM (this Pi is a model B), would it just be the RAM that is affected? Would this significantly affect the Pi's lifespan?
C) Will it better to leave it, or start using it again? I am trying to get the 'robot' (box on four wheels) ready for going away for Christmas, but I don't want to cause any more damage. I did a quick two minute test on it, and it seems fine, but I don't want to make anything worse...
A) I'm not a legal expert, but yes, normally this would void your warranty. Apparently you hooked up some power at a place where it's not supposed to be... Warranty covers manufacturing defects, not abuse :P
B) This kind of burnout will definitely have a negative impact on its lifespan. I'm surprised it works at all, however you may experience random crashes and reboots due to faulty RAM. It may work fine in winter, then go wonky in the summer due to extra heat.
C) Just use it until you buy a new one; then you'll have a Pi that you can use for experiments or development (until it really dies).
I agree on your answer for C, not quite for A. For B, even though it looks bad in the photo, the actual covering of the RAM/Processor package seems undamaged, it is just the writing on top it seems to have melted...
It's hard to tell from the picture if the heat came from outside or inside the package. Is it a wire that lay on top of your chip and burned through? In that case you may be alright. I assumed from your description that that voltage passed through the chip.
Sorry, I should of mentioned it when I first asked the question :-). I don't think anything was on the chip, even though one of the GPIO connected insulated wires may of touched it. There also seems to be a slight burn on the board to the top right of it, so it might of been that. I probably need to get a heat sink as well, which might improve the lifespan slightly and help with any future incidents. Currently, I have moved the battery holder to the other side of the container, and fixed it there with gaffer tape
|
STACK_EXCHANGE
|
determining the size of the buffer at run time? (socket programming)
In order to determine the type of message received in a UDP packet, there is a need to look at specific buffer element [i] received from "recvfrom" in order to discern the type of message intended. first, i use a buffer in the stack to populate the buffer (of recvfrom), i know the maximum size of the message i should receive.
So say my array buffer is of 300 bytes, and i receive a packets of different sizes (e.g. 30, 80, 210 byes etc)....how can i know the size received (this is because there are few other criteria i test for to determine the nature of the message )
Knowing the size will enable me to use memcpy to an object.
i'm thinking of strlen(udp packet) because it is determined at runtime as opposed to compile time.
the problem is what if the rest of packet was filled with junk....
I appreciate it
recv(2), which is used to receive a UDP packet, returns the number of bytes received.
yep, you are right, it forgot about it the return bytes, recvfrom also returns the number of bytes. thanks a lot
you might also use memset() to set the whole buffer to ;\0' before each call to recvfrom() And the max number of bytes to receive is part of the recvfrom() so the buffer is not overrun.
note: a UDP packet only allows a single read, so the buffer size must be >= to the max udp packet size. Preferably greater by at least 1 byte. The UDP packet might contain null bytes (0x00) so using the string functions would be unreliable/error prone
Thank you for your response, but i didn't resolve my problem yet, the issue resides in sizeof(pack_buff). if for example you declare char* pack_buff[100]. when i print its size it give me 800. Others said that you can't determine the size of the array the pointer points to. so even if i use mem like this memset((char*)&pack_buff,0,sizeof(pack_buff)) , i'd still need to know how to determine the size of puck_buff. I was expecting it to print simply 100 bytes (char = 1 byte).
why does sizeof(pack_buff) gives 800 if it's declared as : char* pack_buff[100];
You have char* pack_buff[100] (100 pointers) but you want char pack_buff[100] (100 bytes). char buf[100]; sizeof(buf) will work (but char* buf; sizeof(buf) won't).
Setting the buffer to NULs is a waste of time. If you're going to treat the buffer as a string, you just need to add one NUL. bytes_recvd = recv(sockfd, buf, sizeof(buf)-1, 0); if (bytes_recvdbuf >= 0) buf[bytes_recvd] = '\0';.
|
STACK_EXCHANGE
|
Using the Instaclustr Monitoring API with Datadog
Instaclustr’s Monitoring API is designed to allow you to integrate the monitoring information from your Instaclustr managed cluster with the monitoring tool used for the entire application. Datadog (datadoghq.com) is a popular platform for monitoring a range of applications. This help article walks you through how to use the Instaclustr Monitoring API with Datadog.
At a high-level, the approach we will take in this article is to install a script on a server you manage that has the Datadog agent installed. This script calls the Instaclustr Monitoring API at regular intervals and passes the information returned to the Datadog agent which reports it to the central Datadog system.
One of our awesome customers has also come up with and uses an alternative approach using AWS lambdas. You can find details here: https://github.com/manheim/InstaCluster-to-Datadog-Lambda
Table of Contents
Prepare Your Environment
Follow these steps to set up your environment:
- Set up a cluster with Instaclustr (see https://www.instaclustr.com/support/documentation/getting-started/creating-a-cluster/)
- Set up a Datadog account (datadoghq.com)
- Install the Datadog agent on the machine you will use to run the integration script (Install instruction are available in the Datadog console).
- Install Python on the machine (https://www.python.org/downloads/)
- Install the pip Python package manager on the machine (https://pip.pypa.io/en/stable/installing/).
- Install the Datadog DogStatsD API package (pip install datadog).
Set Up The Script
We have created a sample script that calls the Instaclustr API and forwards the data to Datadog. The script is available on GitHub here: https://github.com/instaclustr/ICAPI-DataDog. You can download the ZIP file or clone the repository.
One of our customers has extended our open source script above with some enhancements. Depending on your requirements, you may use that script instead. The script is available on GitHub here:
The script (ic2datadog.py) is fairly straightforward. It retrieves a specified list of metrics for all nodes in the cluster and requires a configuration file (called configuration.json) in the format shown below:
"cluster_id":"[instaclustr cluster id]",
"api_key":"[datadog API key]",
"app_key":"[datadog app key]"
"user_name":"[instaclustr user name]",
"api_key":"[instaclustr Monitoring API Key]"
The settings to be added to the configuration file are:
- cluster_id: The Instaclustr cluster ID. Available from the cluster details page on the Instaclustr console
- metrics_list: A comma separated list of metrics to retrieve and pass to Datadog. For a full list of available metrics, see the Instaclustr monitoring API documentation (https://www.instaclustr.com/support/api-integrations/api-reference/monitoring-api/).
- dd_options: Your Datadog API key and Application key. Available from Integrations/APIs in the Datadog console (you may need to create a new app key).
- ic_options: Your Instaclustr user name and API key. These can be created under the Account/API Keys tab of the Instaclustr console. Make sure you copy them as they will only be displayed once. For more information, please view our support documentation page on API Keys.
Run The Script and View The Results
Running the script is a simple matter of ‘python ic2datadog.py’. The script will then run until interrupted.After a minute or so of running, the metrics will be visible in Datadog.
You can see the results by logging into the Datadog console:
To view the gauge metrics (e.g. CPU):
- Go to Metrics/Explorer
- In the Graph text box, start typing ‘instaclustr’. You should see a list of available metrics appear in the format “Instaclustr.[Node IP].[metric name]. Choose the metrics you want and Datadog will draw you a graph.
To view the node status information:
- Go to Monitors/Check Summary
- You should see the Instaclustr node status checks in the list (filter for “Instaclustr” if necessary).
The Instaclustr metrics are now available to use wherever else you would use them in Datadog (dashboards, monitors, etc).
|
OPCFW_CODE
|
Tag Archive / web development
Finding a way to frame and hang this quote somewhere in my office space…
Stop drooling over your tools. Start thinking about what you’re building with them and how that’s affecting the world you’re helping create.
— Aral Balkan (@aral) May 10, 2014
Three months between posts. Sorry for the long delay. Been rebuilding…
Earlier in the year, I went to the IxDA Interaction 13 conference in Toronto, Canada. I have two more conferences for this year.
Next week, An Event Apart DC in Alexandria, VA (another serving…).
Then, in late October, CSS Dev Conference near Denver, Colorado.
As of this entry, finishing contruction/testing of the WordPress theme. Will be writing blog entries related to the whole history/process of Project Charles. Debuting on Monday, August 5.
August is going to be a busy month…
The role of a front-end developer is to build interfaces that give the user access to information with the least amount of interference.
— Ivan Wilson (@iwilsonjr) January 28, 2013
a method of building a visual model of an web application UI/front-end layer
During the past four months, I have been presenting previews of sketches and notes from my notebook via Flickr.
- Previews – Project Ottawa/Second Draft
- Previews – Project Ottawa/Second Draft (Modeling Example)
- Previews – Project Ottawa/First Draft
Project Ottawa [or the Ottawa model/diagrams] started in late February, prior to my trip to Ottawa for Jonathan Snook’s SMACSS Workshop. It was there that I did the first initial sketches post-workshop and a few weeks later, created the first draft.
After some review and criticism from a fellow co-worker, I decided to work on the second draft. This took more twice as much time as the first, with testing and constant revisions. However, at this point, I am writing the final pages and presenting the Second Draft as a microsite in early/mid July.
In short, the model is based on a) recent work on modularizing CSS (via Nicole Sullivan, Jonathan Snook and etc), b) analog to linear algebra, most specifically linear transforms, and c) my ideas on the UI layer, based on a concept called The Information Layer.
With these three items, I designed a visual “code” with symbols representing blocks of programming code and content. By examining mostly my own work, I developed the rules and basics to use the symbols to represent not the parts of the web application system, but also describing interactions withing the system.
The original purpose of this was archival, record keeping for myself. However, I began to realize that there was nothing out there that visually describes the work that FEDs [front-end developers] were doing. We have code and and can talk about CSS, HTML, etc. But, maybe for the first time, there was something that allowed the work to be visualized and be more tangible.
Currently, this is the second draft. Even though models are always being revised and changed, I kept the term “draft” because I wanted to present the working model in a state that was good enough for demonstrations. However, this draft appears, so far, to be close to being stable. Naturally, there is more work in the near future.
As I mentioned before, the final document will be presented as a microsite around early/mid July.
Before that, I created a single page preview, giving a brief overview of the model and its use in “mapping” a single page, AJAX driven site. This is based on the sketches/drawing from the last Flickr preview and the same example will be used as a case study in the final document/microsite.
A couple of months ago, I was looking at a promotional video touting some new technology. Something that was written solely for mobile. They went on about their processes, that by focusing not on the desktop, they were saving file size and increased performance, which is all and fine. Anything to life better, especially on those days when I want information without waiting for everything to compose itself during the morning rush hour. But at the end of this, I wanted to ask this question (which, in hindsight, I should have added to the comments):
Why should there be anything difference between the desktop and mobile?
Before answering, think about it real hard.
Don’t worry. I’ll wait…
The current paradigm of mobile is based on two things; the mobile phone and the tablet. But isn’t this the same sort of think we had before – the PC as the desktop. Didn’t we got over this? I got over this years ago, especially when my previous job required me to work with both Windows and Linux.
What I am thinking is that the current paradigm is just as short-sighted.
Let me put it this way? In a year or less, why not see a mobile device become the desktop?
Why not give the "desktop" have touch-enable events like its mobile cousins?
What I am imagining is the mobile/desktop schism not just disappearing. It simply gets redefined.
If my life is nearly almost located within the confines of my mobile phone, why not go all the way?
Our perception of the desktop is that of the monitor tethered to an external hard drive and et. al. What about a rapidly approaching near future where our version of the "desktop" is a mobile device tethered to cloud storage.
Just a thought?
Don’t wait too long.
|
OPCFW_CODE
|
Data Scientist Resume Examples
- Scraped data from real estates sources
- Built outlier detection, clustering models
- Created model for buy/rent price prediction
- Prepared R shiny web application for model usage
- Handled design, modelling and deployment for a NLP project to automate resume parsing.
- Developed case study on Attention Models and SOTA architecture BERT
- Developed various Machine Learning, Deep Learning and Artificial Intelligence activities as part of the course material
- Taught lectures to nearly 100 professionals in the range 0-25 years of experience.
- Used data mining and data cleaning techniques to analyze job posting descriptions
- Used Python to parse keywords from a self-created dictionary to determine specific trends in Computer Science jobs such as popular programming languages, more need for soft skills, etc.
- Performed Machine Learning algorithms (Linear Regression, Clusters) to determine future trends in the field
- Experienced using many Python Data Science and Machine Learning libraries such as Pandas, Numpys, Seaborn, Scikit Learn and Matplotlib
- Build Attribution and Prediction Models using Time Series , Regression and other Machine Learning techniques to support the IB research analysts
- Build R Shiny applications- For Campaign Analytics, Data Quality Monitoring – which results into efficient functioning and time saving
- Survey Analysis using bespoke analysis methodologies
- Optum Deep Vision(Intelligent Character Recognition):
data scientist (stage)
- Etude des mécanismes de transparence publicitaire de Facebook et de Twitter.
- Développement d’une plateforme pour l’audit des annonces publicitaires fournis par les médias sociaux.
- Développement d’un outil de visualisation des données et Analyse statistique des annonces
- Platform lead for 3 of the company’s software development projects involving SQL Server, MySQL and ParAccel (Actian Matrix)
- Built Python Web Service in Flask API to run the models. It helped in running the model on production.
- Maintained and updated model which used to calculate the Optimal Price at which IBM products should be sold to the client and calculates the Probability of Winning the Bid.
- Effectively communicated analytical results to key stakeholders using strong data visualizations, superior presentation skills and business language to emphasize the so what of the analysis.
- Applied statistical and algebraic techniques to interpret key points from gathered data.
- Coached, developed and motivated team members, providing coaching and mentoring to junior data scientists on Python and data mining techniques.
junior data scientist
- Migrated the Business Intelligence application on Django to vue.js. Also set up to work as progressive web app.
- Configured the AWS SageMaker to build an organized and efficient model environment. As well as developed some Linear Regression, Kmeans and Gradient Boosting models.
- Worked on Faviely doors to improve the availability of the system by predicting the possible factors influencing the maintenance schedules.
- Working on HVAC systems to builld a predictive model to identify the Lockouts/failures to reduce the down time of the system.
- Worked in the R&D team.
- Developed Algorithms for predicting the early onset of disease based on various health vitals.
- Developed Algorithms for predicting risk score of patients
- Performed data analysis on health vitals to bring out the meaning full correlation between several parameters or factors.
- Working with CNC and P2P teams of Philips on developing a Classification based Cash Flow Forecasting Model for getting estimated incoming and outgoing cashflows beforehand. Successfully implemented the first phase of the project by designing an Invoice Collection Prediction Model using Random Forest Classifier
- Clustering,Survival Analysis and Time Series on are client data. Recommendation System on Nature Basket.
- Worked on Chat bot related to domain like – Product & Service,Tours & Travel and HR BOT.
- Descriptive analysis on Media data.
- Leading the area of data science in my section of the organization. Owning end to end responsibility for identifying opportunities to develop data science solutions, in order to improve production and solve problems.
- Designing and developing software to solve complex analysis problems, using machine learning and various statistical models.
- Designing and developing software tools for data analysts, to automate and optimize everyday analysis processes.
- Providing consultancy for data analysts in the organization. Applying innovative solutions to complex analytic problems.
PROFESSIONAL RESUME TEMPLATES
Choose from 20+ tailored-built templates that have landed thousands of
people like you the jobs they were dreaming of.
people like you the jobs they were dreaming of.
|
OPCFW_CODE
|
engine: de-lint
Issue: HOTPOT-639
I urge great caution here and not just mechanical if err != nil { return err }.
I'll proxy my review to @patrickxb since he got a head start on it.
@patrick @joshblum thanks for the reviews, I've addressed the specific review comments.
I'd prefer to undo any logic changes for missing error check before this goes in. If you'd like, you could log the errors. Or just ignore them. I added comments on some of them, but I think maybe we should just not change any logic flow.
A lot of this code has been used by users millions of times and getting clean lint status isn't worth breaking it.
I'm happy to revert specifically-reviewed places where errors should be dropped, but I disagree that we shouldn't change all of them by default. Over time the code will become unmaintainable if we let stuff like this stand. The default should be to return the errors, and fix problems as they arise (and I'm happy to be involved in the debugging/fixing process). Now is as good a time as any to rip the band-aid off. I'll wait until after code freeze is over to merge this of course, so we get lots of time with it before the following release.
Please check out the current PR and let me know which other errors might be unsafe to return (and of course I'll fix any tests that broke as well).
@patrickxb I understand your concern for changes to this code but overall agree with @strib for trying to work through which errors were intentionally vs accidentally dropped.
I view any bugs it does cause/expose as a good learning opportunity to refresh our knowledge of how this code works, and it will only improve our long term code, at a low risk of short term pain.
I agree with the sentiment above and am also happy to help fix any issues that fall out of these changes.
Proposal: let's get low hanging fruits in, when error handling cannot be questioned (so e.g. test code), and make tickets for everything that we are not sure about. I looked through this PR and even though I'm familiar with the code, it's hard to make a decision about logic changes like that without getting a better view - just the call site around the change alone is not enough to judge.
My request is something along the lines of what @zapu is suggesting, and I believe @chrisnojima has suggested elsewhere:
make a PR that satisfies the linter without any logic changes
if there are things that look potentially broken, make tickets to investigate
put the tickets in triage.
i think this will be a lot cleaner than a large PR that is delinting and changing legacy logic.
As far as the comment that logic changes have already happened in other packages as part of this lint project, I didn't review any of those and would have suggested the same thing. The ticket is to satisfy a linter, not change logic.
As far as the comment that logic changes have already happened in other packages as part of this lint project, I didn't review any of those and would have suggested the same thing.
Fair enough. But given that all of those other packges are already done, and done successfully as far as we know, unless there is a specific reason not to do the same for this specific package, that seems like more of an argument to go ahead with this one for consistency's sake. I definitely understand your reluctance, and any future "I told you so"s will be well warranted.
The ticket is to satisfy a linter, not change logic.
I disagree. The only point of satisfying the linter is to make sure the code is readable and maintainable, and if we are just papering over possible bugs, we shouldn't have the linter at all. No one is going to go through and check all of the call sites one by one, -- you and zapu have already made that pretty clear. Even if we make investigation tickets for them, they will never be high priority enough to put into a sprint, and even if somehow they make it into a sprint, there will be nothing better for someone to do then to just change it and see what happens, like we're doing here. I think the only reasonable way to get this done is to do it all at once, like we have done for all the other packages, and see what happens.
By changing the code to return errors by default, we are purposely forcing a situation where the code is either readable, or we find/fix/document the reason it can't be more readable. And as I sad above, this is a great time to do that.
That said, I won't merge without at least one green checkmark.
Seems we're at an impasse here. Anyone else have further thoughts on this? @maxtaco we might need your sagely guidance on this one -- should we make this package confirm with every other package and repo we have (see my arguments above), or should I unwind part of this PR to play it safe for this package only and leave unvetted errors specifically unchecked?
I'm in favor of just getting this in, I don't think any of the individual changes are too scary.
Thanks!
|
GITHUB_ARCHIVE
|
How do you get Big Talker to work with a Samsung M3 (or any Samsung speaker for that matter)? I have it used it with LANnouncer and an android tablet before, but would like to have it pump TTS to the Samsung speaker. I don’t see it listed in the Big Talker config settings. I do have voice notifications working with the M3 using CoRE, so the speaker is working as a TTS audio device, just not clear on how Big Talker can access it.
When you install a BigTalker instance, during the initial setup phase it asks if you want to run in musicPlayer or speechSynthesis mode and gives and example of devices that will show up under those modes. LANnouncer only shows up under speechSynthesis mode for example and Sonos only shows up under musicPlayer. I suspect you have BigTalker installed in speechSynthesis mode only and therefore you cannot see your Samsung speaker.
If you are using this instance of BigTalker, you can install another instance and choose musicPlayer mode during the initial setup screen. It will be a separate instance though, so phrases from one do not automatically speak on the other, you’ll need to configure those. I hope to figure out a way to not require one mode or the other in the future and merge them into the same instance.
Excellent, Brian, that’s definitely the issue, I’ll install another instance of it and give that a whirl. Thanks for the quick response!
Just a quick follow-up, I added another instance of Big Talker for the Samsung M3 and configured a bunch of events, and it’s working great so far. Thanks for the great and EASY-to-use SmartApp, Brian!
And the speech on the M3 is not cutting off as others have reported… I’m wondering if they are using WiFi on their M3s instead of hard-wired ethernet.
Version 1.1.8 and prior of Big Talker are now obsolete. Please update to 1.1.10.
Further discussion should be directed to this thread:
@slagle , perhaps not edit the main post but maybe ability to edit the subject of an old post? ie: a [RELEASE ] post that is now obsolete. It would be nice to change [RELEASE ] to [OBSOLETE ] directing users to the new post.
Your wish is my command
The GitHub integration for the app updating in ST IDE is broken. Can you provide the correct info for this?
Login to graph.api.smartthings.com
Go to My SmartApps
Ensure that you have the following configure as a line:
Click Update from Repo
Click SmartThings-BigTalker (master)
If you are not running the latest version, BigTalker will be listed under “Obsolete”. Check it, check Publish and click Execute Update.
If you made custom code changes yourself, BigTalker will be listed under “Conflicted”. If you wish to get back to the current published code, Check it, check Publish and click Execute Update (you will lose any custom code changes that you made).
I have a ring doorbell pro. I set it up so when the doorbell button is pushed the speakers in the house say “someone is at the front door” which works great. The issue is everytime the doorbell detects motion bigtalker send a push notification of “bigtalker - check configuration. phrase is empty for front door”
Even if I add the doorbell to motion and give it a “there is motion outside” phrase, it will say the phrase, but will still give me that push notification error.
Any remedy for this?
I believe I have just resolved this issue.
Please see the newer BigTalker thread and specifically this post (#12): [Release 1.1.12 3/13/2017] Big Talker - Talk when events occur
Thanks for working on this app! It looks like a lot of code and hours of work.
I installed it with my MIMOlite controller as a doorbell notification input and I used Big Talker to tell in the Sonos speakers phrase like “someones’s at the door”. However the previous playing music doesn’t resume. The auto-generated mp3 file is read first, then resume logic kicks in and is playing the same mp3 second time, like it would fail finding previously playing music.
I am more than happy to enable debugging mode/logs if needed and provide it to you. Feel free to contact me on priv or here. I would love to get it working.
I will try to understand the code and maybe I can fix it too.
I am going to attempt to use Big Talker with VLC thing.
I had a few questions:
- Does Big Talker allow the ability to play custom MP3s for events? For instance I want to play the Star Trek door swoosh sound when I enter my front door.
- Does Big Talker allow for different volumes for different events?
- Is there an http API to invoke Big Talker? When my house alarm is triggered, I can have it call an http URL. I would like to expose a URL that will play a really loud recording of rottweilers barking.
Hi guys nice app
I have a little issue with it though
I’ve set it up to read out when it detects presence from the 4 in our house but it only ever reads my daughters nobody else.
Also I was going to set a time restriction in the shm events but changed my mind but it now won’t let me delete the time it stays red wanting me to select the time and won’t let me back out until I do
The inability to clear the Don’t Speak After field is due to a bug in the Android SmartThings app. I’ve filed a support request for them to fix this, but it may not get attention without others also submitting the same.
To clear the field on Android, first clear the Don’t Talk Before time, press Done then go back into the event and clear the Don’t Talk After time.
Thanks for the reply that’s sorted it cheers.
I didn’t realise you had it reported on your github I did look there as well
Have you any idea why the apps not reporting on all my presence sensors only the one that’s in presence group 1 ?
The 2.0 development version has custom mp3 support now.
See this thread regarding 2.0 development:
Specifically this post regarding mp3 support: https://community.smartthings.com/t/bigtalker-2-0-development/55305/86
Note: 2.0 is a completely new install from 1.x. They can run side by side. It is also an in-development version. Expect it to break and provide feedback.
Is there an event that can trigger based on a sensors temperature being within a range or less/greater than a level, and then have that temperature included in the vocal response? Am hoping I’m just missing something.
You’ll probably have to use something like WebCore to accomplish what you want.
|
OPCFW_CODE
|
multiple nics force routing
Not sure if the question is correctly phrased. Here's my setup: because I'm moving around a lot,, I don't have fixed line Internet access (such as ADSL). However I have a LAN for interconnecting a number of machines - through 100Mb cables. To access the Internet, I use one of those pocket-WiFi devices that allows up to 5 devices to be connected and uses mobile broadband (3G and HSPA). what I'm finding is that if I'm connected to the local network, I can't connect to the Internet on the same machine.
Some research on this site suggested that by adjusting the routing cost, I could get the WiFi link to be preferred. It also suggested I should remove the gateway address. I couldn't figure how to do this (as it was a DNS setup so I changed the actual gateway address to a different one than in the properties.
It seemed to work (at first), but now it seems to not work (intermittently). I'm using Windows-7 (predominantly)
My question is, given my setup, how can I set up the two NICs (cable and wireless), so that if look up an internet address it will use the WiFi and if I want a local address it will use the cable?
What does the 'pocket-WiFi device' do? Has it a Ethernet port? Does it have a fixed IP address of its own? Does it act as DHCP server? ... (A link to the PWD's manual would do wonders to help answer this question).
The PocketWiFi is a HUAWEI E585 http://www.huaweidevice.com/worldwide/productFeatures.do?pinfoId=3073&directoryId=5009&treeId=3619&tab=0
It does act as a DHCP Server (although it doesn't seem to be able to set permanent leases). It doesn't have an Ethernet port. My immediate problem seems to have been solved by @Rain's answer below.
The problem here is that Windows favors wired connections before wireless connections, and quite honestly, this makes sense as a default.
To fix this, navigate to Network and Sharing Center > Change Adapter Settings (or simply type ncpa.cpl in Run). Next, hit Alt to bring up the menu and click Advanced > Advanced Settings.... Then, simply reorder your network connections on the Adapters and Bindings page so that your Wireless interface is first.
A reboot may be required for the settings to go into affect, but it should work.
I now have a related issue: http://superuser.com/questions/456590/prefer-wired-over-wireless-on-same-network-special-case. Anyone got a solution there?
The easiest option is to add a router to your setup if possible. Does the pocket WiFi have a LAN port? If so just throw a router in between your LAN and Internet. Plug the pocket WiFi device in to the WAN port of the router, set the WAN port to DHCP, and the rest of the routing should be automatic. But I'm guessing the problem your facing is that there are no ports on the device.
If there's no way to work a router in there, then we'll have to add a route that tells your computer where to look for local computers, and tell it not to look for anything else there.
Let's assume that your LAN is running <IP_ADDRESS>, and your WiFi device is handing your computer <IP_ADDRESS>. These will have to be on different subnets to work, so if your WiFi is in the same range as the LAN change whichever network is easiest to change.
Open up a command prompt and run ROUTE PRINT. This will give a table of your computer's current routes. Look for an entry like this:
<IP_ADDRESS> <IP_ADDRESS> <IP_ADDRESS> <IP_ADDRESS>
This is assuming that there is a router on your LAN at <IP_ADDRESS>, and your computer is at <IP_ADDRESS> on the LAN. If there is no router, you'll see On-link instead of <IP_ADDRESS>.
We don't want this route. It's telling your computer that it can find any address in the whole wide world (<IP_ADDRESS>) by going through the <IP_ADDRESS> interface. Since this simply isn't true, we need it to realize that it can find any address in the whole wide world by using <IP_ADDRESS>.
Try this:
ROUTE ADD <IP_ADDRESS> <IP_ADDRESS> <IP_ADDRESS> <IP_ADDRESS>
That should tell your computer where to look for machines on the LAN. This won't persist through reboots however, you'll have to add a -p flag for it to stick.
Now, to be honest, this is where I run out of knowledge. I'm not sure if we need to delete the original route, or if we can change it instead. I can't quite make sense of the syntax and how it would differentiate between the two <IP_ADDRESS> destination routes, or if two of them will even exist. I wish I was set up right now to do some testing. This might at least get you going, or get someone with more knowledge in here to correct me.
You can find some more information here:
Command Syntax
Basics of route tables in Windows
Multihoming
|
STACK_EXCHANGE
|
ESEbcli2 DLL Functions Interface
ESEbcli2 DLL Functions Interface
This content is no longer actively maintained. It is provided as is, for anyone who may still be using these technologies, with no warranties or claims of accuracy with regard to the most recent product version or service release. This interface is used for creating applications that back up and restore Microsoft® Exchange Server 2003 databases.
The following table lists the methods of the ESEbcli2 DLL Functions interface.
Name Description ESEBackupFree The ESEBackupFree Function function frees memory allocated by HrESEBackupGetLogAndPatchFiles, HrESEBackupGetTruncateLogFiles, HrESERestoreLoadEnvironment, HrESERestoreAddDatabase, and HrESERestoreGetEnvironment. ESEBackupFreeInstanceInfo The ESEBackupFreeInstanceInfo Function function frees the structures and memory allocated by HrESEBackupPrepare Function. ESEBackupOpenFile The ESEBackupOpenFile function opens the specified database file. ESEBackupRestoreFreeNodes The ESEBackupRestoreFreeNodes Function function frees the memory used by the structure returned by the HrESEBackupRestoreGetNodes Function. ESEBackupRestoreFreeRegisteredInfo The ESEBackupRestoreFreeRegisteredInfo Function function frees the memory used by the structure returned by the HrESEBackupRestoreGetRegistered Function. ESERestoreFree The ESERestoreFree Function function frees memory buffers allocated by other restore functions. ESERestoreFreeEnvironment The ESERestoreFreeEnvironment Function function frees memory allocated by the HrESERestoreLoadEnvironment Function. HrESEBackupCloseFile The HrESEBackupCloseFile Function function closes the section handle for the file that is being backed up. The section handle was provided by the HrESEBackupOpenFile Function. HrESEBackupEnd The HrESEBackupEnd Function function frees all of the resources held for the backup context that is specified. HrESEBackupGetDependencyInfo The HrESEBackupGetDependencyInfo Function function retrieves the names of applications and services that the specified service relies upon. Data for those applications should be backed up with the Exchange Storage Engine (ESE) application data to ensure consistency of the application and its dependencies during restore. HrESEBackupGetLogAndPatchFiles The HrESEBackupGetLogAndPatchFiles Function function returns the list of log and other files related to the storage group that is being backed up. HrESEBackupGetTruncateLogFiles The HrESEBackupGetTruncateLogFiles Function function returns the list of log files related to the storage group that is being backed up. The log files indicated will be deleted after they are backed up. HrESEBackupInstanceEnd The HrESEBackupInstanceEnd Function function finishes the backup process for a storage group. HrESEBackupPrepare The HrESEBackupPrepare Function function opens the connection to Exchange Server 2003. The Exchange server returns information about the storage groups that can be backed up. HrESEBackupReadFile The HrESEBackupReadFile Function function reads data in the application buffer. HrESEBackupRestoreGetNodes The HrESEBackupRestoreGetNodes Function function returns a tree of nodes that lists the Exchange Server 2003 computers in the current domain that can be backed up or restored. HrESEBackupRestoreGetRegistered The HrESEBackupRestoreGetRegistered Function function returns an array of server application names that are registered for backup and restore on the specified computer. HrESEBackupSetup The HrESEBackupSetup Function function informs the ESE that the specified database is to be backed up. HrESEBackupTruncateLogs The HrESEBackupTruncateLogs Function function examines the storage group log files, and deletes those that are no longer needed to fully restore the storage group. HrESERestoreAddDatabase The HrESERestoreAddDatabase Function function informs the ESE that the specified database is to be restored. This function, or HrESERestoreAddDatabaseNS, must be called for each database in the storage group that is being restored. HrESERestoreAddDatabaseNS The HrESERestoreAddDatabaseNS Function function informs the ESE that the specified database is to be restored. This function, or HrESERestoreAddDatabase, must be called for each database in the storage group that is being restored. HrESERestoreClose The HrESERestoreClose Function function informs the ESE that the files have been completely restored from the backup storage media. This does not instruct the ESE to begin recovery. HrESERestoreCloseFile The HrESERestoreCloseFile Function function closes an open database file. HrESERestoreComplete The HrESERestoreComplete Function function informs the ESE application that the database files have been restored, and that the application can now recover the files. Exchange Server 2003 will then attempt to recover the databases and bring them to a consistent state. HrESERestoreGetEnvironment The HrESERestoreGetEnvironment Function function returns a pointer to the structure that holds information about the restore environment. HrESERestoreLoadEnvironment The HrESERestoreLoadEnvironment Function function loads the stored restore environment file into a structure in memory. This function is not necessary for the restore or recovery operations. Instead, this function enables applications to easily determine the status and settings for an ongoing restore operation. HrESERestoreOpen The HrESERestoreOpen Function function opens the connection to Exchange Server 2003. The ESE returns a context handle, and the restore log path if none is specified. HrESERestoreOpenFile The HrESERestoreOpenFile Function function instructs the ESE to open the specified database or log file for restore. This function must be called for each file being restored. Note that for most file types, this function will respond indicating that the file is to be replaced using normal file-system operations. However, it is important that you tell ESE which database log files are being restored. HrESERestoreReopen The HrESERestoreReopen Function function opens the connection to Exchange Server 2003, and loads the restore environment from the folder specified in wszRestoreLogPath. The ESE returns a context handle. HrESERestoreSaveEnvironment The HrESERestoreSaveEnvironment Function function saves to disk the in-memory state of the restore operation. HrESERestoreWriteFile The HrESERestoreWriteFile Function function writes data to the specified database file. HrESESnapshotStart The HrESESnapshotStart Function function should not be used to create backup and restore applications. HrESESnapshotStop The HrESESnapshotStop Function function should not be used to create backup and restore applications.
The esebcli2.dll is a non-dual dynamic-link library (DLL). For this reason, C/C++ must be used to access the backup and restore functions. Use the Microsoft® Windows® LoadLibrary function to load the DLL.
|
OPCFW_CODE
|
Can I use Additive Secret Sharing without Finite Fields without introducing an exploitable bias?
As far as I know Additive Secret Sharing uses a finite field to generate its shares.
The gist of the scheme is that the shares $A_{shares}$ = {$A_1, A_2, ..., A_n$} of a value $A$ on a finite field $N$ can be attained by:
$A\ mod\ N = \sum_{A_i \in A_{shares}} (A_i\ mod\ N)$
If all shares in $A_{shares}$ are positive and no finite field was used, an adversary could infer from any given share that the target $A$ is greater than the observed share. The use of the field stops the adversary from learning that.
Alternatively, I believe we could also use negative shares to compensate. If the shares can be negative, others could be greater than the original value and the adversary would learn nothing from them.
I tested this by sampling shares using a uniform distribution over a large range of positive and negative numbers. I found no bias in the results or clues about the value of $A$ from summing and averaging random shares, however I could be missing something.
Everywhere I searched the use of a finite field was used; however I don't understand if it is mandatory (and if so, why).
My question is: Is the use negative valued shares viable as an alternative to computing the Additive Secret Sharing scheme over a finite field, or does it introduce some exploitable bias and is not secure?
I tested this by sampling shares using a uniform distribution over a large range of positive and negative numbers.
Actually, unless you limit the number of possible values to a finite range, that's actually impossible.
It turns out to be impossible to do a uniform sampling over a set of size $\aleph_0$ (and that's the size of the set of integers); there will inevitably be some values that have a higher probability of being sampled than others. And, because of this nonuniformity, the adversary would obtain some information about the shared secret, given less than $n$ shares.
And, of course, if you do select from a finite range, you still leak information. For example, if the adversary has $n-1$ shares and they sum to -993, then (if the range you select is from $[-1000...1000]$, then he can deduce that the secret is no more than 7.
Now, if we work with a finite field (or group; you don't need a field for what you're doing), this is not an issue; we can select from a finite set uniformly.
Thank you for your answer, with regards to the penultimate paragraph, we could have that the share values were selected from an interval between [-1000 ... 1000], however we could impose no restrictions on their sum. That is, I could sum two shares resulting in a value over 1000, even tough each is smaller than a thousand. As a result the "last" share that settles the difference between $A$ and the remaining shares could have a value outside the boundaries which could leak something. However even this could be mitigated by offloading some of its "excess" to the other shares.
|
STACK_EXCHANGE
|
Please add the language definition for Interlingue (formerly Occidental), a popular constructed language.
ISO code: ie
Country code: N/A
English name: Interlingue (Occidental)
Patience, it was holiday season..
So, why do you actually need that contructed and probably rarely used language to be selectable from the UI listbox, given that now you can enter arbitrary valid language tags in the language combobox to attribute text, i.e. "ie"? I'm asking because that list is already quite cluttered..
Do you plan to submit locale data?
The language has its ISO 639-1 code, and it is a reason enough.
What is more important, is that Libre Office does not support spell check and language detection for custom languages; there is no MS Word roundtrip and some other UI elements does not support it too.
What are the locale data needed for? A language entry and spell checking support is enough.
If we would add an entry for every ISO 639 code we'd have 3000 languages in the list. Anyway, if a spell-checker is installed that supports 'ie' then spell-checking should work even for that custom language. Language detection will of course not come automatically unless someone implements it. MS-Word round-trip *is* possible if you use .docx format (MS-Word doesn't explicitly support Interlingue, does it?). Which UI elements do you mean don't support it?
Locale data is needed for number formats, day and month names in fields, and to be able to select a language as default language. See also https://wiki.documentfoundation.org/LibreOffice_Localization_Guide/How_To_Submit_New_Locale_Data
Though having locale data with a language only that has no default country assigned is much arbitrary, e.g. in separators and date and currency formats.
> if a spell-checker is installed that supports 'ie' then spell-checking should work even for that custom language.
I don't know what it 'should' do, but it definitely doesn't -- check it yourself.
There is only 184 ISO 639-1 codes in existence.
No, locale support is not required.
I don't know what system you're on and which spell-checker files you use for 'ie' and how it is installed, but if it makes you happy I'll add the entry.
Eike Rathke committed a patch related to this issue.
It has been pushed to "master":
tdf#96647 add Interlingue Occidental [ie] to language list
It will be available in 5.2.0.
The patch should be included in the daily builds available at
http://dev-builds.libreoffice.org/daily/ in the next 24-48 hours. More
information about daily builds can be found at:
Affected users are encouraged to test the fix and report feedback.
> I don't know what it 'should' do, but it definitely doesn't -- check it
And where can I download an Interlingue spelling dictionary to check by myself that support “doesn’t work”?
You can create one yourself or rename any existing one.
|
OPCFW_CODE
|
Imaging Lingo... er, ActionScript
Fans of Director's Imaging Lingo, Processing, and similar technologiesand now a whole community of Flash artistsare going to love this. The new BitmapData class provides a long list of bitmap manipulation methods. I won't go through the whole list in detail, as I'm planning on focusing on this in a future column.
The basis of the process is creating a new BitmapData object. This object can have dimensions, can be transparent or opaque, can be filled with color, and so on. Into this object, you can load a bitmap from the library, draw the contents of another BitmapData object or MovieClip, generate a noise pattern, or add a fill. You can copy pixels or a specific channel (red, green, blue, or alpha) from another BitmapData object. You can apply a filter from the aforementioned filters object, and merge (or dissolve pixels between) two BitmapData objects (see Figure 7).
|Figure 7. Additive Blending: The new BitmapData class makes additive blending possible. This example was inspired by a beautiful sphere by Emil Korngold.|
You can also examine bitmap data and build creative interactions. For example, you can get and set pixels, you can isolate areas of specific colorseither by finding a minimum bounding rectangle surrounding all pixels of that color, or even by checking the hitTest at a pixel level. Testing thresholds can be used to perform logical operations based on color. For example, some people have already attempted motion capture with Flash video. (See example collections in Related Resources, left column.)
Hand in hand with the BitmapData class, scripters will use the flash.geom class. This class makes it possible to specify rects and points for manipulating areas of an image, as well as create a transformation matrix that allows for simultaneous transforms of position, scale, and rotation.
Even without new features, developers are going to be interested in new performance improvements and MovieClip rendering options that will influence playback speed and file size. In general, playback performance is significantly improved in Flash Player 8 over previous versions. Perhaps the most tangible change is the introduction of the new MovieClip property, Cache As Bitmap.
Runtime Bitmap Caching
|Figure 8. Bitmap Caching: Bitmap caching can be enabled in the Properties panel (note the "Use runtime bitmap caching" checkbox) for continuous use, or can be enabled and disabled via ActionScript.|
One of the primary reasons that Flash is so much slower than many relevant bitmap-based technologies is that Flash must constantly recalculate all of the vector math required to display its shapes and symbols. If you could convert complicated vectors to bitmaps, they would display much quicker because those processor-intensive calculations would be dramatically reduced. The problem is, of course, that you lose the sharp vector qualities that prompted you to create your assets as vectors in the first place.
Runtime bitmap caching solves this problem. It temporarily caches and displays a bitmap version of a MovieClip, while still maintaining the integrity of the vector information. This allows the player to focus more on other changing elements of your file, without having to update the vector math of the cached MovieClip. This feature can easily be enabled in the Properties Panel (see Figure 8) or can be changed on the fly at any time via ActionScript. And because the vector information is preserved, you don't lose any opportunities for future vector manipulation.
When possible, enabling the ActionScript property MovieClip.opaqueBackground can improve performance even more. This is somewhat equivalent to the reduction in processor drain when switching from a PNG with a transparent background to a JPEG. If Flash can temporarily stop calculating opacities, for example, of a cached bitmap, performance can be further boosted.
Another cool performance optimization, and creative option, is the new MovieClip.scrollRect property. This allows you to quickly scroll movie clips within a cropped rectangle defined by the property. Complex MovieClips and text fields scroll much faster because a bitmap of the clip is scrolled instead of having to recalculate the entire clip from vector data. Just like Director's sprite rect, the scroll rect is based on the MovieClip's bounding rectangle with 0,0 residing in the upper left corner of the clip.
Type Rendering Improvements
Figure 9. Better Anti-aliasing: These examples of various typical fonts at a variety of sizes demonstrate the improved Flash type rendering engine.
Figure 10. Anti-alias Presets: Two new anti-alias presets have been added to the type rendering options, as well as an option to define your own custom anti-alias values.
Developers and designers alike will rejoice in the improved type rendering engine in Flash 8. Fonts have been a perennial weakness of Flash since its inception and this is the first major improvement. In most cases, the new engine makes everyday fonts look vastly better at smaller file sizeseven as small as 6 pt. Figure 9
shows four fonts in sizes from 6 to 10 points. Some fonts fare better than others, but all are significantly improved over prior versions of Flash.
Two presets in the Properties panel allow for improved anti-aliasing optimized for animation and readability (see Figure 10). You can also customize thickness and sharpness values for your own anti-alias values. This makes it possible for you to create settings that are optimized for specific fonts.
ActionScript's new TextRenderer class makes this even easier. For a given font, you can define the anti-alias type (normal, text field control, or advanced), the color type (dark or light), the font style (bold, italic, or bold/italic), and the grid fitting, or type hinting, type (align verticals along the pixel or sub-pixel gridthe latter best for LCD displays). With the "advanced" anti-alias type, you can even define the inside (opaque) and outside (transparent) cutoff thresholds at which the anti-aliasing is applied.
|
OPCFW_CODE
|
checking condition(drawable == drawable)
sorry to ask question already asked. but i am helpless
in my programi have 27 imageview's which can display any of the 3 drawable's i have in my drawable's folder.. and i want these imageview's click to perform a different action for each drawable they contain..but i found that we won't be able to compare two drawable's for equality....
here's the code i wrote, which didn't work....
if(((ImageView)arg0).getDrawable() != getResources().getDrawable(R.drawable.sq))
i have googled it but no one was clear....they were saying we could use setTag() method but i am not able to figure out How . so please have pity on me and tell me how could i use setTag() in solving my problem with a example
Using setTag() is pretty straightforward. Either you use void setTag (Object tag) and you just provide an object to identify your drawable (typically a String) or if you need to store several properties you can use the void setTag (int key, Object tag) to provide an integer key.
Code example (this code has not been tested but it should explain what I mean by itself):
class MyActivity extends Activity {
private static final String FIRST_IMAGE = "firstImage";
private static final String SECOND_IMAGE = "secondImage";
protected void onCreate (Bundle savedInstanceState) {
// Instantiation
ImageView imageView = (ImageView) findViewById(R.id.imgview);
imageView.setImageResource(R.drawable.firstimage);
imageView.setTag(FIRST_IMAGE); // The view is now tagged, we know this view embeds the first image
imageView.setOnClickListener(new ImageClickListener());
ImageView anotherImageView = (ImageView) findViewById(R.id.secondimgview);
anotherImageView.setImageResource(R.drawable.firstimage);
anotherImageView.setTag(FIRST_IMAGE); // The view is now tagged, we know this view embeds the first image
anotherImageView.setOnClickListener(new ImageClickListener());
ImageView secondImageView = (ImageView) findViewById(R.id.thirdimgview);
secondImageView.setImageResource(R.drawable.secondimage);
secondImageView.setTag(SECOND_IMAGE); // The view is now tagged, we know this view embeds the second image
secondImageView.setOnClickListener(new ImageClickListener());
}
private class ImageClickListener implements View.OnClickListener {
// The onClick method compares the view tag to the different possibilities and execute the corresponding action.
public void onClick(View v) {
String tag = (String) v.getTag();
if (v.getTag() == FIRST_IMAGE) {
// perform specific action
} else if (v.getTag() == SECOND_IMAGE) {
}
}
}
}
drawable has no setTag() method with it ...and looks like u dint ustand my question..or i dint ustand ur ans...so plz provide me a small example...wud be a gr8 help
I know that drawable have no setTag() method, I was talking of the ImageView that displays the drawable. For each ImageView, when you instantiate it, you can set its tag to identify easily the drawable it displays. Your code above would then become:
if ((String)((ImageView)arg0).getTag() == DRAWABLE_SQ_TAG)
moystard, many thanks for trying to answer...but i still dint get you....the part i dint get is : "when you instantiate it, you can set its tag to identify easily the drawable it displays"..HOW ?.try to elaborate it with an example...wud luv tht..
Edited my first post to provide an example
thanks a lot dude..for ur answer and patience....in fact i wasn't clear from the first post..my imageview's content(drawable) changes dynamically...it's not constant..so the pblm is finnding out wat the imageview is currently displaying...
|
STACK_EXCHANGE
|
LGPL 2.1 + GPL 3 = problems?
matthew.flaschen at gatech.edu
Mon Jul 16 05:01:41 UTC 2007
Philippe Verdy wrote:
> If your copyright notice that references the appropriate licence to use only
> specifies a precise version of the licence, you can still use a higher
> version according to the terms of this referenced original licence.
This is only relevant to the LGPL2.1 clause allowing use of later
versions of the GPL. The main GPL does not allow use of later versions
> if you really want to exclude any higher version, your copyright notice
> should explicitly contain an "additional restriction" (as defined and
> allowed in the GPL licences), such as:
> <one line for the name of the program and describing what it does>
> Copyright (C) <year> <author name>
> This library is a free software; you may redistribute it or modify
> according to the terms of the "GNU Lesser General Public License"
> version 2.1 as published by the Free Software Foundation, with the
> additional restriction that any later versions are excluded.
It should say later versions of GPL.
> Without this EXPLICIT additional restriction in your copyright notice, the
> original terms of the GPL v2.1 license allows upgrading the version of the
The original terms of LGPLv2.1 you mean.
> referenced license. Almost all GPL- or LGPL-licensed works do not have such
> explicit "additional restriction", and can then be used or conveyed under
> the terms of a newer version.
No. No GPL work can be used under a new version of the license unless
the license notice allows it.
> Reread for example the section 4 of the GPLv3 which explicitly states that
> authors can decide which version of the GPL they accept.
It says no such thing. License is defined as "“This License” refers to
version 3 of the GNU General Public License." which doesn't allow later
versions (unless otherwise allowed in the license notice).
> The "or any later version" is an explicit statement that is now highly recommended, but this
> is not the only option. GPLv3 allows later versions to be acceptable only
> through acceptation by a given proxy. And it also allows an author to
> enumerate the accepted version numbers.
None of these really require a clause in the license.
> These are viewed as additional permissions or restrictions
They're additional permissions, because they can be removed (for
instance you can convert "GPLv3 or later" to "GPLv3")
, according to
> section 7, which also allows changing the terms for the limitation of
> warranty, or allows requiring or prohibiting the preservation of the
> original author names, or
> limiting the usage of their personal names within commercial products:
It certainly doesn't limit this to commercial products (that would be an
OSD 6 violation). It says, "Limiting the use for publicity purposes of
names of licensors or authors of the material". I don't see what this
has to do with choosing later versions of (L)GPL, though.
More information about the License-discuss
|
OPCFW_CODE
|
I’ve started placing objects to be picked up and interacted with. Armakuni can now pick up shurikens according to LN2′s way to do that: it is clear where they are found and once he picks them up they are gone from the background
In this update I am showing how to animate the foreground (the flags in this case) and affect the sprite clipping at the same time.
This is all new code inspired by LN3 and now integrated in the LN2 framework. LN2 itself does include background animations (e.g. the torches in level 3), but not foreground animations (those that affect sprite clipping).
This is all good experience that I am documenting as I go along as part of the framework
I have been spending some time today on the framework itself. Most of the time went into an effort to compare the disassembled sources of level 1 and 3 from The Last Ninja 2.
What I observed is that there is a big deal of invariant code shared between these 2. Such code is very likely to be invariant for all levels but it is loaded in different positions in RAM for each level.
As a short term plan, I’d like to simply break down the sources into smaller ones and try to have a baseline that can be reused for all levels. On top of that customizations should find their place. Of course I’d like to give things a logical structure. I doesn’t really matter if the same code is not located at the same RAM locations for all levels.
As a long term plan, I’d like things to change though. In fact, apart from the tune, the disassembled sources I have now are completely relocatable. What I think would make sense for a new game is to try and make tunes relocatable too so that they can be pushed to the top of the RAM together with most of the level specific code and data: the benefit is that the invariant code would only be loaded once in the lower segment of the RAM, thus reducing duplication and file sizes.
As a consequence of having invariant code at the same RAM locations would be, as example, that the POKEs for unlimited lives, unlimited energy, etc. would be the same for all levels.
I decided to change the inventory to use the graphics from LN3. The demo is converging to where it could and at a very fast pace if you think about the fact I don’t spend more than 20 minutes in a day on it and it’s just me working on it
LN3 rewritten using LN2′s engine
Even better, a demo is available here!
I decided to give Armakuni a few weapons at the start of the level in order for him to wander around safely. Don’t get used to that, Armakuni! He’ll soon have to collect these weapons
I also changed the in-game tune. Current look and feel:
At the same time as I work on the generic framework, I have put together a few more bits for the rewrite of LN3. It’s good experience for me and it motivates me to do even more on the framework itself.
Here’s a video of the current status of this exercise:
While trying to collect unused bytes here and there in the LN2 disassembled source, I figured out an optimization that shortens the code and makes one of two lookup tables unnecessary. If I were able to consolidate all the unused fragments before $2000 (where the bitmap starts) I would probably be able to gain a 512 byte slot for some additional code. In there I would move a few routines so that I could extend the sprite clipping routine to work 100% of the times.
The sprite clipping problem is quite complex to explain so I will try to put together a few images of where the tricky bit is and why John Twiddy’s simplification saved him big time.
As I commented before, this problem seems to have been solved properly in LN3 (well done Stan!).
|
OPCFW_CODE
|
I'm not a Skeptics user, but I click through from the Hot Network Questions section on Stack Overflow. Twice now, that I can recall, I've come across a question where the following are all the case:
- The claim in question comes from a right-wing source
- Per what I believe to be a plain-English reading of the claim, it is straightforwardly true, at least based upon the evidence presented in the accepted answer
- ... and yet that same answer leads by stating, bizarrely, that the claim is false, right before presenting the facts illustrating that it's true
- Further interrogation suggests that the answerer, along with lots of other readers, have read some bizarre alternative meaning into the claim that is plainly not what was actually stated. My attempts to draw out what this meaning is (and thereby understand what the sense is in which the answerer believes the claim to be false) draw derision from other commenters and quite possibly get deleted.
This situation differs from that described in How to proceed when implicit and explicit claims diverge in that that question is premised on the idea that the implicit claim is clear (although I reject that framing of the supposed "implicit claims" listed in that question); I'm instead talking about the scenario where an answerer is responding to a claim that I don't think is implied at all.
I've found these situations frustrating. You can accuse any claimant of being a liar if you take the liberty of reading whatever random outrageous crap you like into their words, rather than addressing their plain meaning; to me, these highly-upvoted answers (and the personal attacks on me that have consistently followed for criticizing them) seem to be motivated by a desire to accuse conservative speakers of dishonesty, rather than to objectively address the truth or falsity of the claim. I fear that skim-readers will see the tl;dr summaries at the top of these answers and go away misled, thinking that the plain-English claim is false, when really only addressing some wacky alternative interpretation of the claim specially selected for its falsity.
Two case studies:
The wording of the original claim:
Sweden’s board of health and welfare and the migration authority just released this pamphlet ... meant to help guide men who marry underage girls through the Swedish welfare system.
My interpretation: the pamphlet contains some helpful information about welfare entitlements in Sweden specific to men married to underage girls. (This is 100% true.)
The accepted answer's interpretation, drawn out through a long, frustrating comment thread with its author: the pamphlet expresses pro-child-marriage views and is a detailed step-by-step guide to claiming benefits for men married to underage girls. (This is 100% false.)
The distinction between my interpretation and MichaelK's interpretation, it seems to me, is that mine is what the original text says and his is something he made up so that he could disagree with it. After I finally figured out what was going on and pointed out the two different interpretations of the answer, he didn't edit his answer to reflect the possible interpretations of the claim and indicate that one was true and one was false; along the way, he throws this at me in chat...
@MarkAmery I don't know why it is so damned important to you to be able to point a finger at the Swedish authorities and make it sound as if they are helping paedophiles, but I am not supporting you in that because that CLEARLY is not the intent of the pamphlet, nor can a reasonable person ever think that from reading the pamphlet. You are being deliberately unreasonable just to be able to push this interpretation, and I think that is low of you.
despite the fact that I'd already told him that I thought it was perfectly reasonable for the Swedish government to release guidance on how the welfare system handles underage marriages, and so it plainly made no sense for this to be my motive - I was just arguing for the plain-English interpretation of the claim.
The wording of the original claim (from Donald Trump):
Wow, word seems to be coming out that the Obama FBI “SPIED ON THE TRUMP CAMPAIGN WITH AN EMBEDDED INFORMANT.” Andrew McCarthy says, “There’s probably no doubt that they had at least one confidential informant in the campaign.” If so, this is bigger than Watergate!
My interpretation: somebody in the Trump campaign provided information to the FBI. (This, based upon the evidence in the accepted answer, seems to be 100% true.)
The accepted answer's interpretation: I don't really have a clear picture of that - maybe that the FBI deliberately planted some... fake employees, or something?... inside the Trump campaign whose whole purpose was to spy upon Trump? In any case the answer suggests that there was a "leak" but that there was not in fact an "embedded informant", which seems bizarre to me since those sound like two ways of saying the same thing.
I questioned this apparent self-contradiction in the answer, and now after a long comment thread my comment has been deleted. Except perhaps by asking this Meta question and drawing the answerer's attention to it, I'll never learn what his interpretation of words was by which it's meaningful for a leaker not to be an "embedded informant", and he may never see that I found his interpretation incoherent.
What should I do in situations like these? I struggle to comprehend how these answerers have managed to reach their interpretations of the claims at stake, and I don't like leaving tl;drs up asserting that a claim is false on an answer that actually completely vindicates (my interpretation of) the claim. Is it reasonable for me to suggest edits to these answers such that they lead by listing the two different interpretations of the claim and indicating which is true and which is false? How can I engage with answerers like this constructively to try to figure out what their interpretation of the claim is without getting sucked into a political flame war, accused of dishonesty, and censored by the mods?
|
OPCFW_CODE
|
Copy to Publish Directory output of PreBuild Event
Inside my csproj I have a pre-build event where I run the build of Vue js project. It outputs to a "dist" folder, and that is loaded by an cshtml file.
In the csproj file I have a reference to the dist folder and I tell it to copy to publish directory:
<ItemGroup>
<Content Include="dist\**" CopyToPublishDirectory="Always" />
</ItemGroup>
On publish, MsBuild seems to be trying to copy the files in the dist folder that exist before the pre-build event starts. Is there an way to get MsBuild to copy the contents of the folder after the pre-build event?
In order to support all possible publish mechanisms that tooling (VS etc.) supports, I suggest setting it up similar to how the in-box angular template works:
<Target Name="PublishDistFiles" AfterTargets="ComputeFilesToPublish">
<ItemGroup>
<DistFiles Include="dist\**" />
<ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)">
<RelativePath>%(DistFiles.Identity)</RelativePath>
<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
</ResolvedFileToPublish>
</ItemGroup>
</Target>
With "single file publish" available, I would add <ExcludeFromSingleFile>true</ExcludeFromSingleFile> after the CopyToPublishDirectory element.
You can add a step to do it manually using the Copy task
<Target Name="MyCopyStep" AfterTargets="AfterPublish">
<ItemGroup>
<MyDistFiles Include="dist\**" />
</ItemGroup>
<Copy SourceFiles="@(MyDistFiles)" DestinationFiles="@(MyDistFiles->'$(PublishDir)\dist\%(RecursiveDir)%(Filename)%(Extension)')"/>
</Target>
Note that the copy taks relies on the system using file system publish. There may be other publishing or packing operations not executing the copy stage of publishing (such as tool package creation) which is why I recommend hooking after ComputeFilesToPublish as described in my answer.
@MartinUllrich Thanks for the clarification. I actually don't know that much about csproj build steps and posted what's likely a hack implemented a former employee. Is there a place where the targets (like ComputeFilesToPublish) and tasks (like CopyToPublishDirectory) are documented or is the only way to copy Microsoft's templates?
Documentation on build steps is spread out over multiple docs, but I don't know of good ones about .net core specific steps (publish for instance). I usually use the build source code for the .NET Sdk or MSBuild itself on GitHub to determine how to extend my build definitions.
|
STACK_EXCHANGE
|
Switching dimmer switch with 2 black wires to regular switch
I just wanted to confirm before wiring this up. When I took this off I see that two black wires going to the dimmer switch when it is normally black, white, and ground. Not sure how you tell which is exactly the neutral I need to be plugging into both screws on the new switch. I found the picture below which looks like the situation I am in currently:
Not really sure how I can tell which one is supposed to be to the load and which one is to the service panel but I want to assume that the single line that is going into the dimmer would be where I would connect the hot wire and the wing nut that has multiple wires going into it to power both of the other lights would be going to the load but just wanted to confirm based on my current setup which is this (the black wire that broke off that is in the top right went to the wing nut at the bottom just for clarification):
Using a voltmeter/multimeter on an apppropriate AC voltage range (typically 200V for most multimeters and US wiring) or a non-contact voltage tester, determine (carefully) which wires are live with the breaker on and the switch turned off. Then go turn the breaker off again.
Those are the ones connected to the supply/line/circuit breaker, and it's very common for multiple wires to be joined to feed unswitched hot on to other devices on the same circuit.
None of those are wing-nuts, they are just standard wirenuts/Marrettes.
That's a Wing-Nut® (for joining wires.) It has wings, to make tightening it properly easier. Your wire nuts are not tightened properly. You don't have to replace with this sort, but you do need to tighten the ones you have properly when reassembling this mess. Several of your ground wires are not in a wirenut at all, they are just loosely twisted together. That Will Not Do.
You can either learn to connect wirenuts properly, or you can change to something easier like Wago Lever-Locks® or Ideal's push-in connectors. Beware unlisted import knockoffs of those.
See also: https://diy.stackexchange.com/a/180926/18078 and https://diy.stackexchange.com/a/84354/18078 and https://diy.stackexchange.com/a/77881/18078
Appreciate the insight, I did notice the copper wire in the back just sitting next to one another as you mentioned and I will get those fixed up. Just took this apart in order to switch it over to a normal switch due to the flickering issues but I never had to switch a dimmer switch to a normal switch. I did also notice they didn't correctly twist the wires into one another before putting on the wire nuts so that they would have a proper secure connection which I will also fix when fixing whatever is going on here. Appreciate the reply
If you line the wires up correctly and twist the nut hard enough, as you're supposed to, the twist happens because of the nut. One of the posts I linked has a whole string of nigh-religious dissent around that subject, The makers advise either as acceptable, but the END result when you open up a box should be twisted no matter which camp you are in. You should proactively open up other boxes to see if this worker has been there and done this quality of work.
two black wires going to the dimmer switch when it is normally black, white, and ground.
There's your trouble. You're used to hooking black, white, ground to loads and sockets. You're thinking "a switch must be exactly the same as that other stuff".
Nope, a switch is not a load, it interrupts power to the load.
Since current flows in loops, interrupting the hot side is enough, so it only connects to two hot wires and does not need to talk to neutral at all.
If you have seen switches wired black-white-bare in the past, those are pre-2011 switch loops which were not properly marked - the white wire should have been re-identified with black paint or tape to indicate it is actually always-hot.
Post 2011, actual neutral must be brought down to switches, because smart switches need it.
Not really sure how I can tell which one is supposed to be to the load and which one is to the service panel
A plain switch does not care, and you don't need to either.
But if you did care, the supply hot generally serves more than one load, and the switched-hot generally serves just the light, so typically has a lonely hot wire. Typically.
|
STACK_EXCHANGE
|
Python developers record their dependencies on other Python packages in requirements.txt and test-requirements .txt . But some packages havedependencies outside of python and we should document thesedependencies as well so that operators, developers, and CI systems
know what needs to be available for their programs.
Bindep is a solution to this, it allows a repo to document binarydependencies in a single file. It even enablies specification of which distribution the package belongs to - Debian, Fedora, Gentoo, openSUSE, RHEL, SLES and Ubuntu have different package names - and allows profiles, like a test profile.
Bindep is one of the tools the OpenStack Infrastructure team has written and maintains. It is in use by already over 130 repositories.
For better bindep adoption, in the just released bindep 2.1.0 we have changed the name of the default file used by bindep from other-requirements.txt to bindep.txt and have pushed changes to master branches of repositories for this.
Projects are encouraged to create their own bindep files. Besides documenting what is required, it also gives a speedup in running tests since you install only what you need and not all packages that some other project might need and are installed by default. Each test system comes with a basic installation and then we either add the repo defined package list or the large default list.
In the OpenStack CI infrastructure, we use the "test" profile for installation of packages. This allows projects to document their run time dependencies - the default packages - and the additional packages needed for testing.
Be aware that bindep is not used by devstack based tests, those have their own way to document dependencies.
A side effect is that your tests run faster, since they have less packages to install. A Ubuntu Xenial test node installs 140 packages and that can take between 2 and 5 minutes. With a smaller bindep file, this can change.
Let's look at the log file for a normal installation with using the default dependencies:
2 upgraded, 139 newly installed, 0 to remove and 41 not upgraded
Need to get 148 MB of archives.
After this operation, 665 MB of additional disk space will be used.
Compare this with the openstack-manuals repostiry that uses bindep - this example was 20 seconds and not minutes:
0 upgraded, 17 newly installed, 0 to remove and 43 not upgraded.
Need to get 35.8 MB of archives.
After this operation, 128 MB of additional disk space will be used.
If you want to learn more about bindep, read the Infra Manual on package requirements
or the bindep manual.
If you have questions about bindep, feel free to ask the Infra team on #openstack-infra.
Thanks to Anita for reviewing and improving this blog post and to the OpenStack Infra team that maintains bindep, especially to Jeremy Stanley and Robert Collins.
|
OPCFW_CODE
|
div not containing content floats have been cleared
I have a bunch of divs I'm trying to organize here. The ones I'm having trouble with have been given a red border and a blue border, they are suppose to appear one after the other. They do actually do this, red coming first and blue second, but there are several divs that are in the red layer, and instead of containing them it just sits on top of them.
There are floated layers, but I thought I had cleared this with a div called clear-fix. The main containing div, the one with the inset box shadow had this same problem and I had fixed it with that div, it now contains all the layers in it properly, so I'm not sure why it's not also doing this to the red layer. Help please!
http://jsfiddle.net/2TAaC/6/
can you show screen shot what exactly you want to achieve that will better because in your code so many divs are present and it is complicated without knowing what output we want.
yes http://klossal.com/portfolio/final.jpg the stack of images in contained in the div with the red border, but in that fiddle the div sits on top of the contents in stead of surrounding it. I'd like it to surround what ever is inside of it (in this case the image stack) and push the div with the blue layer down. It seems like it would all work fine but the red layer just isn't wrapping around the contents.
check the answer and let me now if any issues are still present.
so I'd like to do it with out setting a height. If that's possible.
Dear Check the updated fiddle.
all you have to do is...
<div id="level4" style="top: 0px; left: 0px; z-index: 4; position: relative;">
You can find this div inside your graphics DIV.
Demo : http://jsfiddle.net/2TAaC/10/
I was hoping to have it regulate it's own height and wrap accordingly, I couldn't figure out why it wouldn't do that on it's own. I've definitely had divs with out heights that are able to do this.
your other idv's have height such as spacecontainer and all they are conflicting, let me check once again..
well for example the div id="text" which is the blue div, the div is just the height of the invisible image contained inside of it. There is not set height.
yes! that's awesome, can you tell me what you changed, I can't find it.
there is a DIV in your graphics, just do position :relative instead of absolute this will work fine.
check the fiddle i removed the height and its working. no issues.
See the updated fiddle:
Fiddle: http://jsfiddle.net/2TAaC/12/
Demo: http://jsfiddle.net/2TAaC/12/embedded/result/
Note: Your image is not present in blue border div thats why it is not taking the 150px height so thats why i put the height 150px; you just placed your imahe in blue border div and remove the height. it will work.
I was hoping to have it regulate it's own height and wrap accordingly, I couldn't figure out why it wouldn't do that on it's own. I've definitely had divs with out heights that are able to do this.
that's great, can you tell me what you changed, haha I can't see it.
Thank god it was totally confusing issue. I just set the position:inherit on level-4 element and set the height 150px temporary on blue border div. If still any issues you can tel me. And thanks for selecting my answer.
oh, wait, but the red div still has the height in it, it can't wrap around contents? Giving the blue layer a height doesn't actually change anything because that div was already taking the height of the contents in it, so the image of 150 pixels high is what makes the blue layer have height, if I take away the height property of that div and change it to 200 pixels the blue layer becomes higher...make sense? It can be done with out a height property being set to the div.
|
STACK_EXCHANGE
|
I suddenly have this issue with at least one of my z-wave dimmers. For the second time in one week the dimmer switched to 100% in the middle of the night turning the light in my bedroom on full wack
It did not amuse my wife ( and me too as you would understand).
I try to read the log in the morning but it doesn’t show the lines anymore of a few hours back. Is there anyway I could get the log of a specific time?
I did some searching of randomly switching fibaro modules and found an article called Phantom Menace on the fibaro forum. I can not make much of it besides it seems to have something to do with association that are made .
Is someone familiar with this problem?
What would be a good alternative for my z-wave fibaro nodes as it is always something with these things. I would be happy to migrate to zigbee or some other protocol. At least not as buggy as fibaro.
Reading such stuff first thing that comes into my mind is always: have you opened any port on your router to make OH available via public internet not using the myopenhab.org service ?
In case OH is available on the internet there are people that find your installation and they may play such a game to make you aware of that it is not a good idea to open any service in an unprotected way.
Normally log files are rotated and you should have there a list of files. E.g.:
-rw-r--r-- 1 openhab openhab 2724289 Mar 12 09:56 events.log
-rw-r--r-- 1 openhab openhab 2227660 Mar 12 09:00 openhab.log
-rw-r--r-- 1 openhab openhab 1789193 Mar 9 21:01 events.log.7.gz
-rw-r--r-- 1 openhab openhab 1801535 Feb 23 15:06 events.log.6.gz
-rw-r--r-- 1 openhab openhab 1795417 Feb 8 10:03 events.log.5.gz
-rw-r--r-- 1 openhab openhab 286639 Jan 23 23:11 events.log.4.gz
-rw-r--r-- 1 openhab openhab 38314 Jan 23 23:11 openhab.log.7.gz
You need to login to your OH host and manually look into the files. frontail only shows a subset of the complete history.
Thanks Wolfgang, I found the logs and I feel soooo stupid.
I messed around last week with my installation as my IP-adres constantly changed. I searched if I could do some DDNS routing and finally configured that inside my mikrotik router. I thought I also needed to forward my openhab instance port 80 to reach it from the outside, Stupid of course as it is not secured by a password. I should have known.
So now in the log I can see someone randomly pressed a few buttons as also my garage door was open. Luckily I live in a safe neighbourhood
I removed the forwarded port so that should fix this in the future.
Besides accessing my openhab-web could a badwilling person had done more harm to my installation or is that not possible?
Just need to figure out how I can secure my login in combination with DDNS. Will need to dive in this information some more.
|
OPCFW_CODE
|
A while back I posted some thoughts on using a message framework as a way of abstracting part of the distributed computing problem. I stated I would start and write some code and so I have. On the way I had to make some interesting design decisions I wanted to share with you. This article is about these decisions and what where some of the reasons I made these.
Targeting the right objects
One of the first things I had to make a choice about, was how to target the right objects. Basically I needed two scenarios.
The first scenario is that Object A needs some task to be done by an object of type B. In this case Object A doesn't care what instance of type B actually does the job. It doesn't even care where this instance might be. In this case Object A should send out a message targeting type B. This means using a fully qualified class name as the receiver address.
The second scenario is that Object A sends out a message to notify other objects but it doesn't know which objects might be interested in the message. In this case other objects should register themselves with a dispatcher, stating that they are interested in some type of message. In this case Object A shouldn't include a receiver address at all.
But what about interaction with the system? This wouldn't be sufficient in a multi user scenario, because you can't target a specific instance of an object, which would be needed to actually give feedback to the user. I've decided that being able to target a specific object is not part of the low level messaging. To achieve a scenario where you need users to only receive their own messages back, there should be a layer on top of the existing messaging.
I chose this approach because it would be highly impractical to keep track of individual object instances across the entire ecosystem. Remember that one statement is that an object can be anywhere and it shouldn't matter for sending a message. By introducing unique addresses per instance, now it starts to matter, because a sender needs to know these unique addresses.
Another important aspect of the messaging framework is the communication between different processes. Again an object can be anywhere, but also anything .NET should work. I've investigated on using WCF, because this is obviously a very flexible and configurable way of communicating, scaling out from local (on the same machine) to global (across the internet). However it does put some bloat on the framework for people who don't want to use it.
I've also considered using Microsoft Messaging Queue, which kind of makes sense for a messaging framework. However this would involve building several tools around MQ to make sure all process types would be able to access the message queue and it would also put a deployment strain on the framework, beyond what I find is acceptable.
I settled on WCF, which doesn't mean that MQ is completely discarded. In the future it might prove valuable to actually build libraries around MQ to incorporate it into the framework. It does mean that by default every process includes code for both a WCF host and a client. However these are only instantiated as soon as a process is attached to another process.
To indicate that an object can recieve messages, it is needed that it implements the IMessageReceiver interface, which only contains one method (at least for now) that's called RecieveMessage. It takes a single parameter of type IMessage. The IMessage type has two properties, Sender and Receiver which are both strings (for containing the fully qualified classnames of the objects involved in the communication).
To make sure objects don't have to worry about getting a message to the receiver, every process should have exactly one MessageDispatcher object, who's sole responsibillity is to collect messages and dispatch them to the right objects and/or to other processes as needed.
To attach to another process in the ecosystem, the process in question has to make a request to the other process. Because we already have a message in the processes interface, that's exactly what I want to use. I've introduced a FrameworkMessage type that implements IMessage, so I can send it across to another process, in effect registering as a client to that process. To make sure the other process can also communicate back, it's response is to send a registration message as well, to make itself known as a client process as well.
As you can see there is a lot to think about when building a messaging framework. I'll keep working on this every now and then. Next time I'll try to get some code in.
|
OPCFW_CODE
|
OfficeTabs adds tabs to MS Office applications Excel, PowerPoint, and Word, allowing for tabbed navigation across several open documents in the manner familiarized by Firefox or Chrome, etc (see screenshot above)
The free version allows for personal, non commercial use, and provides a number of options such as the ability to place tabs on top, bottom, and even left or right of the main display area, clicking on the empty space next to a tab to start a new document, customizing the style and color of displayed tabs, and a few other options.
Tabs are so ubiquitous (and so useful) that it’s a mystery to me why Microsoft doesn’t feature them as navigational tools in MS Office by default, or in Windows explorer, for that matter. This software adds tabs to MS Office and does it so well.
Here are some PROs and a Wish list:
- Can place tabs on any of the 4 corners of the screen. Actually a very nice implementation of tabs on the side (see screenshot to the right).
- Tabs look good, and are highly customizable, can be moved around via drag and drop
- Compatible with all versions of MS Office (2003, 2007, 2010)
- Adds keyboard shortcuts to scroll through/select tabs.
- Adds a few more seconds to your Excel/Word/PowerPoint startup time (invariably)
- Right clicking a tab displays options that are only available in the paid version. Which is ok, except that I wish there was a way to remove these and to NOT have to always see the functions that are not available in the free version.
Differences between free and paid versions: the free version is for personal non commercial use. Also, the paid version has a few options available on tab right click, including locking a workbook, opening workbook path folder, etc… none of which are particularly worth having in my opinion.
The verdict: this is the second free software that I’ve seen that adds tabs to MS Office (both programs having the same name), although the program reviewed here has the better looking tabs by far. So, while the concept is not entirely new, it nonetheless holds a certain attraction and can be quite useful; check it out for yourself.
[Thanks go to reader Brockman for letting me know about this program].
Note on installing the right version: when choosing whether to download and install the 32 bit or 64 bit version of this software, your decision should be based on whether you have the 32bit or 64bit version of MS Office, and not your Windows operating system.
Version Tested: 6.51
Compatibility: Requires MS Office 2003, 2007, or 2010; 32bit or 64bit.
Go to the program home page to download the latest version (approx 2.91 megs).
|
OPCFW_CODE
|
Welcome to part-7 of the series. We will first learn about passing data from children to parent component.
In react we generally pass data(or props) from parent component to child component. But if we want to pass props from Child to Parent component, we need to pass methods.
We will create a class based component ParentComponent inside components folder. It have a local state of parentName and a greetParent(), which will show an alert containing this state.
In the render part, we are simply calling a ChildComponent.
Now, we will create the ChildComponent component. It just have a simple button, with text Greet Parent.
Also, include the ParentComponent in App.js by adding it.
Now, we want to click the button in the ChildComponent and execute the greetParent() in ParentComponent.
So, we pass the method itself as props to the ChildComponent. So, in our ParentComponent.js file, we are passing the props greetHandler to the ChildComponent.
Back to the ChildComponent now, we will use this props greetHandler in the event handler of the button.
Now, in localhost, when we click at the button the pop-up with the parentName state will be displayed.
Now, if we want to pass some parameter from the ChildComponent to the ParentComponent, we need to use the arrow function. After that we can pass a parameter in it.
Next, in ParentComponent.js we have access to it as a parameter to the function. So, we are using it from the parameter.
Now, back in localhost when we click the button we will see both state and the parameter in the alert.
In react there are four ways by which we can conditional render a part of the code. We will look into them now.
We will create a class based component UserGreeting inside components folder. Here, we have a state variable isLoggedIn which is false now. We also have two h1s.
Next, we will include UserGreeting in the App.js file.
Now, both the h1s will be shown in localhost. But as you might have guessed, we want to show only one depending on the isLoggedIn variable.
Method #1 The first method which we will learn, is the if-else statement. Here, we are using the if statement to check if the isLoggedIn is true and displaying Welcome Nabendu in the case, or else Welcome Guest is displayed.
Now, in localhost Welcome Guest will be displayed because isLoggedIn is false.
Method #2 The second method is to add the html in element variables. Here, we have created a variable message and then inside the if-else, assigned different statements to it.
Now, in the return statement showing the message.
Now, in localhost Welcome Nabendu will be displayed because isLoggedIn is true.
Method #3 The third method uses the ternary conditional operator and is used the most in React codes. The benefit of this approach is that it can be used inside the JSX.
In this method we use a ternary operator, which is equivalent to if-else statement. Here, we are checking if this.state.isLoggedIn is true and show Welcome Nabendu in the case, or else Welcome Guest is displayed.
Method #4 The fourth method uses the short-circuit operator &&, but it can be used only if we want to render something or nothing.
Here, we are checking if this.state.isLoggedIn is true then only execute the other statement i.e. Welcome Nabendu.
This completes part-7 of the series.
|
OPCFW_CODE
|
UI/UX of Square Terminal
Redefining the User Experience of Square Products
I spent 12 weeks during the summer of 2018 interning with the design team at Square, San Francisco. I got to work on a wide variety of projects ranging from designing a complete new keyboard for their new Square Terminal along with exploring and re-envisioning the Tutorial feature for their main Point of Sale app.
I also did some User Experience design for their new projects which cannot be shared here due to Non Disclosure Agreements.
User Interface / User Experience
Product Design Intern
Mark Grossman, Brian Stegall
Design Brief 1:
Design a tutorial which helps merchants to customize the payment flow to best suit their business
The existing Square Point of Sale app had few on-boarding tutorials which helped the users to learn how to do tasks like - Accepting Payments and Issuing a Refund. However, there was a need of adding more tutorial topics to the On-boarding feature. The current style used the hand holding format to guide the user through a task. But the same was not applicable for other tasks like - Customizing the checkout flow of the device.
Current Tutorial Style
The current payment flow needed the merchant to go through many layers in the setting tree to choose from four set of options. This non linear task flow made it difficult to follow the hand holding format. Moreover, the setting terminologies like ‘Customer Checkout Screen’, ‘Standard Checkout Screen’ etc were not very clear to a first time user.
The new direction proposed a dedicated section in the Tutorials tab that could help the merchants achieve their goal in one linear flow. We also proposed to have more visuals to help the merchants understand what each terminology meant and looked in the payment flow.
This new format provided a clear understanding to the merchants of the terminologies of the various screens involved in the checkout process
They could get a visual preview of the flow at the time of selecting the screens, instead of waiting to see it on a real transaction.
Simplification of the choosing process with the usage of only radio buttons, instead of a mix of various elements.
Having all tutorials at one place, making it convenient to visit later if needed.
Design Brief 2:
Designing a new keyboard for Square Terminal in alignment with the design language developed for Square Register.
The Square Register boasts of a massive 13” static screen which enriches the keyboard with comfortable real estate. Whereas, the Square terminal had a much smaller screen with hand held usage, making it difficult to adapt the keyboard design language on the smaller device.
Landscape format of the Register vs the Portrait format of the Terminal.
Scalable design to adapt to multiple languages for international market.
I was able to work directly with developers and was able to get the Keyboard feature into development during my tenure.
I got a wholistic experience of working on 2 UI based projects and 1 big UX Project (not shown here) during my internship.
Apart from work, I had a great time exploring the West coast and in the process, made some great friends and memories.
|
OPCFW_CODE
|
Posted by evanx
on March 24, 2007 at 3:47 AM PDT
Google don't seem to have a cohesive plan except to hire anyone who's anyone away from other companies who need them eg. Sun et al, for what evil purpose? To compete with the opensource desktop, it seems.
i was just reading a blog "Google Hasn't Improved Search" where the author says,
Whenever they release a new product, it does nothing to improve the existing search offering. Whenever they do something to change the existing search offering, it's a minor layout move. Whenever there's a new product in labs, it's no longer outlandish, it doesn't make me think and again; is no improvement or change to their core offering: SEARCH.
I agreed and added the following ranting comment about Google, which i repeat below.
But enough about you, let's talk about me, what i think
Google don't seem to have a cohesive plan except to hire anyone who's anyone, away from other companies who need them eg. Sun et al, for what purpose? To take over the information world, to displace and replace everyone else? It doesn't feel right.
Rather than leverage and contribute to opensource projects such as Thunderbird and OpenOffice, to web-enable those as stateless RIAs or something, they develop web-based apps for mail, calendaring and docs which are relatively poor in terms of usability and features compared to the opensource desktop equivalents. But at least the web apps have ads! ;)
Why don't they use Thunderbird/XUL for gmail? Why not add "G-drive" integration to OpenOffice? And throw in ads somewhere to pay for it.
So i think they are taking the industry backwards with their "web-browser-hobbled desktop knock-offs" when they could be driving the whole rich opensource desktop forward to leverage their internet infrastructure, and not just the browser.
But enough about my thoughts, let's talk about my plans
It looks like i'm back on the road again. Every six months to a year, i decide to switch locations from Johannesburg and Cape Town or visa versa, and it seems that time has come again.
I'm gonna be spending the next few months flitting around the country between Johannesburg, Durban and Cape Town, visiting various family and friends, with my two notebooks and my backpack. One "notebook" is a computer with GSM 3G connection, so i'm good to go. My other very important notebook is paper-based. Cos I find the best way to design software
i want to write, is in a coffee shop with pen and paper and a double cappucino or three.
After i've overstayed my welcome and exhausted the hospitality of my family and friends once again, i'm thinking of heading over to Europe and getting a job like my mom says i should. I'm justing waiting for Netbeans6 with its minty goodies like JSR295, JSR296, JPA... Ooo, what a good time to reenter to job market! :)
|
OPCFW_CODE
|
Getting Started with Amazon Comprehend custom entities
Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text. We released an update to Amazon Comprehend enabling support for private, custom entity types. Customers can now train state-of-the-art entity recognition models to extract their specific terms, completely automatically. No machine learning experience required. For example, financial companies can analyze market reports for terms and language related to bankruptcy activity. Manufacturing companies can now analyze logistics documents looking for specific parts IDs and route numbers. Combining custom entities with Comprehend’s pre-trained entities enables a complete picture of what is contained within text data. Use this data to look for trends, anomalies, or specific conditions within text.
Training the service to learn custom entity types is as easy as providing a set of those entities and a set of real-world documents that contain them. To get started, put together a list of entities. Gather these from a product database, or an Excel file that your company uses for business planning. For this blog post, we are going to train a custom entity type to extract key financial terms from financial documents.
The CSV format requires “Text” and “Type” as column headers. The text contains the entities and the type is the name of the entity type we are about to create.
Next, collect a set of documents that contain those entities in the context of how they are used. The service needs a minimum of 1,000 documents containing at least one or more of the entities from our list.
Next, configure the training job to read the entity list CSV from one folder, and the text file containing all of the documents (one per line) from another folder.
After both sets of training data are prepared, train the model. This process can take a few minutes, or multiple hours depending on the size and complexity of the training data. Using automatic machine learning, Amazon Comprehend selects the right algorithm, sampling and tuning the models to find the right combination that works best for the data.
When the training is completed the custom model is ready to go. Below, view the trained model along with some helpful metadata.
To start analyzing documents looking for custom entities, either use the portal or APIs via the AWS SDK. In this example, create an analysis job in the portal to analyze financial documents using the custom entity type:
This is how the same job submission would look using our CLI:
Take a look at the job output by opening the JSON response object and look at our custom entities. For each entity, the service also returns a confidence score metric. If there are lower confidence scores, fix them by adding more documents that contain that specific entity.
Below, view the custom model extracted financial terms.
Please visit the product forum to provide feedback or get some help.
About the author
Nino Bice is a Sr. Product Manager leading product for Amazon Comprehend, AWS’s natural language processing service.
|
OPCFW_CODE
|
Per account sounds notification sounds have disappeared
Describe the bug
With the older version you could configure a different notification sound per account. This has now disappeared. Now only the system setting is used.
Environment:
K-9 Mail version: 5.800
Android version: 10 with EMUI <IP_ADDRESS>
Device: Honor 10 (COL-L29)
Account type: IMAP with Inbox set for Push
Additional context
Could be considered related to #5118
+1
I'm finding the new app rather annoying in this regard. I'm getting dozens of default system notification sounds playing every hour. I previously had my email sound set to a quiet, short pop sound, which I was able to easily ignore if busy. Now I don't have a clue what's trying to tell me stuff, and I have to pull my phone out all the time to check.
Seems like there is also no way to turn off the notification sound, but still keep notifications showing up on the lock screen? Seems like a total design bomb in my opinion. This bug as well as https://github.com/k9mail/k-9/issues/5452 have made the new k-9 mail totally unusable. Any idea what the latest version of k-9 mail is that I can downgrade to that does not have this notification bug?
Now only the system setting is used.
The system settings screen (that you can reach by pressing the last item in K-9's notification settings screen) allows setting a custom notification sound for each account.
The system settings screen (that you can reach by pressing the last item in K-9's notification settings screen) allows setting a custom notification sound for each account.
That seems to not be the case for every Android version. For me - with EMUI 9.1 (based on Android 9) - that screen does not allow to change the notification sound. But if I open the apps notifications settings manually from the Android settings menu, I do get the option to configure a notification sound per account. With one big problem there: The screen doesn't show for which account I'm configuring that right now - all accounts have the same description texts. So it's trial and error.
With EMUI 10, in App notifications, I only get the options to Show notifications, Messages and Miscellaneous per account. The account is named but there is no option to change the sound.
Directly tap on the text "Messages".
Directly tap on the text "Messages".
Ok, thanks.
Wow! No wonder I could not find it. Agree with:
That UI design is not very intuitive...
I guess it makes the bug invalid, but anyone can raise another bug about the UI.
I guess it makes the bug invalid, but anyone can raise another bug about the UI.
Is this confusing UI a k-9 UI or android UI? Hard to tell because I don't know if android is super customizable there and k-9 mail used the customization in a confusing way or if android forces it to be that confusing.
It's confusing UI in Android. You get the same screen for all other apps, too, and K-9 can't do anything about it.
Seems the name "messages" is set by the app, as other apps appear to have other names for their notifications (eg qksms simply has "default", while Google Maps has things like "assistant driving mode", both of which have a notification sound option).
Perhaps some of the confusion could be avoided by renaming the notification type to "New Email" or something similar so it's clearer that this is what we need to press, since I personally assumed "messages" meant things the app was trying to tell me, like connection error or whatever.
IMHO, the Messages text should not have a check box beside it if it is the entry point for a sub-menu. There is a feint separator between the Messages text and the check box but this is not intuitive.
Personally I'd remove the check box and use a slider at the top or the next menu.
|
GITHUB_ARCHIVE
|
Manage Microsoft and MySQL databases with FreeSQL
FreeSQL is a free and easy to use utility, that will help you query and manage databases, such as Microsoft Access, Microsoft SQL Server, Oracle and MySQL.
Features: Connect to a database using profiles; Browse data – Select an object (table, view, synonym) and browse or edit the data; History – FreeSQL saves each query into a file so that it can be recalled later; Beautify query text – It is possible to beautify a single query, a single query contained in a script or a text selection (note that only SELECT, INSERT, UPDATE and DELETE commands can be beautified for now); Execute query – If the box contains a sequence of queries (where each one is separated by slash ‘/’), you must position the cursor into any point of the query you want to execute (you can also select a part of the query, e.g. a subquery, and only the selected text will be interpreted); Specify parameters that FreeSQL will replace for you when the query is executed (when you run the query, a window will open for parameter replacement values);
Execute as script – with this option you can run a sequence of non SELECT queries, separated by slash ‘/’ (for example you can run with a single clic all of these statements: CREATE TABLE MYFRIENDS (ID NUMERIC(5), NAME VARCHAR(30)) / INSERT INTO MYFRIENDS VALUES(1,’JOHN’) / INSERT INTO MYFRIENDS VALUES(2,’MAGDA’) / INSERT INTO MYFRIENDS VALUES(3,’SUSY’) / ; Import an XML file or a TEXT file into a new table or into an existing table (if no table is selected this function will create a new one, without index and primary key; Export the results of a query in XML, CSV or flat text file format; Print the contents of the grid into your default browser; Drag a table, synonym or view into the text-area (with the Shift key pressed) to create a query for this object; Drag a field into the text-area to drop the complete field text (object name.field name);
If the Shift key is pressed when you drag a table, synonym or view, only the object name will be dropped; Altering data – FreeSQL must be executed with the -plus option, to activate the import function, commit menu, non SELECT and script statements (e. g. C:\FreeSql\Freesql.exe -plus); A SELECT query produces an output that can be modified into the grid – when you are ready to send your data to the database just confirm by clicking on Apply changes from the Commit menu – all other instructions (like INSERT, UPDATE and DELETE) are auto-committed.
|
OPCFW_CODE
|
Like many things in life, changes to your favorite applications and services can be inevitable – and really painful, especially if they don’t match your personal preferences. Discord, one of the largest VOIP services in the world, has recently undergone major updates. Updates which, to be honest, have not all been received with general praise.
Discordes has a new response menu that, in addition to the messages in the chat window…. window. All messages.
Curiously, every time the mouse pointer hovers over a chat message, a new response function automatically appears. This has caused a lot of anger in the communities and the Discord forums – there are wires everywhere trying to figure out how to disable this feature.
Don’t worry, WePC is here to help you. We’ll tell you exactly how to disable this feature once and for all before crashing and deleting your Discordance account.
So, without further ado, let’s get to the heart of the matter.
More: Can’t you hear the people from Discord? Here’s the deal.
Correct/Cancel entry in the new response menu
You will be glad to know that removing the response menu is actually quite simple and only requires a few clicks.
# Step 1: Go to user settings
Navigate to the User Settings tab to the right of your user name and click.
#Step 2: Go to text and images
Search and select the Text and images tab.
#step 3: Disable Show emotion on messages
On the Emoji tab, locate the Show Emotions on Messages setting and uncheck.
If it is not selected, return to the main center. This should completely deactivate the new reaction function. What I mean is, you can’t leave any more answers on the posts. But at least it puts an end to the boredom on the menu, right?
More: How to create a Discord server?
New User InterfaceInconsistencies
So it can be said that the recently released Discord update didn’t do much for the big fan base. There are literally hundreds of posts on the forums that show indignation about him! But what exactly has changed?
Let’s take a good look.
One of the biggest differences, as mentioned earlier, is the new reaction function that appears every time you float above a position. See below:
This is by far the most important reason for the recent increase. But we’ve already taken care of that. Let’s go to the next big update problem….
How the message you’re flying over now is getting darker. It’s awful. Again, a lot of people are not very happy about this.
The last problem people have with the upgrade (as far as I know) is the new unread message. Each time you receive a new message, a marker appears on the screen where you stopped after the previous session. The best way to show it is to show it to you. Well, here it is. Get ready.
So here is our complete overview of how we can remedy this new and unfortunate reaction. I hope your life is less painful!
Anyway, leave a comment below and let us know what you think of this article, or better yet, visit our Community Centre where you can discuss the new Discord interface in detail.
why can't i react on discord servers,discord reaction button missing,discord reaction ban,discord missing reactions,discord limit reactions,discord reaction spammer,discord bot remove reaction,discord how to remove emotes,discord can t add reactions,how to ban emojis on discord,how to react on discord mobile 2020,how to see who reacted on discord pc,how to add reaction roles to discord,emoji in discord,enable emoji reactions notification,discord letter reactions,integromat discord bot,how to send long messages on discord,discord automation bot,airtable discord bot,integromat support,integromat waiting for data,discord.js reaction collector,messagereactionadd,discord bot react to message python,discord.js reaction role github,discordapierror: unknown emoji,send message after reaction discord js,reaction blocked discord,how to disable reactions on discord server,how to react to get a role discord,how to remove reactions on discord mobile,how to react to messages on discord 2020,discord reactions
|
OPCFW_CODE
|
Data Expo 2011 - Deep-water horizon oil spill
Download informational flyer (pdf).
The data set is available for download here.
This data only contains comprehensive measurements on temperature and salinity. How this is related to petrochemical compounds is unclear, but it would be interesting if the oil could be detected from these measurements. NOAA's job was to predict the currents to obtain some idea where the oil was headed. Measurements are also very sparse in the geographic space.
The web site http://www.noaa.gov/sciencemissions/bpoilspill.html provides data related to the April 20, 2010, BP oil spill. The oil spill arose from an explosion at the Deepwater Horizon rig, at the location 28.44oN, 88.23oW. Below is a compilation of the data available at that site:
The EPA web site http://www.epa.gov/bpspill/download.html provides water chemistry data focused on petrochemical products, sampled near the coastline in the months since the oil spill. Here is the data:
Here are additional links that might be interesting to visit: Major Oil Spill site http://www.restorethegulf.gov/, NY Times article http://www.nytimes.com/interactive/2010/04/28/us/20100428-spill-map.html.
The US Fish and Wildlife Service has been collecting data on affected wildlife. This is data from http://gomex.erma.noaa.gov/erma.html :
Keep checking this space for more data. If anything more comprehensive or directly related to oil products emerges we'll post it here!
The aim of the data expo is to provide a graphical summary of important features of the data set. This is intentionally vague in order to allow different entries to focus on different aspects of the data, but here are a few ideas to get you started:
- Are the extents of the oil spill visible in measurements on temperature and salinity?
- Are the temperature and salinity measurements consistent between measuring devices?
- Is there a spatiotemporal pattern in the temperature and salinity measurements that might indicate presence of oil?
- Where did the oil go?
- Is there evidence of contamination in the fisheries?
- What locations along the coastline are most in danger of contamination from oil?
- What species of birds were the most affected and where were they when found?
To enter the competition you need to submit a poster to the data expo session at the 2011 JSM (more details to follow closer to the time). As well as a printed poster, you're also welcome to bring along your laptop to present interactive/animated components. After the JSM, we'll also organize a special journal issue (tentatively, Computational Statistics and Data Analysis) where you can submit a paper that describes your methodology in more detail.
How to enter
Student entries and/or group entries are welcome. If the competition garners sufficient entries we will award separate prizes for student submissions. Educators may want to incorporate this competition as a class project.
The use of dynamic and/or interactive graphics is likely to be very useful, at least in the exploration of the data. This is encouraged, and we will attempt to provide support for laptops within the poster session so that dynamic/interactive graphics can be included in the poster presentation.
- Send email expression of interest/intention by Jan 15, 2011, to email@example.com.
- Submit abstract to http://www.amstat.org/meetings/jsm/2011/index.cfm by Feb 1, 2011. (It doesn't need to be perfect or very specific. Abstracts can be modified up until May.)
- Bring poster entry to JSM July 30-August 4, 2011.
There will be cash prizes awarded to the best posters (as judged by a panel of experts). As well as the honour and glory, the best entries will receive an invitation to publish their work in a journal article.
Prizes for school age entries and undergraduates still to be negotiated.
- First place: $500
- Second place: $300
- Third place: $200
|
OPCFW_CODE
|
Lecture “Statistics, Probability and Applications in Bioinformatics (SPAB)”
Specialized course, B.Sc. and M.Sc. Bioinformatics, Saarland University.
Elective course, M.Sc. CS, DS/AI, and related, Saarland University (substitute for StatsLab).
||Mathematics (especially analysis and linear algebra); solid programming skills (required!)
||9 ECTS credits
||4V+2Ü (4 hours of lectures, 2 hours of tutorials per week)
||using the SIC Course Management system (CMS)
||available after registration in the Course Management system
Target audience (IMPORTANT!)
This course is offered as a specialized lecture in the B.Sc. or M.Sc. Bioinformatics degrees, for 9 ECTS credits.
It can be taken by students of other programs, subject to agreement with the instructor.
In particular, it can be taken as a substitute for StatsLab if necessary, but you cannot get full credits for both StatsLab and this course.
The following topics will be covered in the course; additional topics may be included, depending on time and current events.
- uniform distributions on finite sets (Laplace spaces)
- elementary and advanced combinatorics
- finite, discrete and continuous probability spaces
- random variables
- discrete probability distributions and where they come from
- probability distributions and OOP, scipy.stats
- conditional probabilities
- Bayes’ Theorem, simple version
- continuous probability distributions
- a glimpse at measure theory
- posterior distributions
- descriptive statistics
- moments of random variables (expectation, variance, …)
- parametric models
- statistical testing (frequentist view)
- statistical testing (Bayesian view)
- parameter estimation: moments, maximum likelihood
- parameter estimation in mixture models: EM algorithm
- regression (simple linear, logistic, robust, multiple)
- regularization and Bayesian view on estimation
- robust regression
- multiple regression
- logistic regression
- stochastic processes
- Poisson process
- models for random sequences
- Markov chains
- Markov processes: models of sequence evolution
- Hidden Markov Models and applications
- Probabilistic Arthimetic Automata and applications
- distribution of DNA Motif Occurrences: compound Poisson
- significance of pairwise sequence alignment
- the PCR process
Applications in Bioinformatics
- tests for differential gene expression
- Bayesian view on differential gene expression
- high-dimensionality low-sample problem
- multiple testing
Please refer to the SIC CMS for all details.
Lecture Times: Tue+Thu 08:30 - 10:00 via Zoom (link for registered students in the CMS)
|
OPCFW_CODE
|
If I fill a vending machine with \$0.50 sodas and sell them for \$1, am I creating 50 cents of wealth for the economy?
This is a very simplified example.
If I fill a vending machine with sodas that cost \$0.50 each and someone comes along and pays \$1 for it, am I creating an extra \$0.50 of wealth for the economy? The wealth being that I've taken the beverage and delivered it to someone in a convenient format.
I am referring to the idea that the economy is not a zero sum game. If I buy something and sell it for a higher price we are not just shifting money around the economy but actually creating new money (assuming I provide some value in the process, but that is determined by the consumer).
So in the above example with the vending machine, does the federal reserve have to create an extra 50 cents of the currency in order to avoid deflation?
Another example: the company I work for generates an extra \$10m in sales this year than last year. So people in the economy have transferred \$10m from them to us. That equates to the creation of \$10m of wealth, right? And the fed just then issue another \$10m to keep the currency at the same level?
Note: I am a total laymen without any education in economics. These concepts - currency values, value / wealth creation - are simply of interest to me
It is not so much wealth as income (or better still value added). Though you may have prevented a retail shop selling the same soda, possibly at the same or a different price, so reducing their income. Money is just the mechanism for payment, so is transferred but not created or destroyed in this transaction.
You have to be very careful with your definitions here. Following the basic definitions (e.g. see use of terms in Mankiw Principles of Economics):
Value: Value depends on your marginal utility and typically it is the amount of money you are willing to give up for something.
Wealth: Is by the value of net assets.
Income: Is the net return to some activity.
You are right that economy is not necessary zero-sum game (although there are economic interactions which can be zero-sum), but your use of the term wealth is improper.
If someone pays you to fill the soda machine, you have costs \$0.5 but get paid \$1 your income will be \$0.5, you created at least \$0.5 of value because the person who paid you clearly valued the filled vending machine for at least \$1 otherwise they would not exchange that 1 dollar for your work, but at the same time you had 0.5 costs, so you are creating at least \$0.5 extra value.
The wealth is accumulated by saving (whether you save by buying house or putting money on your account value of your net assets increases). Hence if you have \$0.5 income and you save it all you also create \$0.5 wealth. If you consume half you only create \$0.25 wealth and so on.
So in the above example with the vending machine, does the federal reserve have to create an extra 50 cents of the currency in order to avoid deflation?
Here the answer is maybe. Inflation/deflation does not depend just on the value of output that you create.
Inflation/deflation is just positive/negative change in the price level which is in turn determined by the money market equilibrium. The money market equilibrium, in its simplest form is given by equation of exchange (See Mankiw Macroeconomics pp 87) as:
$$MV=PY$$
Where $M$ is the money supply, $V$ velocity of money, $P$ price level and $Y$ output.
Solving for price level and log-linearizing (so % changes in right hand side variables give us the % change in $P$) we get:
$$\ln P=\ln M + \ln V − \ln Y$$.
If you perform those extra services you increase real output by \$0.5, ceteris paribus, you are correct that Fed would have to create additional \$0.5 dollars to prevent inflation. But the, ceteris paribus, assumption might not hold in real life because velocity of money can change as well (plus in more complex models of money market equilibrium expectations play role and so on). Consequently, the correct answer here would be maybe. Under ceteris paribus assumption yes, but it is not guaranteed.
PS: Note sometimes even economists in casual speech equate wealth creation with value creation, so you might have heard some pundit or economists use similar example to argue wealth was created, but that is not how the two terms are rigorously defined in the literature.
Thanks for the answer, it's helped clear some of my questions up. I am a total layman with zero education in economics - I maybe should said that in the question - so a lot of your answer is way over my head.
If you consume half, you are paying someone else to create wealth, right? The wealth is still created, it just can't be attributed to you?
@user253751 no you are creating an income for someone else. Unless someone actually saves that portion of income to become wealth it won’t. It is theoretically possible to consume whole production even with economic growth (eg every year consume everything that is produced even if production increases) meaning there would be zero wealth added. For adding wealth, under definition of wealth in economics, you have to save portion of your income
@1muflon1 oh is that because of the economic definition of saving? If I buy something and don't consume it, it's savings and therefore wealth?
@user253751 it is because of both definition of saving and wealth. A wealth is by definition value of net assets over some period. In order to turn income into net asset you can’t consume it. Eg if we define our time period as a day if at t=1 you receive 10USD that is your income for that period. Saving is income not consumed so if you decide to consume just 5USD in t=1 your saving will be 5 and that becomes your wealth next period. In next period you could dissave by consuming your income of t=2 plus some of your net assets
|
STACK_EXCHANGE
|
Can I download and write a disk image to partition without saving as a file?
I was wondering if it would be possible to write a disk image file directly to a partition without saving it as a file first. Something like
dd if="http://diskimages.com/i_am_a_disk_image.img" of=/dev/sdb1 bs=2M
I would also accept an answer in C or Python because I know how to compile them.
The point is how you can verify the correctness of the download. The other thing is resuming an aborted download. With today's disk sizes you should afford to save it in a file first.
@U.Windl You can do both just fine, whether you write to a file or a partition.
@U.Windl The problem with phrases like "today's disk sizes" is that it assumes a desktop, laptop or server. There are many contexts even today where you're not so lucky with space such as SBCs.
This is actually trivial. You can write to the device just like it's a file, and there are commands for directly downloading content and either writing it to a file or writing it to "stdout".
As the user root you can simply:
curl https://www.example.com/some/file.img > /dev/sdb
Where /dev/sdb is your hard drive.
This is not generally recommended but will work just fine and is useful in very small devices without much disk space.
Incidently it would be more normal to write a disk image to a disk /dev/sdb not a partition /dev/sdb1.
You can use wget -O option to print to disk directly:
wget -O /dev/sdb http://diskimages.com/i_am_a_disk_image.img
You don't really need to use dd.
Consider rephrasing your answer - in its current form it suggests -O has something to do with accessing the disk, while it is simply a flag for output file.
Yes, you can actually. Use something like this:
curl 'http://diskimages.com/i_am_a_disk_image.img' | dd conv=sync,noerror bs=2M of=/dev/sdX
Out of curiosity, why bother with dd in this context?
well two reasons, main is that OP gave an example with dd in the initial question, second is, the advantages of block size specifications (if required) and to avoid stopping due to read errors (high latency over network in this case, etc) with conv=sync,noerror. Given the question, they are not mandatory options but if one is to use dd, they would be nice to have
Curious. I'm not convinced that's desirable when reading from a pipe. but hey. Thanks for the info.
It's also handy to split into two processes if you want them to run as different users. A cautious operator would download as non-root user, but may need to be root to write to the disk partition.
@TobySpeight interestingly you don't need need dd for this since with redirection the stdout file descriptor is opened by the shell and passed in. Though I take your point about it being marginally simpler to achieve.
I was thinking of curl … | sudo dd … when the invoking user can't write to the device.
@TobySpeight: Right, that's easier to type than what Philip was getting at. But it is possible. Given a root shell, you redirect stdout to disk, then drop privileges and exec curl or wget so it runs as nobody or some other non-root account. e.g. sudo sh -c "sudo -u nobody wget ... > /dev/sdX"
@Alex: What actual (re)blocking do you want dd to do? If that actually happened, e.g. writing a full output block after a short read of an input block from the pipe, it would munge your data. When is dd suitable for copying data? (or, when are read() and write() partial) / Is it better to use cat, dd, pv or another procedure to copy a CD/DVD?.
Most programs other than dd pick some reasonable block size for their output, like at least 4k or 8k. 64k or so would be better to trade off system call overhead with CPU L2 cache staying hot for the kernel to copy the data into a buffer queue, but since this isn't O_DIRECT it's not an actual hardware write block size. Specifying bs= something non-default is important for dd only because its default is tiny, 512 bytes, so system-call overhead is a killer.
-1 because of the conv=sync possible data corruption issues. Using tee or input redirection is safer.
@Josh it's right the opposite in this case. If I wanted en example of data corruption from an image transfered over the network with ssh or curl or whatever, you wouldn't be able to give one to me. The funny thing is I've done this on lots of machines in a real production environment, and a real world example shows that there was no data corruption. Somebody will google this question someday and they will come accross the answers. I will never offer a "quirky test man cave linux environment solution" to anyone. If somebody chooses dd for this, my answer will do the job.
I disagree. conv=sync,noerror means that if curl writes less than 2 MiB, dd will pad that 2 MiB block with zeros, corrupting the data. This is unlikely to happen over a reliable, fast link, but it's possible. Review the questions that Peter Cordes posted -- I, too, used to use dd all the time, and answers on this site made me reconsider my views.
@Alex "... to avoid stopping due to read errors (high latency over network in this case, etc)" - That is not a thing. curl handles the download. If there are network issues, either curl will stop producing data for a bit, and resume outputting data when the network side recovers. In that case, dd adds nothing. Or the network issues are so bad that TCP timeouts occur, curl aborts, the pipe is torn down, and dd is terminated as well. Again, dd adds nothing. I also concur with the other commenters warning of dd problems. dd is usually harmful.
|
STACK_EXCHANGE
|
These functions create bindings in an environment. The bindings are
... as pairs of names and values or expressions.
env_bind() is equivalent to evaluating a
<- expression within
the given environment. This function should take care of the
majority of use cases but the other variants can be useful for
env_bind() takes named values which are bound in
env_bind() is equivalent to
env_bind_fns() takes named functions and creates active
.env. This is equivalent to
base::makeActiveBinding(). An active binding executes a
function each time it is evaluated.
env_bind_fns() takes dots
with implicit splicing, so that you can supply
both named functions and named lists of functions.
If these functions are closures they are lexically
scoped in the environment that they bundle. These functions can
thus refer to symbols from this enclosure that are not actually
in scope in the dynamic environment where the active bindings are
invoked. This allows creative solutions to difficult problems
(see the implementations of
dplyr::do() methods for an
env_bind_exprs() takes named expressions. This is equivalent
base::delayedAssign(). The arguments are captured with
exprs() (and thus support call-splicing and unquoting) and
assigned to symbols in
.env. These expressions are not
evaluated immediately but lazily. Once a symbol is evaluated, the
corresponding expression is evaluated in turn and its value is
bound to the symbol (the expressions are thus evaluated only
once, if at all).
Pairs of names and expressions, values or functions. These dots support tidy dots features.
The input object
.env, with its associated environment
modified in place, invisibly.
Since environments have reference semantics (see relevant section
env() documentation), modifying the bindings of an environment
produces effects in all other references to that environment. In
env_bind() and its variants have side effects.
As they are called primarily for their side effects, these functions follow the convention of returning their input invisibly.
# env_bind() is a programmatic way of assigning values to symbols # with `<-`. We can add bindings in the current environment: env_bind(get_env(), foo = "bar") foo#> "bar"# Or modify those bindings: bar <- "bar" env_bind(get_env(), bar = "BAR") bar#> "BAR"# It is most useful to change other environments: my_env <- env() env_bind(my_env, foo = "foo") my_env$foo#> "foo"# A useful feature is to splice lists of named values: vals <- list(a = 10, b = 20) env_bind(my_env, !!! vals, c = 30) my_env$b#> 20my_env$c#> 30# You can also unquote a variable referring to a symbol or a string # as binding name: var <- "baz" env_bind(my_env, !!var := "BAZ") my_env$baz#> "BAZ"# env_bind() and its variants are generic over formulas, quosures # and closures. To illustrate this, let's create a closure function # referring to undefined bindings: fn <- function() list(a, b) fn <- set_env(fn, child_env("base")) # This would fail if run since `a` etc are not defined in the # enclosure of fn() (a child of the base environment): # fn() # Let's define those symbols: env_bind(fn, a = "a", b = "b") # fn() now sees the objects: fn()#> [] #> "a" #> #> [] #> "b" #>
|
OPCFW_CODE
|
Windows 8 is released by Microsoft which had major changes to the UI experience to compete with the other major mobile platforms like Android and iOS. As with all Microsoft related stuff, Windows mobile app development environment can be set up only on the Windows machines.
You need to download Visual Studio from the below link.
Visual Studio is the IDE supported by Microsoft for developing apps using Microsoft related technologies.
Once the Visual Studio is downloaded, double click on downloaded installer to start the installation process and follow the instructions to complete the installation of visual studio.
Once visual studio is installed, launch visual studio.
Windows Phone tools
Select File –> New –> Project.
In the New Project Wizard, select Installed –> Templates –> Windows 8 and the tools required for creating Windows mobile project are displayed.
If the tools are already installed, then the other project related fields like Name, Location etc. are enabled, else the below Message is displayed.
Click on ‘Install’ to download the Windows Phone related tools and SDK’s. Once the installer is downloaded, double click on the installer to download and installer the Windows phone related tools.
If the Visual studio is open, close it and click on Retry in the below screen to continue installation of Windows mobile development environment.
The installer displays the list of packages that will be installed. Just click on ‘Next’ to continue installation.
Click on ‘Update’ to install the selected features. It will take considerable time based on bandwidth available since the installer that has to download is almost 4 GB of data.
Windows Phone project
Once the installation is completed, launch the Visual Studio to create and run a sample windows app.
Select File –> New –> Project. And in the New Project wizard, select Installed –> Templates –> Visual Basic –> Windows –> Windows 8 –> Blank App(Windows Phone)
Provide the App Name and the Location where you want to create the project and click on ‘OK’.
Visual Studio creates the Windows Phone project.
Select the emulator on which you want to run the demo app you created.
This will start building the project and launching the emulator.
The Windows Phone emulator is not fast and so it will take some time before the emulator is started.
The windows phone emulator is launched and the demo app is launched on the emulator. The app doesn’t display any thing which is totally fine, since it is just a template app and we didn’t make any changes to display in the app.
Issues with Emulator
While launching the emulator, you may run into some issues like the below. You may see the below message even if the virtualization is enabled on the system.
To work around this problem, go to Control panel –> Programs and Features –> Turn Windows features On or Off.
Check if all the Hyper-V features highlighted above are enabled. If they are not enabled, enable them. If they are enabled already, disable them, restart the machine and enable them again and restart the machine.
Then you may see the below message, just click on ‘Retry’ if you see a similar message.
Then you may see the below window, which asks whether you want to connect the emulator to the internet. Click on Ýes’ to allow the emulator to connect to the internet. During this process the network of your system may be disconnected for a while before it reconnects automatically.
The above steps should help in resolving any issues with launching the emulator.
Apache Cordova Environment
To create a Cordova project using Visual studio, select File –> New –> Project
In the New Project wizard, search for Cordova and the New Project wizard displays the list of Cordova project templates available.
If the required Windows Mobile tools etc. are not downloaded along with the version of Visual Studio you downloaded, the below message will be displayed.
Click on Ínstall’ to download the required tools for developing mobile apps.
Click on ‘Next’ to download and install the required tools for developing Apache Cordova apps using Visual Studio.
Click on Úpdate’ to start the download and install the Apache Cordova tools for Visual Studio.
Installing Cordova tools for Visual Studio requires around 10 GB of disk space, so make sure your disk has required disk space. Also downloading 10 GB of information over the internet is going to take considerable time.
Visual Studio creates a new Cordova app project.
Select the mobile os you want to emulate(highlighted in red) and then select the device model you want to emulate(highlighted in green).
This will launch the mobile emulator.
And the below message will be displayed in the emulator and since it is just a template app, the message displayed is fine.
Open index.html in the www folder and change the message displayed in the index.html to any other message and relaunch the app in emulator.
The modified message should be displayed in the emulator.
We have set up the development environment for creating windows mobile apps and learnt how to run the windows phone emulators.
|
OPCFW_CODE
|
FAQ: Version Management with SAP Analytics Cloud (Part I – Basics)
In SAP Analytics Cloud (SAC) users can choose to maintain versions of their data if desired. To get the most out of the versioning capability, one should be familiar with the concept of public and private versions in SAC. Hence here, in a 2-part series blog, is a list of FAQs to help clarify some of the uncertainties.
- How do I create public and private versions?
- How do I start editing a public version? And what is meant with the public edit mode?
- If I make a private copy from the public version, how can I see which data is locked?
- Do I need to save my data changes? I’m afraid that I will lose them when I log out.
- Is every change immediately visible for everyone in the public version?
- Wouldn’t everyone be overwriting each other’s public edit mode changes?
- Sometimes a cell appears input-enabled in a public version for me, but when I try to edit it, it changes into non-editable state. Why?
- Do I see my colleagues changes immediately in the public version?
- How do I know which data has been changed in my version?
- How do I know the size of my versions? Or any other details?
- Can everyone publish everything into a public version?
Q: How do I create public and private versions?
A: A public version comes as a default upon creation of a SAC model. Upon import from a data source user can also specify the nature of the version to be public.
Additional versions, regardless public or private, cannot be created in the model but via the version management panel instead.
To create a private version, simply copy an existing version. It is possible to constraint the scope of the copy by limiting the data to a desired filtered context, the defined planning area, or only visible data in the table. You could also copy the entire version or no data at all, but usually a filtered private version is a good starting point.
Creating a new public version follows the same procedure, meaning you basically create first a private copy of an existing public version. The only difference is that at the end you click on “publish as” to create a new public version with the data.
Q: How do I start editing a public version? And what is meant with the public edit mode?
A: There is no special preparation needed for simple data entry in a public version – you just go ahead and click on the desired cell to do your entry. You will automatically be led into the public edit mode. (Alternatively, you can also trigger the edit mode in the version management panel for the desired version). When done, there are several ways to exit the edit mode and publish your results, depending on how your IT has configured your interface for you – here the instructions from the IT should serve as guidance (see also question below, “Do I need to save my changes?”).
Technically, the public edit mode is a snapshot of the data and lock state from the public version to enable planning. This ensures that multiple users working simultaneously on the same public version do not face the problem of cross interference in their data entry or the subsequent calculations/data copy. This is also the reason why triggering a data action will also activate the edit mode – for undisturbed operation within the individual snapshot before publishing.
If the version name is present in your table headers, you can see a “*” next to the name of your version, signifying that you are working within your own snapshot, i.e. in the edit mode.
Q: If I make a private copy from the public version, how can I see which data is locked?
The lock state information will also appear in your private version.
Q: Do I need to save my data changes? I’m afraid that I will lose them when I log out.
If you are working with the public version, your changes are stored in an edit mode until you choose to publish them. Publishing will merge your changes with the public version and remove the edit mode you have been working with. This is similar to working with private versions, except that you can share private versions with colleagues, but not your edit mode changes.
Note that only valid changes (according to data access control, data locks and validation rules) will be published. Invalid changes will be discarded along with the private version/public edit mode. More details on how this works together with data action can be found in part II of the FAQ series.
Q: Is every change immediately visible for everyone in the public version?
As mentioned above, your changes are stored in an edit mode until you choose to publish them.
It is an isolated snapshot of the public version created for you to enable your planning.
Q: Wouldn’t everyone be overwriting each other’s public edit mode changes?
Your public edit mode changes are your own, visible and editable only by you.
Q: Sometimes a cell appears input-enabled in a public version for me, but when I try to edit it, it changes into non-editable state. Why?
A: One of the reasons could be that a recommended planning area for the model has been setup, which only kicks in when you start planning, with the aim to optimize the size of your public edit mode snapshot. The boundary of the planning area will determine what is plannable, what not, and display the corresponding data as such. Hence, sometimes you might notice a difference in the editability of certain data before and after you have started your edit.
Q: Do I see my colleagues changes immediately in the public version?
Even after your colleagues have published their results, the latest data changes in the public version are only visible if you (a) refresh your browser, or (b) trigger a data refresh from the toolbar.
Some planning interactions, such as publishing data or submit a calendar task will also automate a refresh and display the latest numbers. Your IT could also configure certain data actions or use an analytic application to trigger implicit refresh behind the buttons.
As a rule of thumb – simply trigger the data refresh in toolbar to view the latest public numbers, if uncertain about the actuality. Doing so will not overwrite the unpublished changes you have done for the same version, as your changes are stored in the edit mode.
Q: How do I know which data has been changed in my version?
A: Data changes are visible in three areas:
- Cell Highlight:
- Single data entry mode: After each data entry, the affected cells are highlighted, but only changes between the latest entry and the one before.
- Fluid data entry mode: During quick succession of data entries, affected cells are not immediately visible. Only when an entry pause is detected, all entries are batch processed and highlighted.
- Mass data entry mode: Changes are only highlighted when “process data” is triggered
- Version History Panel:
- Data entries within a private version are visible within a version history and could be rolled back to a desired point. All entries between the creation and publishing of the version are listed. Once published, this list will be discarded along with the private version. This holds true also for the public version edit mode.
- Data Audit/changes:
- Once enabled under the model preferences, all data changes within a model will be logged in the data audit, with the corresponding delta value.
Q: How do I know the size of my versions? Or any other details?
A: Go to the version management panel and select “Details” from the dropdown menu of your selected version.
The version detail panel will open and display overview information such as version size, access rights and creation date.
Q: Can everyone publish everything into a public version?
When you trigger a version publish, several security checks will kick in and only data that has passed all criteria will be merged into the public version. These checks include data locks and validation rules, among others.
Note that even though while planning you could choose to ignore data locks, this is only to enable simulation and private planning. Once publish is triggered, invalid data made to locked slices will be discarded. More details on publishing will be given in part II of this blog, which will also answer questions relevant for administrators and power user scenarios, including how versions work regarding security checks, data action processing, and size monitoring.
|
OPCFW_CODE
|
In traditional SDLC process, manual code review is done after the code is constructed and finding & fixing the defects require more time and resources which is costly and overburdening.
With IDE plugins, the code review is automatically done as the developer writes code by detecting various kinds of coding defects (e.g. security vulnerabilities, coding errors, wrong coding practices etc.) during development phase. Some IDE plugins help detect the defects and provide informative fixes during the construction of programs itself. With this, manual code review effort is minimized & developers can jump to the defects immediately to see the explanation on how to fix it. The IDE plugins also allow to write customized rules and/or guidelines as per the company’s frameworks and policies.
Plugins That Detect Security Vulnerabilities
Application Security plugin for Integrated Development Environment (ASIDE) from OWASP is an open source Eclipse Plugin designed to help developers write more secure code by detecting and identifying potentially vulnerable code and providing correct fixes during the construction of programs in IDEs.
It consists of two branches, the ASIDE branch that is responsible for detecting software vulnerabilities for Java & PHP and helping developer write secure code, and the ESIDE branch that is focusing on help educating students acquire secure programming knowledge and practices (for Java).
Cigital SecureAssist is a commercial lightweight static analysis tool that identifies security related vulnerabilities and provides informative guidance to enable the developers to immediately fix the problem. SecureAssist supports Java, .NET and PHP and is integrated directly into development environments, such as Eclipse and Visual Studio. It comes with an enterprise server portal that helps manage the users or groups and helps track & manage the usage statistics and reports.
FindSecBugs is an open source static code analysis plugin that detects and identifies potentially vulnerable code and provides informative fixes during the construction of Java programs. FindSecBugs can be used within IDEs like Eclipse, Netbeans & IntelliJ IDEA.
Plugins that Detect Generic Code Defects
FindBugs is an open source tool for static analysis of Java programs. It is a defect detection tool for Java that uses static analysis to look for more than 200 bug patterns, such as null pointer dereferences, infinite recursive loops, bad uses of the Java libraries and deadlocks. FindBugs can identify hundreds of serious defects in large applications (typically about 1 defect per 1000-2000 lines of non-commenting source statements). FindBugs can be used from the command line or within ANT, Eclipse, Maven, Netbeans and emacs.
PMD is an IDE plugin that scans Java source code and looks for potential problems like:
- Generic bugs - empty try/catch/finally/switch statements
- Dead code - unused local variables, parameters and private methods
- Suboptimal code - wasteful String/StringBuffer usage
- Overcomplicated expressions - unnecessary if statements, for loops that could be while loops
- Duplicate code - copied/pasted code means copied/pasted bugs
|
OPCFW_CODE
|
Unusually for a game, The Sentinel doesn’t have a plot to speak of. A number of magazines included a story in their review, but nowhere in the official packaging is a back story mentioned.
When the game loads, the player is asked to input a landscape number from 0000 to 9999. After this, they are prompted for an 8 digit secret entry code, for every landscape except 0000, which didn't require an entry code to play.
Before the game starts, the player is shown an aerial view of the landscape. Each landscape looks like a greatly extended chess board, but with platforms of varying heights. The aerial view also shows the relative positions of the Sentinel (who stands on a tower at the highest point on the landscape) and its sentries. Thankfully, the first few levels didn't have any sentries, thus giving the player a slightly easier introduction to the nuances of playing the game. The Sentinel (and sentries in later landscapes) remain inactive until the player expends or absorbs energy.
The main concept of the game is all about energy. Trees and Boulders are dotted around each landscape. Boulders are worth two units of energy, whilst trees are only worth one. The robot controlled by the player are worth three. The player can rotate on the square they occupy and absorb the energy of any boulders or trees where they can see the square those objects are standing on. Therefore, to absorb energy from something else, the player has to be on the same level or higher than the object. To move, the player has to create a robot on another square (costing energy to do so) and then jump into the new robot shell. They can then turn around and absorb the old robot shell and continue as before, assuming something else doesn't start absorbing it first!
Once activated, the Sentinel and sentries also slowly rotate on the spot, scanning the landscape for squares which contain objects of more than one unit of energy. If they can clearly see such a square, the Sentinel or sentry absorbs the item's energy, 1 unit at a time. Therefore a robot becomes a boulder, and a boulder then becomes a tree. Although the player can absorb trees, the Sentinel and its cohorts are a little more environmentally friendly and leave the trees alone!
If the player's robot falls under the gaze of the Sentinel or the sentries, an alarm triggers and the robot's energy levels start to reduce as the energy is being sapped 1 unit at a time. The only way out is to move, either by quickly creating a new robot shell to jump into or by performing an emergency hyperspace to a random square somewhere else on the landscape. The drawback with that strategy is that the player has little chance of absorbing their old robot shell and they might still fall into the deadly gaze of the Sentinel! Hyperspace also cost three units of energy (the number required to create a new robot shell) and if the player doesn't have enough energy to complete the jump, then hyperspacing can destroy the robot and end the game!
The total amount of energy on each level remains constant, so if the player is losing energy then for each unit they lose, a new tree (worth one unit of energy) is created and randomly placed on the landscape.
If the player survives and successfully absorbs the Sentinel, then an 8 digit code is presented. This code is based on the number of the landscape completed and the amount of energy left after the player hyperspaces to the platform where the Sentinel was standing before it was absorbed by them. The landscape the player moves to next is based on how well they played the current landscape (how much energy that have at the end). This meants they didn't have to play all 10,000 landscapes to reach the end!
On the subject of the end, the only complaint some expert players had was that The Sentinel didn't actually end! After completing the final level, the game returns the player back to level 000000 but with the access code equivalent to the energy they have amassed.
|
OPCFW_CODE
|
How to use Sentemul 64 bit to emulate SuperPro dongles
Sentemul is a software that can emulate SuperPro dongles, which are hardware devices that protect software from unauthorized copying or use. Sentemul can create a virtual dongle that mimics the original one, allowing you to run the protected software without the physical device. However, Sentemul 64 bit is not compatible with some dump files created by other tools, such as Edgespro11 or HaspHL2007. In this article, we will show you how to use Sentemul 64 bit to emulate SuperPro dongles on Windows 7 64 bit, Windows Vista 64 bit, or Windows XP 64 bit.
A SuperPro dongle or a dump file (.dng or .dmp) of it.
A 32 bit operating system (OS) where you can run Sentemul2007 or HaspHL2007.
A 64 bit OS where you want to use Sentemul 64 bit.
The following programs: Sentemul2007, Sentemul 64 bit, MultiKeyEmu x64 v. 0.18.0.3, dmp2mkey v 2.3, PVA v 3.3, Driver Signature Enforcement Overrider, and the latest Sentinel Drivers.
If you have a SuperPro dongle, plug it into your computer and use PVA v 3.3 to dump its data into a .dmp file. If you already have a .dng file, skip this step.
In your 32 bit OS, run Sentemul2007 or HaspHL2007 and install the driver. Then load your .dng or .dmp file and start the service. This will emulate your dongle in the 32 bit OS.
Still in your 32 bit OS, run dmp2mkey v 2.3 and use it to convert your .dng or .dmp file into a .reg file. This file contains the registry entries that will be used by Sentemul 64 bit in your 64 bit OS.
Move to your 64 bit OS and install the latest Sentinel Drivers. These are required for Sentemul 64 bit to work.
Create a new folder in your 64 bit OS, for example C:\\MultiKey. Copy the MultiKeyEmu x64 v. 0.18.0.3 files into this folder.
Install the .reg file that you created in step 3 by double-clicking on it and selecting âto regâ or right-clicking on it and choosing "Merge". This will add the registry entries for Sentemul 64 bit.
Run Driver Signature Enforcement Overrider and use it to disable driver signing enforcement and sign the multikey.sys driver file that is located in C:\\Windows\\System32\\Drivers\\multikey.sys. This is necessary because Windows does not allow unsigned drivers to be installed.
Run install.cmd from the MultiKey folder and follow the instructions to install Sentemul 64 bit.
Restart your computer and enjoy your emulated dongle!
If you encounter any problems with Sentemul 64 bit, here are some possible solutions:
Make sure you have the correct version of Sentemul for your OS (32 bit or 64 bit).
Make sure you have the latest version of Sentinel Drivers installed.
Make sure you have disabled driver signing enforcement and signed the multikey.sys driver file with Driver Signature Enforcement Overrider.
Make sure you have installed the .reg file correctly and that it matches your dongle type and serial number.
Make sure you have copied all the files from MultiKeyEmu x64 v. 0.18.0.3 into the MultiKey folder and run install.cmd as administrator.
If you have any other dongle emulators installed, uninstall them before using Sentemul 64 bit. 061ffe29dd
|
OPCFW_CODE
|
Since many of you already know our Alfresco employees and their backgrounds, I thought we would stray from the Featured Member interview, and instead share what they do on a day to day basis. Today, we're featuring Dave Draper, Senior Engineer:
I usually start my day by going through my e-mails - usually to see what issues have been raised on the Aikau in GitHub or in JIRA. I will usually then review the previous day's IRC logs for any interesting conversations and browse all the new questions that have appeared on the Alfresco Community Platform and Stack Overflow. I do my best to answer any questions to the best of my ability and will often reach out to other members of the Engineering team if I see a question that they are better placed to answer.
I also check any new conversations that occurred in the many Skype groups that I’m a member of as well as checking for anything relevant to me on Twitter. Quite often there will be one or two interesting blog posts that have been shared so I will read through them as I like to try and keep my knowledge of what’s going on in the industry (and in web development in particular) current.
I always try to finish each day “cleanly” so that I don’t spend my evenings thinking about unfinished problems. This means that I’ll usually be making a start on a fresh feature or bug fix. I’m simultaneously active in two sprints - one for Aikau and one for Share - although there is usually considerable overlap.
I try to follow a Test Driven Development approach as best I can so will always try and write my tests before I write the code. Once tests are passing I will create pull request and which I will get reviewed and merged.
If Aikau is ready to be released (I try to ensure that there is at least one release a week) then I’ll run a full regression test, update the release notes and use our Bamboo servers to perform the release and updates to the JSDocs and Sandpit applications.
When I’m not fixing bugs or working on features I’ll most likely be writing up something in a blog post and increasingly I’ve taken to recording video posts (because I feel like I can convey more information in less time that way).
In the very rare moments when I have something approaching free time I will try and spend some time working on specific areas of Aikau that need to be pushed forward - the forms runtime service for example or looking at ways in which to increase performance.
Fortunately I have very few meetings each week, but there is always a daily scrum call for the Share project and occasionally I will have a 1-to-1 meeting with my manager or some other such thing to attend.
When I’m working from home (which I do 3 days a week) my day is broken up with school runs and dog walks. I find that the time I spend walking my dogs can actually be very productive to problem solving - stepping away from the keyboard into the fresh air and getting my body moving helps me to think clearly through any tricky bugs that I happen to be dealing with.
When I’m in the office my day is broken up with my 2 hour commute on the train. I’m usually able to work effectively on the train despite an occasionally patchy 4G service and find that getting stuck into an interesting problem makes the journey fly by. My days in the office are also valuable as this is when I can most effectively network with my colleagues - although I would say that on the whole I’m probably more productive working from home as there are generally less distractions.
|
OPCFW_CODE
|
import UIKit
class FirstOnboardingViewController: UIViewController {
@IBOutlet weak var breadContainerView: UIView!
@IBOutlet weak var cheeseContainerView: UIView!
@IBOutlet weak var fishContainerView: UIView!
@IBOutlet weak var strawberryContainerView: UIView!
override func viewDidLoad() {
super.viewDidLoad()
createCircleBorders()
}
private func createCircleBorders() {
let viewToColorMap: [UIView: UIColor] = [
breadContainerView: UIColor.wheat,
cheeseContainerView: UIColor.pastelRed,
fishContainerView: UIColor.purpleGrey,
strawberryContainerView: UIColor.sickGreen
]
viewToColorMap.forEach { (view, color) in
view.createCircleBorder(with: color)
}
}
}
|
STACK_EDU
|
Top 10 best PHP Frameworks for Developers
An overview of the Top 10 best PHP frameworks for web development to find out how to make your workflow faster and easier.
Table of Contents
PHP is a popular server-side scripting language designed primarily for the web development. Originally created by Rasmus Lerdorf in 1994, it’s reference implementation is now produced by PHP group. PHP is designed to be used and embed easily in HTML. Because of it’s easy to use and great functionality, it empowers Content Management Systems like WordPress and social media websites like Facebook.
This post is to help you choose the best PHP framework. A framework provides a large library of code for the common application task. Frameworks also force you to write better and cleaner code thus allowing you to make the code more readable, scalable and maintainable.
Using PHP frameworks will speed up your development workflow. It will also help you to make your code clean and well structured.
In this post, we will find out Top 10 PHP frameworks.
Laravel is a free and open-source PHP web framework for the development of web applications following the MVC or model-view-controller architectural pattern.
Laravel is regarded as the most popular PHP framework. You can take a look at google trends for top 5 PHP frameworks.
This shows that laravel is the most popular framework. Also, Laravel has already taken the top spot in the list of backend frameworks on GitHub based on the total number of stars.
Laravel also has a huge ecosystem and laracasts also offers many screencast tutorials for laravel developers. Laravel features make rapid application development possible. Laravel also includes a templating engine called Blade. Laravel includes a lot of features out of which most popular are routing, authentication, sessions, queueing, caching and event broadcasting.
Symfony framework is a set of reusable PHP components and libraries that you can use to complete your development tasks. Symfony includes 50 stand-alone components out of which most popular are routing, validator, form, filesystem, cache, and console. You can install any of these components independently with composer which is a PHP dependency manager.
Components of Symfony 2 are used in the development of big projects such as Drupal, phpBB, and finally, Laravel which is the most popular framework. According to Symfony website, Laravel uses 11 components of Symfony 2.
CodeIgniter is a rapid development framework for building dynamic PHP websites. It is lightweight built for developers who need a simple toolkit for creating web applications. CodeIgniter was released back in 2006 and it was one of the first major PHP frameworks. CodeIgniter does not require a lot of learning as it is simple.
CodeIgniter can run easily on older PHP versions. Right now the latest version recommends PHP versions 5.6 or higher. CodeIgniter is loosely based on MVC architectural pattern. Controller classes are necessary whereas models and views are optional. CodeIgniter size is less than other frameworks making it faster, lighter and lean framework.
CakePHP is a web framework written in PHP that follows model-view-controller architecture. It was released back in 2005 (12 years ago). At the time of writing, the latest version of CakePHP is 3.5. The latest version includes Scoped Middleware, Console Runner, dotenv support, console integration testing and much more.
CakePHP is used by big companies and institutions such as BMW, MIT, and HYUNDAI to empower their websites. It offers security against three most common attacks: SQL injection, XSS (cross-site scripting), and CSRF (cross-site request forgery) out of the box. It also includes a dedicated security component.
Phalcon is a PHP web framework based on the model-view-controller (MVC) architectural pattern. It was released in 2012. Phalcon is the fastest PHP framework ever built because it is built on C and C++ to get the cutting edge speed. It is delivered as a C extension so you get maximum execution speed do not have to learn C to use it. Phalcon has low memory and CPU consumption compared to other frameworks that are built on PHP. It increases execution speed and decreases resource usage.
Phalcon has optimized low-level architecture which provides the lowest overhead for MVC-based applications. Phalcon includes many features such as cache, config, queue, logging, validators, routing, event manager and much more. It is very well documented and has a big community around it as well. If you want faster execution speed, Phalcon is your best choice.
Yii is a component based MVC PHP web application framework. It was first released in 2006. It boosts application execution speed because it uses lazy loading technique extensively which means it does not require a file until the class is used. It is faster than many other frameworks. It follows MVC pattern, promotes DRY design and supports rapid web development.
Yii comes up with form validation and ajax support out of the box. It also offers built-in authentication. It has a built-in code generation tool called Gii which speeds up your development. It offers great security, a lot of extensions, plugins, and widgets, Internationalization, error handling, logging, testing, active record implementation and many more.
Zend Framework is a collection of professional PHP packages. Zend framework uses Composer as a dependency manager to install its packages. It uses MVC architecture. Zend is a huge framework with a lot of options and steep learning curve. For this reason, it is not recommended for small projects. Zend has many partners such as Google and Microsoft that have contributed components or features to the frameworks.
Zend Framework v3 is optimized for PHP 7 that makes it runs up to 4x faster than v2. Zend Framework has many great features such as authentication, barcodes, cryptography tools, database abstraction layer, generate Atom and RSS feeds, validate and display forms, and much more. Zend empowers many powerful enterprise applications.
FulePHP is a simple, flexible framework which is community driven. It is compatible with PHP 7 but you can run it on PHP 5.3+. FuelPHP supports Hierarchical-Model-View-Controller (HMVC) pattern. It also includes ViewModel as a powerful layer between the controller and the view.
FuelPHP supports modularity and extendability. It also offers many security features out of the box such as XSS, CSRF, SQL injection, URI, and Input filtering. Other features include code generation, interactive debugging, cron tasks, ORM (Object Relational Mapping) and using any template parser for your views.
Slim is a PHP micro framework that helps you create simple but powerful web applications. Slim is a microframework in design which is great for smaller applications. They are different from the full-stack frameworks that contain a lot of functionality such as authentication, authorization, roles and much more.
Slim is used by many PHP developers for developing restful APIs and many other smaller services. Slim provides many great features such as a fast and powerful router, dependency injection, error handling, caching, encryption and many other features as well. It has a great user guide that you can use to learn this framework fast.
PHPixie is a component-based PHP framework. It is a high-performance framework. Its components are 100% unit tested. It is easy to learn and their official website claims that you can learn it in 30 minutes. It follows HMVC design pattern.
PHPixie is built upon independent components that can be used separately from the framework. Their features include built-in authentication, query builder for SQL databases and MongoDB, Dependency Injection, ORM, database schema, image manipulation and some other great features. It is a relatively new framework with a small community but it is increasingly becoming popular.
|
OPCFW_CODE
|
Misleading indentation errors on gcc 11.x
Had to suppress that warning to be able to compile.
This does the trick:
if (NOT (GCC_VERSION VERSION_LESS 11.0))
# Disable warning on misleading indentation it become more aggressive in 11.0
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wno-misleading-indentation")
endif()
Where do you get indentation errors, it it is localized, perhaps we can fix at source level, otherwise I'm happy to add you suggestion. I presume this is for the main CMake file.
OK, my patience with GCC warnings is limited so I'll disable.
Should be fixed on master, please check.
Wonder why this isn't part of pedantic that should already have been disabled on recent GCC.
It is, thanks. I will close this issue.
I see, pedantic is still on for gcc, next time I'm going to remove pedantic for GCC.
as the problem is still present, what about including #pragma GCC diagnostic ignored "-Wmisleading-indentation" on the generated files?
@amery There is distinction between compiling the flatcc tool chain and the runtime. Notably the runtime (library and generated) can be compiled by whatever tool chain the user prefers. I guess this topic only covers the tool chain.
There is a hook for dealing with runtime warnings in the portable library. This is mostly for generic issues that prevent smooth cross platform behaviour and are therefore always enabled. Others should be dealt with in the users tool chain.
The current situation is a bit special because the warning is actually useful in the general case, but specifically for flatcc generated code, it is not helpful. Therefore we cannot or should not inject the warning at the portable layer because the portable library is not flatcc specific.
I have not fully investigated, but I think can conditionally disable the the warning in pdiagnostic.h which is included in all generated files. Then set a flag in flatcc specific include files to enable the warning in this context.
Would you be willing to look into this?
Here the warning can be added in either pwarnings or probably better pdiagnostic.h, guarded by a PDIAGNOSTIC flag:
https://github.com/dvidelabs/flatcc/blob/master/include/flatcc/portable/pwarnings.h
https://github.com/dvidelabs/flatcc/blob/master/include/flatcc/portable/pdiagnostic.h
https://github.com/dvidelabs/flatcc/blob/master/include/flatcc/portable/pdiagnostic_push.h (and pop)
I think the PDIAGNOSTIC enabled warning could then be enabled with a define in
https://github.com/dvidelabs/flatcc/blob/master/include/flatcc/flatcc_flatbuffers.h
or
https://github.com/dvidelabs/flatcc/blob/master/include/flatcc/flatcc_rtconfig.h
This issue has been closed, was this fixed in a later version of flatcc? Either by simply adding a newline in the auto generated code, or disabling the warning with a push/pragma/push?
@ndjhartman Please explain your problem more clearly. I assume you that flatbuffers union break for you with -Werror.
The issue with indentation is not limited to any feature but to all generated code, as a rule.
It is not a fixable problem. The flatcc code generator produces large amounts of dense code that is intended for macro expansion not for human consumption. Other flatcc code should be properly indented, and generally is.
You mention pragma push/pop:
I already cover this in my above comment, but there is possibly room for improvement if someone wants to step up.
Historically there are three systems to deal with this:
the CMake build which is not necessarily what users use, but it tells which warnings are considered necessary. It also deals with warnigns that cannot be handled in source code, notably some MSVC warnings.
pwarnings.h : here are some warnings that universally break standard C, this is mosly related to old MSVC compilers, afair.
pdiagnostic_push/pop.h headres. These are included in all generated code. Here the most important compiler specific differences are handled, but handling all compiler variants get complicated fast, just look at CMakeLists.txt
That said, flatcc has always aimed for -Werror with as high a warning level as pratically possible. Because gcc has become overly aggressive, the level of max warning support for that compiler has been lowered, as it kept breaking things unnecessarily just randomly taking up time to fix nothing, and worse, breaking portability of code. You can again see this in CMakeLists.txt and comments.
So, is it possible to move some of these warning disablers from CMakeLists.txt into pdiagnostics? Yes likely, but we cannot deal with all possible combinations there. If you are willing to put in the effort, and test widely, there is opening here.
See my previous comment where I link these files.
Can we add it to pwarnings.h? No. We want identation warnings in general, and the portable sub library is not flatcc specific, it only disables hostile warnings while pdiagnostics handle cases where flatcc might not adhere to otherwise reasonable warnings for one reason or another.
Can we just use CMakeLists.txt: not really, because users have different builds.
So there isn't a general solution, as there never is with complex software infrastructure.
But the solution is not to modify the flatcc code generator as per earlier comments.
@ndjhartman
I should have added the commit earlier on:
https://github.com/dvidelabs/flatcc/commit/f8c4140dd9dde61c86db751f6002def78754fced
https://github.com/dvidelabs/flatcc/blob/f8c4140dd9dde61c86db751f6002def78754fced/CMakeLists.txt#L228-L231
Thanks @mikkelfj. I do see this check present in the CMakeLists.txt. I haven't had time to debug this specifically, but I'm wondering if it's something to do with using compiling with ARM GCC.
|
GITHUB_ARCHIVE
|
Virtual PC 2007, Microsoft's newest desktop virtualization product, is the main competitor in the desktop virtualization space with long-time market leader VMware Workstation. Virtual PC 2007 is a free download, available at http://www.microsoft.com/downloads/details.aspx?FamilyId=04D26402-3199-48A3AFA2-2DC0B40A73B6. Here are the biggest highlights of Virtual PC 2007—along with its biggest failings.
10. Support for migrating from Virtual PC 2004 SP1—Virtual PC 2007 supports in-place upgrades from Virtual PC 2004 SP1. The virtual machine (VM) file format remains identical, but you'll need to reinstall Virtual Machine Additions after you upgrade. If you're running a version of Virtual PC other than Virtual PC 2004 SP1, you'll need to uninstall the earlier version and then do a new installation of Virtual PC 2007.
9. No Linux guest support—One of the surprising limitations of Virtual PC 2007 is that it still doesn't officially support running Linux as a guest OS. Although Linux actually does work on Virtual PC VMs—and Virtual PC 2007's new hardware-assisted virtualization should provide even better performance than you got in earlier releases—official support for Linux remains MIA.
8. Support for Windows Vista as a host—It's probably no surprise that Virtual PC 2007 supports Vista as a host OS. Virtual PC 2007 runs on both 64-bit and 32-bit Windows Vista Business, Vista Enterprise, and Vista Ultimate editions, as well as on many earlier versions of Windows.
7. Support for Vista as a guest—The 32-bit versions of Vista can also run as guest OSs. Virtual PC 2007 supports the following guest OSs: Vista Business, Enterprise, and Ultimate editions; XP Pro and Tablet PC editions; Windows 2000 Professional; Windows 98 SE; and for some reason, IBM OS/2 Warp 4.
6. Support for 64-bit host OSs—I mentioned earlier that Virtual PC 2007 supports the x64 editions of Vista as a host OS. But you might not realize that, consequently, Virtual PC 2007 provides native 64-bit support for x64 architecture, which significantly increases the number of active VMs you can have by raising the 4GB memory limitation of 32-bit hosts to 16TB. Virtual PC 2007 also uses a different setup program for installation on 32-bit and 64-bit versions of Windows.
5. Unchanged management console—Continuing the ignoble tradition set by Virtual PC 2004, Virtual PC 2007 has one of lamest management consoles of all time. Although the Virtual PC 2007 management interface is Windows Aero–enabled, it still can't hold a candle to the interface provided by VMware Workstation.
4. No support for USB devices—Virtual PC 2007 lacks another feature that's long been included in VMware Workstation products: support for USB devices. Virtual PC 2007 VMs can use USB mice and keyboards but can't use other popular USB devices such as flash drives and other USB external storage devices.
3. Network-based installation of guest OSs— Another new feature is support for performing a Preboot Execution Environment (PXE) network boot. Virtual PC 2007's PXE boot support enables Virtual PC 2007 VMs to be booted up from the network without needing to be pointed to a CD-ROM, DVD, or ISO image that's stored locally.
2. Support for running VMs on multiple monitors—This cool feature lets you display different VMs on separate monitors. For example, using the multimonitor support, you can have Virtual PC 2007 run a VM in full-screen mode on one monitor while displaying the host OS on another monitor.
1. Support for hardware-assisted virtualization—One of the most important improvements in Virtual PC 2007 is its support for both Intel VT and AMD Virtualization hardware-assisted virtualization. Of course, the system that's running Virtual PC 2007 must have a processor that possesses the new virtualization extensions. Virtual PC 2007's hardware-assisted virtualization support is enabled by default. You can disable it for specific VMs by clicking Settings, Hardware Virtualization, then clearing the Enable hardware-assisted virtualization check box.
|
OPCFW_CODE
|
"Generate Ferefences For FSI" command generate invalid reference if project is targeting .net core/standard
In VSCode I clicked "Generate references for FSI" and got long references.fsx file.
In the first line of this file which looks like below I have error:
#r @"C:\Program Files\dotnet\sdk\NuGetFallbackFolder\microsoft.netcore.app\2.0.0\ref\netcoreapp2.0\Microsoft.CSharp.dll"
error FS0193: The module/namespace 'System.IO' from compilation unit 'System.Runtime' did not contain the namespace, module or type 'FileStream'
What can I do to fix it?
I have console application and want to have few satellite tools scripts. I could to ref Console application "*.fs" files directly but found this magic references generator. Does anyone use it without problems?
Steps to reproduce:
You may download my project from https://1drv.ms/u/s!AgszF6pgNgsShrYPwm0GKCXmdoStDQ
In VSCode execute "Paket: Install" to restore packages.
"Generate references for FSI" and you will see error in first line:"The module/namespace 'System.IO' from compilation unit 'System.Runtime' did not contain the
namespace, module or type 'FileStream'"
Thanks @oleksandr-bilyk .
So .net core projects cannot be used to send references to FSI, because FSI support only .NET Framework.
The only way to fix it atm is to disable (make it hidden) the Generate references for FSI for project targeting .net core or .net standard.
Disabling may make VSCode slower. IMHO just show error "Cannot generate FSI references for .NET Core project" will be enough.
Some coding notes:
commands are defined here https://github.com/ionide/ionide-vscode-fsharp/blob/2139f2df555a994e1361a0b4cbe9e764c3e7105f/release/package.json#L91-L98
so the interested command are
fsi.SendProjectReferences , implemented https://github.com/ionide/ionide-vscode-fsharp/blob/80902953efd62e8db448d14871f8657aa8a5be75/src/Components/Fsi.fs#L138-L141
fsi.GenerateProjectReferences implemented https://github.com/ionide/ionide-vscode-fsharp/blob/80902953efd62e8db448d14871f8657aa8a5be75/src/Components/Fsi.fs#L155
after the Project.tryFindLoadedProjectByFile, you have the Project. that record contains the info if is a .net core project or not.
properties are https://github.com/ionide/ionide-vscode-fsharp/blob/80902953efd62e8db448d14871f8657aa8a5be75/src/Core/DTO.fs#L166-L167
The potential fix should apply to both FSI: Generate script file with references from project and FSI: Send references from project commands.
|
GITHUB_ARCHIVE
|
Microsoft Corporation is trying to change the way people use emails with Flow, its upcoming app. While Microsoft Outlook is already an awesome tool for general communication and emailing, the Redmond, Washington-based Microsoft will have Flow run like a messaging application, sans the signatures and subject. It looks like Outlook in its simplified version.
Described as a micro-email chat application, Flow combines chat and emailing by letting anyone do quick chat with those who have an email address. Flow is still in its early phase, said Neowin, and offers just the fundamental chat functions. The app runs on Exchange to power chatting, while messages are stored in the email Inbox so they can be searched later.
Microsoft Corporation’s goal is to lessen the friction and the time spent with emails, while keeping the service convenience used by consumers. Flow is now under internal testing, and if it will prove useful to consumers, the app will have added features.
According to BGR, email is one that needs a good “killing,” and while lots of “email killers” have been introduced in the past, they only changed email partially, and most were only all hype. Now, tech world will have a new “email killer” by Microsoft Corporation called Flow, with screenshots already leaking on the Web.
Flow will be under Outlook, the most recent entry in the category of email apps which are designed to integrate email into an interface that is a mobile messaging app lookalike. Flow has an intriguing concept that can be flawed, said BGR. While a chat interface is conducive to brief interactions, all users in the thread should be in the same application type, and some may take brief replies as rude.
With Flow, users will communicate using e-mail addresses. Flow will be initially available for iPhone with Outlook as its backend power. The app will likely make its way to other platforms as well.
Microsoft Corporation said Flow is a great way for speedy email conversations on the phone with anyone, and it is email. With Flow, one can use people’s email address, and Outlook keeps the entire conversation, so that the user can use Flow and Outlook interchangeably to join the conversations. With the absence of subject lines, as well as, signatures and salutations, Flow makes conversations natural, fast and fluid as it is created for light-weight real-time conversations.
Flow emphasizes focusing on what is important. Conversations start in Flow and replies show up in the app as well, not in the Inbox. In this way, people can focus on the most important topics in person-to-person conversations, sans the noise.
Flow marks Microsoft Corporation’s return to the messaging market, after it shut down the famous MSN Messenger last year for Skype. Released in 1999, MSN Messenger was renamed Windows Live Messenger, but when the software titan bought Skype, it dropped Messenger uses. While Microsoft Corporation is trying to change the way people use emails with Flow, the upcoming app can also be an attempt to take on today’s popular messaging apps Facebook Messenger and WhatsApp.
Since Outlook is popular among businesses, Flow will possibly become the first chat-like email app to gain many users if the enterprise setting is to be based. Neowin published last week, a screenshot of the micro-email app. The app looks “streamlined and sleek,” just like the other apps Microsoft recently released.
So far, there is less information about Flow, functionality-wise, but the Windows platform maker really turned up the heat with regard to mobile apps. There is no specific information as to when it will be released, but the tech world is eager to check on the app and see if Microsoft Corporation will succeed in trying to change email with Flow.
By Judith Aparri
Microsoft News: Microsoft is trying to change the way we use email with their upcoming app
BGR: Leak: This is Flow, Microsoft’s unreleased ’email killer’ for the iPhone
Neowin: First look at Microsoft’s new micro-email app, Flow
The Independent: Microsoft Flow: new chat app being developed that hopes to fix email and messaging
Photo courtesy of Ian Lamont’s Flickr Page – Creative Commons License
|
OPCFW_CODE
|
The growing disparity between data set sizes and the amount of fast internal memory available in modern computer systems is an important challenge facing a variety of application domains. This problem is partly due to the incredible rate at which data is being collected, and partly due to the movement of many systems towards increasing processor counts without proportionate increases in fast internal memory. Without access to sufficiently large machines, many application users must balance a trade-off between utilizing the processing capabilities of their system and performing computations in memory. In this thesis we explore several approaches to solving this problem. We develop effective and efficient algorithms for compressing scientific simulation data computed on structured and unstructured grids. A paradigm for lossy compression of this data is proposed in which the data computed on the grid is modeled as a graph, which gets decomposed into sets of vertices which satisfy a user defined error constraint, epsilon. Each set of vertices is replaced by a constant value with reconstruction error bounded by epsilon. A comprehensive set of experiments is conducted by comparing these algorithms and other state-of-the-art scientific data compression methods. Over our benchmark suite, our methods obtained compression of 1% of the original size with average PSNR of 43.00 and 3% of the original size with average PSNR of 63.30. In addition, our schemes outperform other state-of-the-art lossy compression approaches and require on the average 25% of the space required by them for similar or better PSNR levels. We present algorithms and experimental analysis for five data structures for representing dynamic sparse graphs. The goal of the presented data structures is two fold. First, the data structures must be compact, as the size of the graphs being operated on continues to grow to less manageable sizes. Second, the cost of operating on the data structures must be within a small factor of the cost of operating on the static graph, else these data structures will not be useful. Of these five data structures, three are approaches, one is semi-compact, but suited for fast operation, and one is focused on compactness and is a dynamic extension of any existing technique known as the WebGraph Framework. Our results show that for well intervalized graphs, like web graphs, the semi-compact is superior to all other data structures in terms of memory and access time. Furthermore, we show that in terms of memory, the compact data structure outperforms all other data structures at the cost of a modest increase in update and access time. We present a virtual memory subsystem which we implemented as part of the BDMPI runtime. Our new virtual memory subsystem, which we call SBMA, bypasses the operating system virtual memory manager to take advantage of BDMPI's node-level cooperative multi-taking. Benchmarking using a synthetic application shows that for the use cases relevant to BDMPI, the overhead incurred by the BDMPI-SBMA system is amortized such that it performs as fast as explicit data movement by the application developer. Furthermore, we tested SBMA with three different classes of applications and our results show that with no modification to the original program, speedups from 2x--12x over a standard BDMPI implementation can be achieved for the included applications. We present a runtime system designed to be used alongside data parallel OpenMP programs for shared-memory problems requiring out-of-core execution. Our new runtime system, which we call OpenOOC, exploits the concurrency exposed by the OpenMP semantics to switch execution contexts during non-resident memory access to perform useful computation, instead of having the thread wait idle. Benchmarking using a synthetic application shows that modern operating systems support the necessary memory and execution context switching functionalities with high-enough performance that they can be used to effectively hide some of the overhead incurred when swapping data between memory and disk in out-of-core execution environments. Furthermore, we tested OpenOOC with practical computational application and our results show that with no structural modification to the original program, runtime can be reduced by an average of 21% compared with the out-of-core equivalent of the application.
|
OPCFW_CODE
|
function WikiFormatter() {
/*
* This is the entry point, it takes a chunk of text, splits it into lines, loops
* through the lines collecting consecutive lines that are part of a table, and returns
* a chunk of text with those tables it collected formatted.
*/
this.format = function(wikiText) {
this.wikificationPrevention = false;
var formatted = "";
var currentTable = [];
var lines = wikiText.split("\n");
var line = null;
for (var i = 0, j = lines.length; i < j; i++) {
line = lines[i];
if (this.isTableRow(line)) {
currentTable.push(line);
} else {
formatted += this.formatTable(currentTable);
currentTable = [];
formatted += line + "\n";
}
}
formatted += this.formatTable(currentTable);
return formatted.slice(0, formatted.length - 1);
};
/*
* This function receives an array of strings(rows), it splits each of those strings
* into an array of strings(columns), calls off to calculate what the widths
* of each of those columns should be and then returns a string with each column
* right/space padded based on the calculated widths.
*/
this.formatTable = function(table) {
var formatted = "";
var splitRowsResult = this.splitRows(table);
var rows = splitRowsResult.rows;
var suffixes = splitRowsResult.suffixes;
var widths = this.calculateColumnWidths(rows);
var row = null;
for (
var rowIndex = 0, numberOfRows = rows.length;
rowIndex < numberOfRows;
rowIndex++
) {
row = rows[rowIndex];
formatted += "|";
for (
var columnIndex = 0, numberOfColumns = row.length;
columnIndex < numberOfColumns;
columnIndex++
) {
var cellValue = row[columnIndex];
if (cellValue === "!(") {
formatted += cellValue + "|";
} else {
formatted +=
this.rightPad(cellValue, widths[rowIndex][columnIndex]) + "|";
}
}
formatted += suffixes[rowIndex] + "\n";
}
if (this.wikificationPrevention) {
formatted = "!|" + formatted.substr(2);
this.wikificationPrevention = false;
}
return formatted;
};
/*
* This is where the nastiness starts due to trying to emulate
* the html rendering of colspans.
* - make a row/column matrix that contains data lengths
* - find the max widths of those columns that don't have colspans
* - update the matrix to set each non colspan column to those max widths
* - find the max widths of the colspan columns
* - increase the non colspan columns if the colspan columns lengths are greater
* - adjust colspan columns to pad out to the max length of the row
*
* Feel free to refactor as necessary for clarity
*/
this.calculateColumnWidths = function(rows) {
var widths = this.getRealColumnWidths(rows);
var totalNumberOfColumns = this.getNumberOfColumns(rows);
var maxWidths = this.getMaxWidths(widths, totalNumberOfColumns);
this.setMaxWidthsOnNonColspanColumns(widths, maxWidths);
var colspanWidths = this.getColspanWidth(widths, totalNumberOfColumns);
this.adjustWidthsForColspans(widths, maxWidths, colspanWidths);
this.adjustColspansForWidths(widths, maxWidths);
return widths;
};
this.isTableRow = function(line) {
return line.match(/^!?\|/);
};
this.splitRows = function(rows) {
var splitRows = [];
var rowSuffixes = [];
this.each(
rows,
function(row) {
var columns = this.splitRow(row);
rowSuffixes.push(columns[columns.length - 1]);
splitRows.push(columns.slice(0, columns.length - 1));
},
this
);
return { rows: splitRows, suffixes: rowSuffixes };
};
this.splitRow = function(row) {
var replacement = "__TEMP_PIPE_CHARACTER__";
if (row.match(/!-/)) {
row = this.replacePipesInLiteralsWithPlaceholder(row, replacement);
}
var columns = this.trim(row).split("|");
if (!this.wikificationPrevention && columns[0] == "!") {
this.wikificationPrevention = true;
columns[1] = "!" + columns[1]; //leave a placeholder
}
columns = columns.slice(1, columns.length);
this.each(
columns,
function(column, i) {
columns[i] = this.trim(column).replace(/__TEMP_PIPE_CHARACTER__/g, "|");
},
this
);
return columns;
};
this.replacePipesInLiteralsWithPlaceholder = function(text, rep) {
var newText = "";
while (text.match(/!-/)) {
var textParts = this.splitLiteral(text);
newText =
newText + textParts.left + textParts.literal.replace(/\|/g, rep);
text = textParts.right;
}
return newText + text;
};
this.splitLiteral = function(text) {
var leftText = "";
var rightText = "";
var literalText = "";
var matchOpenLiteral = text.match(/(.*?)(!-.*)/);
leftText = matchOpenLiteral[1];
if (matchOpenLiteral[2].match(/-!/)) {
var matchCloseLiteral = matchOpenLiteral[2].match(/(.*?-!)(.*)/);
literalText = matchCloseLiteral[1];
rightText = matchCloseLiteral[2];
} else {
literalText = matchOpenLiteral[2];
rightText = "";
}
return { left: leftText, literal: literalText, right: rightText };
};
this.getRealColumnWidths = function(rows) {
var widths = [];
this.each(
rows,
function(row, rowIndex) {
widths.push([]);
this.each(
row,
function(column, columnIndex) {
widths[rowIndex][columnIndex] = column.length;
},
this
);
},
this
);
return widths;
};
this.getMaxWidths = function(widths, totalNumberOfColumns) {
var maxWidths = [];
var row = null;
this.each(
widths,
function(row, rowIndex) {
this.each(
row,
function(columnWidth, columnIndex) {
if (
columnIndex == row.length - 1 &&
row.length < totalNumberOfColumns
) {
return false;
}
if (columnIndex >= maxWidths.length) {
maxWidths.push(columnWidth);
} else if (columnWidth > maxWidths[columnIndex]) {
maxWidths[columnIndex] = columnWidth;
}
},
this
);
},
this
);
return maxWidths;
};
this.getNumberOfColumns = function(rows) {
var numberOfColumns = 0;
this.each(rows, function(row) {
if (row.length > numberOfColumns) {
numberOfColumns = row.length;
}
});
return numberOfColumns;
};
this.getColspanWidth = function(widths, totalNumberOfColumns) {
var colspanWidths = [];
var colspan = null;
var colspanWidth = null;
this.each(widths, function(row, rowIndex) {
if (row.length < totalNumberOfColumns) {
colspan = totalNumberOfColumns - row.length;
colspanWidth = row[row.length - 1];
if (colspan >= colspanWidths.length) {
colspanWidths[colspan] = colspanWidth;
} else if (
!colspanWidths[colspan] ||
colspanWidth > colspanWidths[colspan]
) {
colspanWidths[colspan] = colspanWidth;
}
}
});
return colspanWidths;
};
this.setMaxWidthsOnNonColspanColumns = function(widths, maxWidths) {
this.each(
widths,
function(row, rowIndex) {
this.each(
row,
function(columnWidth, columnIndex) {
if (
columnIndex == row.length - 1 &&
row.length < maxWidths.length
) {
return false;
}
row[columnIndex] = maxWidths[columnIndex];
},
this
);
},
this
);
};
this.getWidthOfLastNumberOfColumns = function(maxWidths, numberOfColumns) {
var width = 0;
for (var i = 1; i <= numberOfColumns; i++) {
width += maxWidths[maxWidths.length - i];
}
return width + numberOfColumns - 1; //add in length of separators
};
this.spreadOutExcessOverLastNumberOfColumns = function(
maxWidths,
excess,
numberOfColumns
) {
var columnToApplyExcessTo = maxWidths.length - numberOfColumns;
for (var i = 0; i < excess; i++) {
maxWidths[columnToApplyExcessTo++] += 1;
if (columnToApplyExcessTo == maxWidths.length) {
columnToApplyExcessTo = maxWidths.length - numberOfColumns;
}
}
};
this.adjustWidthsForColspans = function(widths, maxWidths, colspanWidths) {
var lastNumberOfColumnsWidth = null;
var excess = null;
this.each(
colspanWidths,
function(colspanWidth, index) {
lastNumberOfColumnsWidth = this.getWidthOfLastNumberOfColumns(
maxWidths,
index + 1
);
if (colspanWidth && colspanWidth > lastNumberOfColumnsWidth) {
excess = colspanWidth - lastNumberOfColumnsWidth;
this.spreadOutExcessOverLastNumberOfColumns(
maxWidths,
excess,
index + 1
);
this.setMaxWidthsOnNonColspanColumns(widths, maxWidths);
}
},
this
);
};
this.adjustColspansForWidths = function(widths, maxWidths) {
this.each(
widths,
function(row, rowIndex) {
var colspan = maxWidths.length - row.length + 1;
if (colspan > 1) {
row[row.length - 1] = this.getWidthOfLastNumberOfColumns(
maxWidths,
colspan
);
}
},
this
);
};
/*
* Utility functions
*/
this.trim = function(text) {
return (text || "").replace(/^\s+|\s+$/g, "");
};
this.each = function(array, callback, context) {
var index = 0;
var length = array.length;
while (
index < length &&
callback.call(context, array[index], index) !== false
) {
index++;
}
};
this.rightPad = function(value, length) {
var padded = value;
for (var i = 0, j = length - value.length; i < j; i++) {
padded += " ";
}
return padded;
};
}
module.exports = WikiFormatter;
|
STACK_EDU
|
Review of The Mag Feeder from Tactical Development
Review of The Mag Feeder from Tactical Development. This is the product that was featured in the SHOT Show® Pop-Up Preview video on my channel (https://youtu.be/R11jd1fANlY).
#TacticalChicken #GiveThemTheBird #BirdIsTheWord
Thanks to Brad Hellyar for providing this product for my review at SHOT Show 2020. This product can be viewed online at https://www.tactical.dev/.
My Amazon lists of approved items:
Gizzard Gary Gear - https://amzn.to/2lzfCPA
Gizzard Gary Tech - https://amzn.to/2NSQRLQ
Gizzard Gary Decor - https://amzn.to/2Y3IHEm
*As an Amazon Associate I earn from qualifying purchases.*
Gizzard Gary Productions
Credit: Gary Decker
Thank you for watching. Please like, comment, share, and subscribe! Visit the Gizzard Gary Productions web page at http://www.GizzardGary.com for all of my social media links and other information.
You can purchase Gizzard Gary merchandise (and help support the channel) at https://shop.spreadshirt.com/gizzardgary.
Also, if you like my videos and want to support my channel, please go to my Patreon page at https://www.patreon.com/GizzardGary, and join these fine Patrons:
Super Chicken (Level 4):
jason stewart: https://www.patreon.com/user?u=11402910
Gizzard Guru (Level 3):
Rich White (https://www.patreon.com/user?u=9757779)
TheGunToting Pacifist (https://www.patreon.com/user?u=14058816)
Jeffrey Flores: https://www.patreon.com/user?u=2390220
Gizzard Groupie (Level 2):
Ruben Rivera Jr https://www.patreon.com/user?u=5648832
James T. Seckinger, JR: https://www.patreon.com/user?u=6839815
Seven Wonders: https://www.patreon.com/user?u=14656477
Buck Stanley: https://www.patreon.com/user?u=10299758
Gizzard Gang (Level 1):
jaime avila esparza: https://www.patreon.com/epsarta117
The Outlaw Hatfield: https://www.patreon.com/TheOutlawHatfield
Ghost Tactical: https://www.patreon.com/GhostTactical
Stacey Nelson: https://www.patreon.com/user?u=17967758
Calaveras 32Spcl: https://www.patreon.com/Calaveras32Spcl
Randy Shields: https://www.patreon.com/9952178
Gabriel Stark: https://www.patreon.com/8570857
Daniel Pfeiffer: https://www.patreon.com/dtpfeiffer
C4 Defense: https://www.patreon.com/c4defense
Travis Tuominen: https://www.patreon.com/user?u=25148585
©2020 Gizzard Gary Productions. All rights reserved.
|
OPCFW_CODE
|
AKOVIA is Automated Knowledge Visualization and Assessment
Although HIMATT has already been used by several researchers, it has two design problems worth mentioning. On the one hand, the user interface was accepted by researchers and subjects alike, and it even had a good usability (Pirnay-Dummer, Ifenthaler, & Spector, 2010). On the other hand, it was a web service which integrated both the data collection and the analysis. Researchers understandably wanted to integrate the data collection into their experiments and studies. However, subjects needed to log into HIMATT in order to input their data as text or draw graphs. They needed to enter another login, username, and password, which might have disturbed the experimental setting in some cases. The second design problem results from the first: We were often given raw data to upload into the HIMATT system so that the researchers could use the analysis facilities on their data. After following this procedure more often than the system had been used through the “front door,” we felt it was time for a complete redesign of the blended methods.
AKOVIA supports two different model input formats: 1. Re-representations on graphs (e.g. list form), 2. Re-representations as text
AKOVIA transforms the text into the list form. For several technical reasons, MS Excel® files are used to input data into AKOVIA. Although it is unconventional and usually XML is used, we found that the Excel format has several benefits, especially when character sets in plain text sometimes raise incompatibilities. Moreover, in some methodologies the list forms of models are hand coded and researchers find it easier to work with Open Office and/or Excel to input data. However, in the future we will also work on a stable XML input format to ensure better connectivity with other computer programs.
AKOVIA places no explicit limits on the size of data which can be investigated and analyzed. Large concurrent analyses used to slow our servers down to the point where the browser experienced time outs. Therefore, we separated the topology of the small analysis grid into theupload server, which takes in the files, and the analysis servers. The latter access the upload server and process the tickets offline. Afterwards, the results are uploaded to the upload server and the user is notified. Depending on the number and size of concurrent jobs, a response may take hours or sometimes even days. The figure below shows a simplification of the server topology.
A documentation is available here: AKOVIA Documentation
Please refer to the following works when using AKOVIA:
Ifenthaler, D. (2010). Scope of graphical indices in educational diagnostics. In D. Ifenthaler, P. Pirnay-Dummer & N. M. Seel (Eds.), Computer-based diagnostics and systematic analysis of knowledge (pp. 213-234). New York: Springer.
Ifenthaler, D., Pirnay-Dummer, P., & Seel, N. M. (Eds.). (2010). Computer-based diagnostics and systematic analysis of knowledge. New York: Springer.
Pirnay-Dummer, P., Ifenthaler, D., & Spector, J. M. (2010). Highly integrated model assessment technology and tools. Educational Technology Research and Development, 58(1), 3-18. doi: 10.1007/s11423-009-9119-8
Pirnay-Dummer, P., & Ifenthaler, D. (2010). Automated knowledge visualization and assessment. In D. Ifenthaler, P. Pirnay-Dummer & N. M. Seel (Eds.), Computer-based diagnostics and systematic analysis of knowledge (pp. 77-115). New York: Springer.
|
OPCFW_CODE
|
What is the Daily Synchro
The Daily Synchro is a short, summary report of your daily deliverable(s) and your planned activity for the next day. The Daily Synchro is written daily.
The Daily Synchro's content is very similar to what you would say during a daily stand-up meeting. In fact, the Daily Synchro originated out of a need to avoid daily stand up meetings and replace them, for remote or WFH teams, by an online equivalent. However, while stand-up meetings are primarily for informing and communicating with other team members, the Daily Synchro is primarily for the benefit of the customer.
If your team has a daily stand-up meeting, a daily synchro may not be necessary.
What is it for
The purpose of the Daily Synchro is to keep the product owner informed of your progress:
- Inform them of your daily deliverable so that they can verify them
- Inform them of your intended focus for the next day, so that they can, if necessary, change their priorities and direct you to focus on something else
- Ask them for clarifications about their requirements
- Inform them of any technical (or functional) difficulties that you may have encountered
Note that the purpose of the Daily Synchro is not to provide minute-by-minute accounting of your day’s 8 hours of work. The Daily Synchro should be short and to the point.
When should I write my Daily Synchro
You should write a Daily Synchro every (working) day at 2pm.
As its name implies, the Daily Synchro is a daily report and therefore must be produced every day. Since it includes your daily deliverable(s), you should write it once your daily deliverable(s) are completed. As per our best practices for the daily deliverable, this must be at 2pm rather than at the end of the day.
For some projects, the lead developer will collate the individual Daily Synchros from each developer and write a team Daily Synchro for the customer. Delaying your Daily Synchro will simply create unacceptable pressure on the team leader.
Where should I write my Daily Synchro
This may vary from project to project. Generally, you should write your Daily Synchro in the designated Slack channel. This may be a dedicated “Daily Synchro” channel in the customer’s Slack group, or it may be in the project channel in the Dzango Slack group.
What is the format of the Daily Synchro
Keep the Daily Synchro short and to the point. Avoid unnecessary words (eg "Added ..." instead of "I have added..."). You can follow the same format as for conventional commit messages.
Avoid unnecessary sentences such as "please review and comment", unless you want to point the customer's attention to a specific aspect of the deliverable.
Avoid sentences like "I will finish this tomorrow". This is implied (otherwise you should report instead that this ticket will take you longer than expected, that therefore you do NOT expect to finish this tomorrow, and that you ask whether this ticket's priority should be reconsidered).
Recommended date format: DDDD, dd MMMM (eg Wednesday, 19 October).
The date format may vary according to the customer's preferences. Please follow a consistent format as per previous reports.
List your daily deliverable(s) here.
Only qualified deliverables should be listed in this section. If it's not a deliverable, use another section or report it when it becomes one.
For each deliverable, provide all the relevant information for the stakeholders to verify it. This may vary according to the deliverable, but could include some or all of the following:
- Url to relevant page in staging server
- Instructions on how to verify the feature or bug fix
- Appropriate credentials if login is required as a specific user
- Link to ADR or ticket analysis
- Link to ticket
- Videos or screenshots
* For admin, made the activity page mobile responsive
* For admin, moved "Download all activity" button above table [link to page on staging server]
* ADR: Which css scoping to use [link to ADR]
* Finished analysis for the implementation of the print layout of the users table. [link to ticket]
* Added unit test for the case when getActivities gets future date. [link to CI running unit tests on staging image]
Work in progress
In this section, list anything that is available for stakeholders to see that does not qualify as a deliverable.
This could include a feature or bug that that has not yet been fully implemented or not fully tested, but is sufficiently advanced that the customer can start validating it without wasting their time.
This should be reported using the same format as a deliverable, but with additional indications of what has been implemented and/or tested, and/or what has NOT been implemented or tested. Please avoid vague expressions like "Not fully tested" in favour of, say, "Not tested on Safari".
* For admin, the activity list is paginated. Not fully tested in Safari and edge browsers. [link to relevant page on staging]
* Found issue on Firefox where user can't click on the next button
* Tested all known cases on Safari and it worked, but not on other browsers
Use this section to:
- Indicate what your next (tomorrow's) deliverable will be. A simple description (one line) should be enough. Include a link to the relevant ticket.
- Ask questions or request clarification of the specs.
- Notify of an ongoing or new technical issue that may impact your ability to resolve the ticket you are working on
* Make the basic printing take our layout.
* ADR (WIP): How many layers of architecture to put on excel export [link to ADR]
Use this section to request clarification about priorities, or anything not directly related to a specific ticket.
Anything useful to the customer that does not fit in any of the above sections:
- If working in pairs, who is your pair partner
- Leave of absence
* I noticed an issue with open activity in new tab button [link to ticket]. Is this high priority? this may take some time, should we skip this for this week?
* Pair programming with @colleague
* Tomorrow I will be on leave.
* I got stuck on a ticket [link to ticket], which i have paused. Since this was low priority anyway, I will continue it next week only
|
OPCFW_CODE
|
Many world prefer the proportions of the higher RPG maker XP sprites come the smaller, chibi-styled sprites of later engines. It’s very easy to transform XP sprites to the new format, by adhering to a couple of simple steps.
You are watching: How to import sprites into rpg maker mv
The easiest way to use XP character sprites in later on engines is to remove the entire very first column of the personality sheet and save it right into your brand-new project’s characters folder, with a $ at the start of the document name.
Let’s take it a look at the two character layouts to understand how they work and why that first column of sprites have the right to be removed, climate go v the measures in a couple of graphics programs. Finally, we’ll look at why the newer engines usage $ and also ! in ~ the start of the paper names, so you’ll recognize when you must use them, too.
According come the Terms and also Conditions, you have the right to use the default resources from any type of RPG device engine, in any other RPG maker engine, as long as you have a license for both
The Differences in between the personality Sheets
An RPG device XP character sprite has 4 columns, and also four rows. Each row represents the personality facing various directions, and also each pillar is a different computer animation frame. When the character is moving, the game cycles through these frames in a left-to-right order. When the character is not moving, the first frame is presented as the idle pose.
In later versions (RPG device VX, VX Ace, MV and MZ), one individual character sprite has actually only 3 columns, and also four rows. The rows additionally represent the personality facing different directions, and the columns are various animations frames. However, rather of cycling v frames in a left-to-right order as soon as the character is moving, the game zig-zags back and forth, starting at the center. Once the character is no moving, the middle framework is shown as the idle pose.
Why go they change from 4 columns come three? If friend look carefully at the XP sprite, you’ll an alert that the first and third frames are identical. The frames cycle from the idle pose, come one foot forward, back to idle, then the various other foot forward.
The newer format simply combines these two similar columns, and also uses the middle column as the idle pose.
The result is that the picture size is smaller sized with 3 columns than v four, i m sorry reduces memory usage, loading speed, and also project size. However the characters still animate correctly.
How to eliminate the very first Column
Now that you know why the styles are different, let’s look at the procedures to do it happen. It’s rather easy, and you only need a basic image editing and enhancing program to carry out it. Note, your program must enable transparency in images, so Microsoft’s paint program is out.
GIMP (GNU picture Manipulation Program) is a free graphics editing and enhancing program, i m sorry you can download native http://www.gimp.org
Open your RPG an equipment XP personality sprite image in GIMP, then select Canvas dimension from the photo menu.
Now job-related out the correct broad by multiplying your image width (128 in this case) by 0.75 (to acquire 96). Go into that as the width, then traction the photo in the preview all the way to the left, so the frame is approximately the 3 columns ~ above the right. Fight the Resize button, i beg your pardon will crop away the area external the frame, leave you v what’s inside.
Finally, pick Export indigenous the document menu to conserve your brand-new image. Put a $ at the start of the document name – we’ll go into the reason for that a bit later.
Paint.net is likewise a free graphics editing programs, i beg your pardon you can download native http://www.getpaint.net – this is actually my wanted image editing and enhancing program as it has a very basic interface and also does practically everything ns need. I just resort to GIMP if I should use a grid.
Open your RPG maker XP personality sprite photo in Paint.net, then pick Canvas dimension from the photo menu.
Now work out the correct width by multiply your picture width (128 in this case) through 0.75 (to acquire 96). Get in that together the width, climate click the facility right anchor, i m sorry will keep the 96 pixels on the appropriate side that the image. Struggle the yes button, i beg your pardon will chop away the overfill area on the left that the image.
Finally, save your new image. Put a $ in ~ the start of the file name – we’ll get in the factor for the a little bit later.
How to include your new Character to her Project
You deserve to use the source Manager in your brand-new project to income your new character sprite. Just select Graphics/Characters or img/characters depending upon your RPG an equipment version, fight the income button, and go find your image.
In RPG device VX and also VX Ace, friend will have an alternative to set transparent and also translucent colors. Right-click and also then left-click on the background of the image, otherwise it could import there is no the transparency.
I uncover the resource Manager clunky, and also some civilization have difficulties with the papers not showing up after importing. I likewise don’t like having actually to set my transparent color when I’ve currently done that in the image. So I favor to usage use the Windows record Explorer to move things around. If you want to go this means just traction the picture into the Graphics/Characters or img/characters folder, relying on your RPG machine version.
Now your picture is imported, you can use it because that actor or occasion sprites.
Why the document Name is Important
RPG device XP character photos don’t have any type of special symbols in the paper names. There is one character per image, and they just work.
But if you were to layout the photo as shown over and just drop the document into your later-version personalities image folder, you’ll view some funny stuff occur when you shot to use the sprite because that a personality or one event.
Only a small part of the sprite is displayed!
Here’s why …
What’s Special around the $
RPG makers after XP actually have actually two layouts for personality sheets. The very first one is a solitary character, as displayed above, and the second one enables a character sheet to have sprites for eight different characters.
By putting a $ in ~ the begin of the file name, you’re informing these programs that this character sheet is the best size because that a single character, and also it will use the entire image.
By leaving turn off the $, you room telling the programs the this character sheet is huge enough to organize eight personalities (even if few of them are empty), and it will certainly divide the picture up and let you usage a section of the picture for each character.
If you only have one character in her image, and it’s only large enough for that one character, yet you leave the $ turn off at the start of the paper name, the program is going to think there room actually eight characters and also will division the picture up accordingly. Then each of those eight segments will certainly be further split into 4 rows and three columns, i beg your pardon is why you finish up v a small partial-sprite.
What’s Special about the !
So we’ve sorted out our NCP character sprite. Now what around that door?
Notice exactly how it’s no longer aligned v the bottom of the building! This is since RPG machine XP paint, etc its personalities aligned with the tile grid, but later versions of the engine draw personalities a couple of pixels higher. This is to prevent the figure of the personality standing right on the edge of things:
In the second image, I’ve temporarily change the name the document for the personality on the left to have actually a ! in ~ the beginning. The record for the character on the right has actually no ! at the start of the paper name.
See more: How Many Cups In 20 Oz Is How Many Cups? Convert 20 Ounces To Cups
If you have actually a character picture that demands a $ and a ! in ~ the begin of the name, the doesn’t issue which one friend put first – it can be $! or !$ – they will both occupational the exact same way.
Putting it all Together
So it’s pretty basic to usage RPG an equipment XP character sprites in later versions of the engine. Simply remember to:Change the canvas size to crop away the very first column the spritesPut a $ in ~ the begin of the filename if it’s a single character, and also no $ if you’re allowing for up to eight charactersPut a ! in ~ the begin of the filename if it’s an object/scenery sprite that demands to heat up with the grid, and also no ! if it’s an actor/NPC
|
OPCFW_CODE
|
3.10 Logging Service
PacketiX VPN Server 2.0 automatically writes logs for operational
status and packets flowing over Virtual HUBs as a log file, thereby
incorporating a function which enables a simple and sure way to confirm
proper operation as well as trace problems and discover any unauthorized
access & policy breaches at a later date. This section explains the
logging service integrated into PacketiX VPN Server 2.0.
3.10.1 Log Save Format & Save Cycle
Types of Logs Saved
The VPN Server automatically writes the Server Log as the log for the
entire VPN Server.
Also, in addition to each of the Virtual HUBs writing a security log
recording important operating conditions relating to the hub's
administration and VPN connection records, they also write packet logs
for packets types pre-designated by the Virtual HUB Administrator.
All log files have their own entry and are written one to a line in a
text file. When multibyte characters such as hiragana & Chinese
characters are used in the log file, the encoding method is unified as
Log File Save Location & Format
All log files create the three subdirectories server_log,
security_log and packet_log in the directory containing the
vpnserver process (or vpnbridge process in the case of the VPN Bridge)
executable files and write each of the server log, security log and
packet log there. A further subdirectory is created for the security log
and packet log written for each Virtual HUB. These logs are then written
to this subdirectory, which is named after its Virtual HUB.
Log File Switch Cycle
Virtual HUB Administrators can set the log file switch cycle of
security logs and packet logs. New file names are then generated based
on this log file switch cycle. The log file names created when the
settable switch cycle and its rules are applied are as follows. Note
that the entire VPN Server log is always switched and saved on a daily
for file name date portion
(Example: 1:45:10 (pm), 7 December 2005
||None (perpetually add
records to same file)
Changing the Virtual HUB Log File Settings
The Virtual HUB Administrator can set the switch cycles of the
Virtual HUB's security log and packet log by clicking on [Log save
settings] in the VPN Server Manager. When not wishing to save a log
file, deselect the relevant checkbox prevents any log file from being
saved for that type of log. It is also possible to select the details of
which types of packet logs should be saved.
All Virtual HUB logs are set with a one day switch save cycle in
In the vpncmd utility, use the [LogEnable], [LogDisable],
[LogSwitchSet] and [LogPacketSaveType] commands.
Fig. 3-10-1 Log save settings window
Measures for Log Files Exceeding 2Gbytes
While the each log file increases in response to the log contents and
volume, when exceeding 2Gbytes (or 2,147,483,648 bytes to be precise),
that log file is automatically divided and saved approximately every
2Gbytes. The first file keeps the original file name while the second
and subsequent files are sequentially named "~01", "~02" and so on.
3.10.2 Server Log
The server log is saved under the [server_log] directory.
The entire VPN Server operating log is saved in the server log, which
saves detailed operating records including event records upon the launch
& termination of the VPN Server and when & what type of connections were
received. Therefore, subsequent analysis of this log enables the tracing
of unauthorized access and the cause of problems.
In addition, copies of each of the Virtual HUBs' security logs are
saved together in the server log so that even if a Virtual HUB
Administrator sets the security log not to be saved, it is always saved
automatically in the server log. Accordingly, even when the Virtual HUB
Administrator does not save the Virtual HUB logs or deletes them, their
contents can still be accessed from the VPN Server's server log.
3.10.3 Virtual HUB Security Log
The Virtual HUB security log is saved under the
[security_log/Virtual HUB name] directory. The security log records
information on sessions which connected to the Virtual HUB, records
within the Virtual HUB (address table and database updates etc.) and
records relating to Virtual HUB administration (user creation etc.).
3.10.4 Virtual HUB Packet Log
The Virtual HUB packet log is saved under the [packet_log/Virtual
HUB name] directory. The packet log can save all of the headers of
packets flowing within the Virtual HUB or their entire payloads.
However, saving all types of packet logs generates a massive amount of
log file data. That is why the Virtual HUB Administrator is able to
select which types of packets to register in the packet log. The types
of packets which can be selected in the [Log save settings] window and
their contents are as follows.
Packets saved when this type is selected
|TCP Connection Log
Those TCP/IP protocol packets in which a TCP/IP connection
between a client and user is established or disconnected.
|TCP Packet Log
All TCP/IP protocol packets.
|DHCP Packet Log
Those UDP/IP protocol packets which are control data for
|UDP Packet Log
All UDP/IP protocol packets.
|ICMP Packet Log
All ICMP protocol packets.
|IP Packet Log
All IP protocol packets.
|ARP Packet Log
All ARP protocol packets.
|Ethernet Packet Log
When set to save packet logs, the Virtual HUB saves the packet log
types pre-designated by the Virtual HUB Administrator from among all
virtual Ethernet frames flowing within the Virtual HUB. Each Ethernet
frame is analyzed with the highest possible layer from layer 2 up to
layer 7 using the VPN Server's internal high-level packet analysis
engine and important header information is saved as a packet log.
In addition, the Virtual HUB Administrator can write not only the
header information but also the entire contents of the packet (bit
sequence) to the packet log in 16 decimal format. In this case, note
that it is necessary have a high volume disk capacity in proportion to
the total size of the packets actually transmitted.
In default, only the packet header information of two packet types,
namely the TCP connection log and DHCP packet log, are saved. While this
setting value is sufficient for many environments, change the settings
as required to save more detailed packet information. Please note that
saving all pockets logs is not practical in view of today's broadened
3.10.6 Obtaining Log Files on a Remote Administration Terminal
The log files written by the VPN Server and Virtual HUBs are saved on
the physical computer disk on which the VPN Server is running. However,
reading and downloading of the files written to the physical disk is
typically limited to that computer's Administrators and users capable of
local log in.
The PacketiX VPN Server employs a mechanism which allows log files to
be read remotely without having to actually log in locally in
consideration of the fact that the VPN Server and Virtual HUB
Administrators may not be the System Administrators of the computer
running the VPN Server. This is known as the remote log read function.
The remote log read function is very easy to use. Clicking on the
[Log File List] button when using the VPN Server Manager displays a list
of the log files which can be read with current authority along with
their file size and time of last update. Log files can be selected
arbitrarily from this list and downloaded to an administration terminal.
Data is automatically SSL encrypted to ensure safety when transferring a
log file because the administration connection's TCP/IP connection is
The [LogGet] command can be used in the vpncmd utility.
The VPN Server Administrator can remotely obtain the VPN Server's
server log, and the security logs and server logs of all Virtual HUBs.
Virtual HUB Administrators can only remotely obtain the security log and
server log of the Virtual HUB for which they have authority, and cannot
remotely acquire any other log files.
When connected to a cluster controller in a clustering environment, it
is possible to collectively enumerate and designate the log files of all
cluster member servers including the cluster controller, and download
Fig. 3-10-2 Log file list display window
3.10.17 Syslog Transmission function
As explained in 「3.3.17 Syslog Transmission Function」, enabling the Syslog Transmission function
prevents log data sent by the syslog protocol from being saved to the
local hard disk.
|
OPCFW_CODE
|
Getting started with XNA
I was recently asked to write a guide to getting started with XNA to be published in a guide we will be distributing this year. As I was writing it I was thinking this would be great for the blog so now its finished here it is!
NB: If you don't know what XNA is and would like to find out more then check out creators.xna.com
Getting started with XNA Game Studio
Before you can develop games for XNA you need to download and install Microsoft Visual C# 2005 Express Edition and XNA Game Studio Express. This will give you the IDE and allow you to code the game and then build and deploy it.
- First download and install Microsoft Visual C# 2005 Express Edition from http://msdn2.microsoft.com/en-us/express/aa975050.aspx.
- Next download and install Microsoft Visual Studio 2005 Express Editions Service Pack 1 from http://www.microsoft.com/downloads/details.aspx?FamilyId=7B0B0339-613A-46E6-AB4D-080D4D4A8C4E&displaylang=en.
- Finally download and install Microsoft XNA™ Game Studio Express from http://msdn2.microsoft.com/en-us/XNA™/aa937795.
Once you have installed the above you can navigate to XNA Game Studio on your start menu and fire up the application:
XNA on the Xbox 360
The XNA platform can not only run on Windows but also the Xbox 360, below is a guide to setting up the Xbox 360 for XNA and deploying games onto the console.
For the purpose of this part of the guide we will assume you already have XNA Game Studio fully set up and working on your PC (if not please see the above ‘Getting started with XNA Game Studio’ guide).
- To deploy games onto the Xbox 360 your console must be connected to both the Internet and your PC. You can do this with the Ethernet port on the back of your console or using the optional Wi-Fi module. It is also essential that you have the optional Xbox hard drive unit attached to your Xbox; this is where the games will be stored once deployed.
- You also need to ensure you have an Xbox Live account setup and registered on your Xbox 360. If you do not have an Xbox Live account visit xbox.com/live to create one and learn more. Either Gold or Silver Xbox Live memberships are sufficient.
- To deploy games onto the Xbox you need to have a creator’s club membership and have the XNA™ Game Launcher installed.
- To subscribe to Creators Club:
- To get a creators club membership go to the ‘Xbox Live’ Blade on the 360 dashboard and select ‘Xbox Live Marketplace’:
- Next select ‘Games’>’Game Downloads’>’XNA Creators Club’>’Memberships’
- At this stage either complete the purchase of a subscription package or press ‘Y’ to redeem a prepaid code.
To Install the XNA™ Game Launcher:
- Within the ‘Xbox Live’ blade select ‘Xbox Live Marketplace’ (see Fig.2)
- Next select ‘Games’>’Game Download’>’XNA Creators Club’>’XNA Game Launcher’
- Proceed to download the Game Launcher
- Once the Games Launcher is downloaded and installed go to the ‘Games’ blade on the Xbox dashboard and select ‘Demos and More’:
- Now select and launch ‘XNA Games Launcher’
- To deploy games to the Xbox you have to set up a secure channel of communication with your PC using a connection key. To do this select ‘Settings’ in the main XNA Game Launcher window and then ‘Generate Connection Key’. You will be presented with a hexadecimal key which you must now enter in XNA Game Studio on your PC.
- Launch Games Studio, go to the menu and select ‘Tools’>’Options’
- Make sure the ‘Show all settings’ box is ticked to the bottom left of the options dialogue:
- Now go to the ‘XNA Game Studio Express’ tab and select ‘Xbox 360’ (See Fig.4)
- Click the ‘Add’ button and enter a name for your console in the ‘Name’ field and the connection key in the ‘Connection Key’ field.
- Click ‘Test Connection’ and if you have entered your key correctly the test should complete successfully and the ‘OK’ button will become active.
- Now return to your Xbox and select ‘Accept new key’
- You are now ready to deploy games to your console.
To deploy a game to your Xbox 360:
- On your Xbox launch the XNA Games Launcher and select ‘Connect to computer’
- Now on your PC load the game you wish to deploy into XNA Game Studio and press F5 to start debugging. XNA Game Studio will deploy and launch the game on your Xbox 360.
- This game will then be permanently stored on the Xbox and can be launched without the PC through the XNA Games Launcher.
Building your first game
XNA Game Studio Express comes with a game project pre-installed. The project is called Spacewars and is a complete XNA game ready to be compiled. However because you have the source code for the game it is an excellent way to start getting used to XNA Games Studio and to write your own code by implementing some new features. The below guide explains how to get started with the Spacewars project and demonstrates how to implement a new feature in the game.
- Firstly we will open the Spacewars project for Windows. In XNA Game Studio select the ‘File’ menu and ‘New Project’ (Ctrl+N).
- In the ‘Visual Studio installed templates’ section select ‘Spacewar Windows Starter Kit’:
- The project will load and you can press F5 immediately to build and deploy the game.
- As an example of how easy it is to make a change to the game or implement a new feature we will change saucer ship in the 3D game into an asteroid (so that player 1 can play as a ship which looks like an asteroid).
- First open the ‘EvolvedShape.cs’ file from the ‘Evolved’ directory in the ‘Solution Explorer’:
- Change the p1_saucer ship mesh to the asteroid2 mesh:
This changes the mesh for player one’s second ship option to the DirectX resource located in /Content/Models/astoroid2.x
- Next we will change the texture for player one’s second ship option to match the mesh:
This changes the texture to the resource located here: /Content/Textures/astoroid2.tga
- You can now recompile (F5) and see the changes you’ve just made in the game. This should give you an idea of just how easy it is to make changes to existing games. For other projects and tutorials on XNA™ visit creators.xna.com or see the Resources section below.
- XNA™ Creators Club: http://creators.xna.com/
- XNA™ Developer Centre: http://msdn.com/xna
- XNA™ Game Studio Express Forums: http://msdn.com/xna/forums
- XNA™ Game Studio Express Blog: http://blogs.msdn.com/xna
|
OPCFW_CODE
|
#!/usr/bin/env python3
import os
import sys
on_windows = "linux" not in sys.platform
from database import DB
from alarm import Alarm
from viewModel import ViewModel
def clear_screen():
command = "clear"
if on_windows:
command = "cls"
os.system(command)
def menu(menu_actions):
keys = list(menu_actions.keys())
valid_max = len(keys)
clear_screen()
while True:
for i, item in enumerate(keys):
print("{0}: {1}".format(i + 1, item))
response = input("Select value [1-{0}]: ".format(len(menu_actions.keys())))
converted = int(response) - 1
if 0 <= converted < valid_max:
menu_actions[keys[converted]]()
else:
clear_screen()
print(
"Invalid entry, please enter an integer value between 1 and {0}".format(valid_max)
)
def add_alarm():
db = DB("alarms.json")
db.add_alarm(Alarm())
def change_alarm():
print("Change alarm")
def quit():
raise QuitException()
class QuitException(Exception):
pass
if __name__ == "__main__":
menu_items = {
"Add Alarm": add_alarm,
"Quit": quit,
}
try:
vm = ViewModel()
while True:
menu(menu_items)
except KeyboardInterrupt:
vm.__del__()
raise
|
STACK_EDU
|
Education/News/March 2019/WikiGap brings editors to close WikiGap
WikiGap brings editors to close WikiGap and open Wiki Pathshala
Author: Rajeeb Dutta
Summary: The event was originated in March 2017, a sister edit-a-thon in four languages between the Swedish embassy in New Delhi and Stockholm was organized for International Women's day. This year, I with the help of local sponsor, Wikimedia India, Swedish Embassy, The Swedish Ministry of Foreign Affairs and Wikimedia Sverige, all came together to organise WikiGap 2019 Kolkata, India workshop, while an edit-a-thon have started from 8th March 2019 till 8th April 2019, to work together to close Wiki gap.
WikiGap 2019 workshop in Kolkata
Group photo of new editors and mentors after WikiGap Workshop and Edit-a-thon at Kolkata
Teaching participants on how to edit articles
Participants working on articles
Participant getting trained to how to add citation in a Wikipedia article
Participants before the refreshments break
Participants at the end of the workshop
WikiGap is an event during which people around the world gather to add more content to Wikipedia about women figures, experts and role models in various fields. Similar events have already been arranged in almost 60 countries worldwide to improve women’s representation on the internet.
Together, we want to bring about a more gender-equal internet – and a more gender-equal world. This is the second year WikiGap is being organized. The event invites broad and diverse participation, and allows for local adaptations to the overall theme of closing the gender gap and other gaps relevant for diversity on Wikipedia.
It was a wonderful moment, as we able to start the WikiGap onsite event with a good number of participation in Kolkata, India, the celebration was more as it also marks the first-ever WikiGap event in Kolkata, the workshop was to sensitize participants about Wikipedia and to enable them to use and contribute to the sum of human knowledge, as an Edit-a-thon were already started, the participants who were new have become "New editors". The event focused on writing new or expand articles on biographies of any Indian women which are of interest to the participants. This workshop was conducted to groom new contributors in Wikipedia spaces from India, especially capture the interest of students to volunteer for Wikimedia projects like WikiGap and help to close the gender gap.
The event show us the possibility to start a "Wiki Pathshala" (Wiki Club), first of it's kind in Kolkata, India, as I being the mentor and organiser felt the need of such workshops to groom new participants into a "New editor" and in a way passing the torch of knowledge and skills to others.
WikiGap event till date, 7 articles were created, 16 articles edited, 12 editors participated, 47.7K bytes added and 166 articles views.
One of the take away of the event was basically an eye-opener for me, to open a "Wiki Pathshala" (Wiki Club).
Social Media channels or hashtags:#WikiGap2019Kolkata, #WikiGap2019
|
OPCFW_CODE
|
use std::path::{Path, PathBuf};
use crate::util::{Filter, SortBy};
// FMState holds all relevant methods and fields to reproduce the state
// of a file manager. (Essentially a singleton as long as tabbing isn't a thing)
pub struct FMState {
current_dir: PathBuf,
focused: Option<PathBuf>,
marked: Vec<PathBuf>,
filters: Vec<Filter>, // filters to apply (no filters: everything is shown)
sort_by: SortBy,
exit: bool,
}
impl FMState {
pub fn new() -> Self {
let marked: Vec<PathBuf> = Vec::new();
let mut current_dir = PathBuf::new();
let start_dir = match std::env::var("HOME") {
Ok(val) => val,
Err(_e) => "/".to_string(),
};
current_dir.push(start_dir);
let sort_by = SortBy::LexioInc;
let focused = sort_by.sort(Self::list(¤t_dir)).pop();
FMState {
current_dir,
focused: focused,
marked,
filters: vec![Filter::Dotfiles],
sort_by,
exit: false,
}
}
pub fn is_marked(&self, pathb: PathBuf) -> bool {
self.marked.iter().any(|pathb_cmp| pathb_cmp == &pathb)
}
pub fn mark(&mut self, pathb: &PathBuf) {
if !self.is_marked(pathb.to_path_buf()) {
self.marked.push(pathb.to_path_buf());
}
}
pub fn mark_current(&mut self) {
if let Some(focused) = self.focused.clone() {
self.mark(&focused);
}
self.move_down();
}
pub fn unmark_current(&mut self) {
if let Some(focused) = &self.focused {
self.marked.retain(|pathb| pathb != focused)
}
self.move_down();
}
pub fn mark_all(&mut self) {
self.list_current()
.iter()
.for_each(|pathb| self.mark(pathb));
}
pub fn unmark_all(&mut self) {
self.marked.clear();
}
pub fn get_idx(&self) -> Option<usize> {
if let Some(pathb_focused) = self.focused.clone() {
self.list_current()
.iter()
.position(|direle| direle.to_path_buf() == pathb_focused)
} else {
None
}
}
pub fn update_by_idx(&mut self, idx: Option<usize>) {
if let Some(idx) = idx {
if let Some(asd) = self.list_current().get(idx) {
self.focused = Some(asd.clone());
}
}
}
// moves out of the current dir, returns index of the former parent dir
pub fn move_out(&mut self) {
if let Some(dir) = self.current_dir.parent() {
self.focused = Some(self.current_dir.clone());
self.current_dir = dir.to_path_buf();
}
self.update_by_idx(self.get_idx());
}
// moves into the current focused dir if possible, returns new focused index
pub fn move_in(&mut self) -> Option<()> {
if self.focused.as_ref()?.is_dir() {
self.current_dir = self.focused.as_ref()?.clone();
let current_list = self.list_current();
self.focused = Some(current_list.get(0)?.to_path_buf());
self.update_by_idx(Some(0));
}
None
}
pub fn move_up(&mut self) -> Option<()> {
let current_list = self.list_current();
let mut new_idx: Option<usize> = None;
if let Some(focused) = &self.focused {
let a = current_list.iter().position(|direle| direle == focused)?;
if a == 0 {
new_idx = Some(current_list.len() - 1);
} else {
new_idx = Some(a - 1);
}
} else if !current_list.is_empty() {
new_idx = Some(0);
}
self.focused = Some(current_list.get(new_idx?)?.to_path_buf());
self.update_by_idx(new_idx);
None
}
pub fn move_down(&mut self) -> Option<()> {
let current_list = self.list_current();
let mut new_idx: Option<usize> = None;
if let Some(focused) = &self.focused {
let a = current_list.iter().position(|direle| direle == focused)?;
if a + 1 == current_list.len() {
new_idx = Some(0);
} else {
new_idx = Some(a + 1);
}
} else if !current_list.is_empty() {
new_idx = Some(0);
}
self.focused = Some(current_list.get(new_idx?)?.to_path_buf());
self.update_by_idx(new_idx);
None
}
fn order(&self, list: &mut Vec<PathBuf>) -> Vec<PathBuf> {
// sort according to the sort_by property
let mut list = self.sort_by.sort(list.to_vec());
// remove filter if needed
for filter in &self.filters {
list = filter.filter(list);
}
list.to_vec()
}
fn is_filter_active(&self, filter: &Filter) -> bool {
self.filters.iter().any(|fil| fil == filter)
}
pub fn toggle_filter(&mut self, filter: &Filter) {
if self.is_filter_active(filter) {
self.filters = Vec::new();
} else {
self.filters.push(filter.clone());
}
}
pub fn list_current(&self) -> Vec<PathBuf> {
let mut a = FMState::list(&self.current_dir);
self.order(&mut a)
}
pub fn list_prev(&self, depth: u8) -> Vec<PathBuf> {
let mut list = Self::list_previous(&self.current_dir, depth);
self.order(&mut list)
}
pub fn list_next(&self) -> Vec<PathBuf> {
if let Some(focused) = &self.focused {
if focused.is_dir() {
let mut a = Self::list(&focused);
self.order(&mut a)
} else {
Vec::new()
}
} else {
Vec::new()
}
}
pub fn get_preview(&self) -> Option<String> {
std::fs::read_to_string(self.focused.as_ref()?).ok()
}
pub fn list_previous(path: &PathBuf, depth: u8) -> Vec<PathBuf> {
if path.exists() && path.is_dir() {
if depth == 0 {
Self::list(path)
} else {
match path.parent() {
Some(parent_path) => Self::list_previous(&parent_path.to_path_buf(), depth - 1),
None => Vec::new(),
}
}
} else {
Vec::new()
}
}
pub fn list(directory_path: &PathBuf) -> Vec<PathBuf> {
let mut list: Vec<PathBuf> = Vec::new();
let path = Path::new(&directory_path);
if let Ok(contents) = path.read_dir() {
for entry in contents {
match entry {
Ok(ele) => list.push(ele.path()),
Err(_error) => {}
}
}
list
} else {
Vec::new()
}
}
// a couple setter, getter fields to keep all fields private
pub fn exit(&mut self) {
self.exit = true;
}
pub fn is_exit(&self) -> bool {
self.exit
}
pub fn set_sortby(&mut self, new_sortby: SortBy) {
self.sort_by = new_sortby;
}
pub fn jump_to(&mut self, new_focused: PathBuf) -> Option<usize> {
if new_focused.is_dir() {
self.current_dir = new_focused;
Some(0)
} else {
self.focused = Some(new_focused.clone());
self.current_dir = new_focused.parent()?.to_path_buf();
self.get_idx()
}
}
pub fn get_currentdir(&self) -> PathBuf {
self.current_dir.clone()
}
pub fn get_focused(&self) -> Option<PathBuf> {
self.focused.clone()
}
pub fn get_marked(&self) -> Vec<PathBuf> {
self.marked.clone()
}
// the following functions are to support executing shell commands
}
|
STACK_EDU
|
Once you have a mapped flow, get colleagues to look at it and brainstorm—maybe over drinks—all possible responses a user could give. Try to break the flow so you can identify the weak points now, before launch. You will get a whole conversation as the pipeline output and hence you need to extract only the response of the chatbot here.
We create a function called send() which sets up the basic functionality of our chatbot. If the message that we input into the chatbot is not an empty string, the bot will output a response based on our chatbot_response() function. Building an AI chatbot, or even a simple conversational bot, may seem like a complex process.
Create and run a chatbot
Such scenarios need to include the automatic handoff of the conversation to your employees. If your company is active across different platforms like an app, a website, social media, you need to provide a seamless and unified user experience across all of them. That supports a number of channels, including websites, Facebook Messenger, Slack, and SMS. The solution allows to create a human-like conversational experience for users.
How do I make AI chatbot in Python?
- Prepare the Dependencies. The first step in creating a chatbot in Python with the ChatterBot library is to install the library in your system.
- Import Classes. Importing classes is the second step in the Python chatbot creation process.
- Create and Train the Chatbot.
Chatbots are a great tool for tracking consumer behavior analysis. Using this data, companies can expand the scope of their activities. Case study here lays down the details if you’d like to learn more. Also check out our article on developing a mental health app. However, if you’ve picked a framework , you’re better off hiring a team of expert chatbot developers.
Define a Platform to Integrate With Chatbots
Such a chatbot create performing the role of an English teacher was an optimal solution for some Chinese areas suffering from English-speaking people shortage. Moreover, BotKit also allows operating with scripted dialogs and supports actions containing branching logic, questions, and other dynamic behavior. It is ready to build chatbot for social networks, mobile applications, and sites. It is famous for simple navigation and a lot of ready templates, so that the development process may run quicker.
Your agents can take care of these complicated questions while your chatbot deals with the easier, repetitive ones. This ensures that your customers get quick answers to all their questions, no matter how complicated these questions are. You can’t just randomly decide to build a chatbot for a specific use case without knowing what your customers actually need. Your aim with building a chatbot is to create a better experience for your customers. That involves actually understanding the problems that your customers are facing and what they need.
Lower support costs
There are numerous technologies that you can use for chatbot creation. You can integrate the chatbot with a number of third-party solutions and systems such as CRM, accounting systems, marketing analytics, payment gateways, etc. An open-source chatbot framework with NLP support and human-level intelligence.
- These chatbots are a combination of the best rule and keyword-based chatbots.
- If you’ve come this far, you already discovered that a chatbot for work that’s simple to use for the end user, could be quite challenging to get right for the creator, i.e. you.
- You can often see chatbots serving customers and helping them make purchases in the retail sector.
- But in case you really like some features of both an AI and a rule-based chatbot, you can get the best of both worlds by building a hybrid chatbot.
- We decided to use this question type to ask about the type of games the user loves to play the most.
- Chatbots can reduce your customer support costs and overheads dramatically.
A common example is a voice assistant of a smartphone that carries out tasks like searching for something on the web, calling someone, etc., without manual intervention. Here, you can see that we set up an object called userDatabase. While in a real-world application you might want to store the data about your users, in this demo project that’s overkill, so we just use a JS object instead. The lines define what to do in case not all information is available. Firstly, the user will be given a list of doctors to choose from. Then they will be asked to provide the date, and lastly — the time of the reservation.
Reason #2: Mine customer data
The instance variable nextIntent will be responsible for that. That’s why we’ll explicitly save and later use this info about the next intent in the variable. According to this research, businesses can save up to 30% on serving customer requests with a chatbot.
Now, once you have that figured out, you’d want to make a rough flow chart that helps you define how you’d like the conversations to go. You don’t need to fill in the responses just yet, just write down the purpose that you’d want the message to serve. Now sure, you could just fill your brand name in there and you’d be good, but you could make it so much better. You could add a little spice by using a name that makes your chatbot come alive and embody your brand personality. That way it does seem like your customers are talking to a bot, it makes them feel like they are interacting with your brand’s mascot. Similar to bot building, you can use testing tools and ready-made solutions for automated regression or user testing.
Will it be a bot hosted on your site, a standalone mobile app, or a Facebook Messenger bot? Today’s two most popular uses are support — think a FAQ bot that can fetch answers to any questions, and sales — think data gathering, consultation, and human handoff. Today, there’s no shortage of chatbot builders that let you set up an off-the-shelf chatbot.
— TwinyBots (@Twinybots) June 6, 2019
This language model dynamically understands speech and its undertones. As a cue, we give the chatbot the ability to recognize its name and use that as a marker to capture the following speech and respond to it accordingly. This is done to make sure that the chatbot doesn’t respond to everything that the humans are saying within its ‘hearing’ range. In simpler words, you wouldn’t want your chatbot to always listen in and partake in every single conversation. Hence, we create a function that allows the chatbot to recognize its name and respond to any speech that follows after its name is called. This makes this kind of chatbot difficult to integrate with NLP aided speech to text conversion modules.
Can you build a bot using AI?
As a result, it is incorrect. To make the bot adapt to the information and examples, we'll require machine learning. The bot's ability to deduce specific probability on which can be decided must then be tested in the real world. However, AI can only be used to create chatbots.
We decided to make it as easy as possible for you to build your AI-powered chatbots and start engaging your customers. Just like providing machine learning cloud services, the major tech companies all have their own frameworks. Choosing which one to use is partly just a matter of which ecosystem you prefer. Using a framework doesn’t mean you have to write the code from scratch. Here, we will use a Transformer Language Model for our chatbot. This model was presented by Google and it replaced the earlier traditional sequence to sequence models with attention mechanisms.
Chatbots can answer 69% of customer questions on their own. Reducing customer service costs by 30% is possible with chatbots. Without an intelligent chatbot, all you have is a team of customer support agents who work on fixed schedules.
- The bots can perform various actions like providing access to the bank’s software or user password reset.
- Identify a discount message for those who want a discount and a discount message for those who don’t.
- Of course it needs to be ‘smart’ and personalized, but crucially it must overall become a tool that employees prefer to use over the ‘old’ way to get a task done.
- Here, you can personalize the default question text “What’s your name?
- We’ve listed the required features and calculated the final price.
- It is ready to build chatbot for social networks, mobile applications, and sites.
Generally, you can say that any user story with a usefulness score of 3 should absolutely be supported in the chatbot. Any chatbot for work will have to take the friction out of this process for the user; or else it may not be viewed as useful enough for the user to come back in the future. This obviously qualifies leave requests quite nicely to get a smart Leave request chatbot overhaul. Building a chatbot has become relatively easy with many dedicated tools, but to make an internal chatbot for work can be a tall order.
This will check if the user’s answer is part of any of the items in the prompts list. Inside the forever block, let’s tell how to make an ai chatbot the user to say something or ask a question. In the text box, write something like “Say something or ask me a question.”
|
OPCFW_CODE
|
What's New Under the Sun? Sun's Love Affair With Linux - page 3
A Little Bit of History
Stephen DeWitt, the primary force behind Sun's Linux strategy since joining Sun as part of the Cobalt acquisition, has also announced his departure from Sun, but even this doesn't detract from Sun's obvious commitment to Linux. Sun's strategy for embracing Linux is complex without being Machiavellian. The high points of Sun's arranged marriage with Linux are:
- Expand the use of Linux on low-priced, scalable servers, augmenting and expanding the current Cobalt servers with more powerful edge servers that are scheduled to be released later in 2002.
- Support Linux binaries running on Solaris systems and simplify compiling Linux code on Solaris systems. Sun provides a Linux runtime environment for Solaris called lxrun that enables a wide range of Linux binaries to run without modification on Solaris systems. Sun also provides a Linux compatibility tool called the Linux Compatibility Assurance Toolkit (LinCAT) containing tools and documentation that simplifies developing applications that are source code compatible across Linux and Solaris systems.
- Invest in selected Linux technology companies that can help Sun sell high-powered, non-x86 hardware. This is clearest in the embedded market, where Sun has invested in companies such as TimeSys and Lineo. TimeSys is a vendor of embedded and real-time Linux that provides Linux for Sun's Netra hardware (designed for network equipment providers). (Disclaimer: I am an employee of TimeSys.)
- Support Linux on Sun hardware. The current versions of both SuSE (www.suse.com) and Debian (www.debian.org) Linux are available for SPARC hardware. Red Hat missed the boat here by dropping Solaris support with Red Hat 6.2, but Sun's new commitment to Linux might cause that to change in the near future. Sun has also committed to providing their own version of Linux on their new server hardware.
On the hardware front, Sun's high-end hardware still has a significant performance advantage over x86 systems, which still can't touch the SMP support provided by Solaris. On the desktop, the story is quite different. It's no secret that Sun's Solaris for the x86 platform has never caught on (even after licenses became free), while Linux is an obvious success story.
Sun's most recent desktop systems are not only inexpensive, but have also been moving toward the use of off-the-shelf hardware through their use of the PCI bus, their use of ATA IDE disks, and so on. By moving towards Linux on the desktop and its lower-end server environments, Sun can guarantee easy integration of Linux systems with their high-end servers and continue to focus on the development of advanced hardware such as their StorEdge storage systems, Netra hardware, and related software.
|
OPCFW_CODE
|
What are the project details and how to use it for portfolios?
Project details is a special kind of TheGem custom fields used for personalizing portfolio items/projects by adding additional data / information about your project. Using project details it is possible to add different type of additional content fields per project basis and personalize these fields with project individual values.
You can check some examples of using project details for personalizing portfolio pages in this examples:
And here is an example of displaying project details fields in portfolio grids/listings:
How To Add Project Details?
In TheGem’s Theme Options you can add project details for portfolio pages as follows:
1. Go to Theme Options -> Single Pages -> Portfolio Page
2. Scroll down to the “Project Details” section
3. Enable project details in “Add Project Details”
Now you can add your first project detail.
Label: This is the field label which will appear in Page Options → Portfolio Item Settings -> Project Details in portfolio items
Field Type: Select between “text”, “number” or “link” field type. Fields with “number” type can then be automatically formatted using WP Locale or custom locale specified in “Custom Fields” element.
Name (Meta Key): This is the field name (meta key).
Default Value: Appears when creating a new portfolio item
Apply on All: In case you wish to apply the default value changes to all existing projects
You can add multiple project details by clicking on “Add Field” button.
After saving your project details in Theme Options:
1. Go to edit any portfolio page
2. Scroll down to “Page Options”
3. Click on “Portfolio Item Settings”
4. In “Project Details” section you can see the project details added in Theme Options and now you can fill these fields with the respective personalized values.
Note: the values for the “number” fields should be entered without thousands separator; for decimals use (.), e.g. instead of writing 9.999.999,99 write 9999999.99. This is important for automatically applying the WP Locale for number formatting when displaying these numbers in portfolio grids and for displaying the number range filter in portfolio grids filter (see chapter “Portfolio Grid” ).
How To Display Project Details In Portfolio Page?
You can display project details in your portfolio page template, portfolio page or in portfolio grids/listings.
Portfolio Page Template
With TheGem templates builder you can create global templates for portfolio pages. When creating/editing the template for portfolio page, you can use “Project Meta/Details” content element to add project details:
In the repeater control of this element you can add multiple project details. Using the “Type” field you can select which project detail field to display. You can choose between different customizable pre-built design skins and layout for displaying project details:
Learn more about portfolio page templates in “Portfolio Page Builder” chapter.
You can also use project details to be displayed in the content of the portfolio page.
Dynamic Tags (Elementor)
While editing the content of your portfolio page in page builder, you can use any content elements and activate “Dynamic Tags” option to display the values of the project details fields. For example, let’s add the “Project Info” element to the portfolio page:
By default, the repeater items in this content element are static. However, you can activate the Dynamic Tags for title/description, select “Project Detail (TheGem)” as the source and select the respective field to display:
In the same way you can use the project details with any content element. Check “Dynamic Tags” chapter for more details on how to use dynamic tags with custom fields and project details.
Dynamic Data (WPBakery)
In TheGem WPBakery you can use following content elements to display project details using dynamic data source:
- Project Info
- TheGem Button
- Custom Fields (manual input)
Let’s check for example “Project Info” element with dynamic data source. After adding this element to your portfolio page, open any of the repeater items and select project details in the “Source” field. After that you can select respective project detail field to be displayed:
In the “Field Type” field you can choose between “Text” and “Number” field types. Use “number” to auto-format the number values according to your current WP Locale
Another example is the “Custom Fields” element. After adding this element to your content, select “Manual Input” in the “Source” field. After that, in the repeater control, you can add multiple fields and select these fields by entering the field’s name:
Check more details in “Dynamic Data (TheGem WPBakery)" chapter.
How To Display Project Details In Portfolio Grid/List?
To display your portfolios you can use “Portfolio Grid” content element as described above in section “How to display portfolio?”
After adding the Portfolio Grid, go to the “Caption” settings section and activate Project Details:
Project details are organized as a repeater control, allowing you to show multiple different project details in grid items.
Following settings are available:
Select between vertical or horizontal (inline) layout for displaying project details
Specify the title/label of the project detail
You can select between Project Details, TheGem Custom Fields (see description on chapter “Custom Fields”), ACF/Toolset custom field groups and taxonomies.
Choose the field (or taxonomy) to be displayed in the item’s caption
Select between “text” and “number” field type. The “number” field type helps to auto-format the number values when displaying in grid using the current WP Locale (see description on chapter “Custom Fields”).
Optionally add prefix/suffix to the “number” value
Optionally select an icon for the selected field/taxonomy.
How To Use Project Details To Filter Projects?
In the “Filters & Sorting” section of the Portfolio Grid element you can activate the extended AJAX filter including different project details and taxonomies. In the “Attributes” setting you will find a repeater control for adding multiple different filter attributes:
Following settings are available:
Specify the filter’s title
Choose between taxonomies, project details, TheGem Custom Fields (see description on chapter “Custom Fields”) and ACF/Toolset custom field groups
Choose the field or taxonomy for filtering
Select between “text” and “number” field type. The “number” field type helps you to display slider control (like price range slider) in filter:
|
OPCFW_CODE
|
US CERT: Default passwords make IT systems easy pickings for hackers
A new government alert warns computer and mobile device users about the risks of continuing to use default passwords. The warning by the U.S. Computer Emergency Readiness Team notes that hackers can easily attack connected systems such as embedded systems, devices and appliances, through their often publically available factory default passwords.
Intended for initial testing, installation and configuration, default passwords are supposed to be changed before a system is put into a production environment. The danger of unchanged default passwords is that they can allow attackers to access a range of systems within a vendor's particular product line.
Passwords can be found in compiled product documentation lists on the Internet. US-CERT also notes that hackers can also identify exposed systems using search engines such as Shodan, making it feasible to scan the entire IPv4 Internet. Default passwords allow attackers to log into a system, usually with root or administrative privileges.
Some examples of incidents involving unchanged passwords include:
- Internet Census 2012 Carna Botnet distributed scanning.
- Fake Emergency Alert System warnings about zombies.
- Stuxnet and Siemens SIMATIC WinCC software.
- Kaiten malware and older versions of Microsoft SQL Server.
- Secure Shell (SSH) access to jailbroken Apple iPhones.
- Cisco router default Telnet and enable passwords.
- Simple Network Management Protocol (SNMP) community strings.
To counter the threat, US-CERT recommends users to change default passwords as soon as possible before deploying a system on the Internet. Besides using sufficiently strong and unique passwords, the alert notes that vendors should design systems with unique default passwords. These passwords may be based on an inherent characteristic of the system, such as a media access control address and the passwords may even be physically printed on the system.
The alert also suggests using alternative authentication mechanisms such as Kerberos, x.509 certificates, public keys, or multi-factor authentication. However, it cautions that embedded systems may not support these authentication approaches or their associated infrastructure. Vendors can design systems in such a way that they automatically require a password change the first time the default is used, explaining that recent versions of DD-WRT wireless firmware use this technique.
Additional security steps users can take include restricting network access and identifying affected products. To restrict network access, US-CERT recommends only allowing network access to required network services. "Unless absolutely necessary do not deploy systems that can be directly accessed from the Internet," the report cautioned.
If remote access is needed, users should consider using virtual private networks, secure shell protocol, or other secure access methods. Additionally, vendors can design systems to only allow default or recovery password use on local interfaces, such as a serial console, or when the system is in maintenance mode and only accessible from a local network, the report said.
Identifying software and systems likely to use default passwords is also a key security step. A vulnerability scanner such as Metasploit and OpenVAS can help users identify systems and services using default passwords on their networks. The warning issued a list of software, systems and services commonly using default passwords:
- Routers, access points, switches, firewalls and other network equipment;
- web applications
- industrial control systems;
- other embedded systems and devices;
- remote terminal interfaces such as Telnet and SSH;
- administrative web interfaces.
Henry Kenyon is a freelance reporter.
- read the US-CERT alert.
|
OPCFW_CODE
|
What is the best practice for responsive website widths?
I have recently started learning how to use CSS media queries to develop websites that are responsive / mobile friendly however, I am not familiar with the best practices associated with determining which width ranges to develop designs for.
For example, I normally use three sets of CSS rules. One for a small width (mobile) , one for a medium width (tablet or small laptop screen) and one for a large width (desktop).
This is what it looks like in code:
@media screen and (min-width: 1495px) {
//CSS RULES HERE
}
@media screen and (max-width: 1494px) and (min-width: 1245px) {
//CSS RULES HERE
}
@media screen and (max-width: 1244px) and (min-width: 751px) {
//CSS RULES HERE
}
My sizing conventions (min width & max width) are completely arbitrary and I determine whether it works from trial and error. Often This doesn't work very well and I can't get the design to look good on all difference screen resolutions.
First of all.... Is there a best practice for the most ideal width ranges to use?
Secondly, is there a framework or template that will make all of this easier?
(That is not Bootstrap).
FYI: I use Foundation 6 to as a grid system but I haven't really found information much on responsive sites in Foundation 6.
Seems rare to me that you haven't found information about responsive sites on Foundation, this framework has been responsive from the beginning, and have wired cool stuff to help you on that matter. Just want to make clear I'm talking here about the Float Grid which isn't default anymore since 6.4 (but you can customize or switch up the grid in SASS settings).
Foundation grid has 3 default expected sizes: small (mobiles), medium (tablets) and large (desktop), in Float Grid you can use this way:
<div class="column small-12 medium-6 large-4></div>
This column will be full width on mobile, 1/2 width on tablet and 1/3 width on desktop; you can even ditch the small-12 because every column has full width (12 columns) by default.
That's the way you approach it from the grid... if you use the SASS version of the framework, you have another powerful tool, a mixing to set code for a specific breakpoint... let's say you want to apply some styling for medium size (and up), you just need to use this in your .scss file:
@include breakpoint(medium) {
// Your SASS/CSS code here
}
Please notice I said "medium and up", that's because Foundation is mobile-first, so everything you put in a smaller breakpoint, will be available on following sizes (unless you override them), if that philosophy is kinda awkward to you, and you need to put some code for only the medium breakpoint, you just need to put the code this way:
@include breakpoint(medium only) {
// Your SASS/CSS code here
}
That's a quite fast way to handle mediaqueries inside your code, totally aligned with Foundation code... the best part?, if you change breakpoint sizes at mid-development, you just need to change the sizes on the _settings.scss file and all code will update on the next build.
As you tagged this question on "Foundation" and mentioned on the question body, I did my answer deliberately Foundation-centric. Hope this helps.
This is a really helpful answer. Thanks!
I usually go for a single breakpoint at 768px.
With that I go for three queries (and they worked out pretty well so far) :
desktop (min-width is 768) [sheet #1]
mobile (max-width is 768) [sheet #2]
portrait (according to orientation) [sheet #2]
I don't think there's really a strict and fixed set of breakpoints that everyone should be using, I feel like it's more depending on what you need for your website.
Although, if you still wanna have a look at a set of breakpoints, I have bookmarked this a long time ago : ResponsiveDesign.is - breakpoints
|
STACK_EXCHANGE
|
Why my code with OpenMP performance different on different environment?
I'm a newcomer to use OpenMP. These days I optimized a program and got different results on different environment. The kernel of my code looks like this:
#pragma omp parallel for
for (int thread_id = 0; thread_id < max_thread; thread_id++)
{
int work_start_pos = spos[thread_id];
int work_end_pos = spos[thread_id + 1];
for (int pos = work_start_pos; pos < work_end_pos; pos++)
{
//Calculate some parameters
//A loop with constant length to finish a convolve
}
}
I compiled and tested it in different environments, the answers are right but the time used by the program are different:
All programs were compiled with : -O2 -march=native -fopenmp
Windows 7 x64, MinGW GCC 4.7.1 64-bit,
Xeon E3 1230v2 4 Core / 8 Threads @ 3.5GHz (I locked it when testing),
8G DDR3 Mem @ 1600MHz
1 Thread , 252.23s
2 Threads, 126.44s
4 Threads, 66.24s
6 Threads, 65.56s
8 Threads, 63.12s
Clearly it is linear speedup from 1 thread to 4 thread.
Windows 8.1 x64, MinGW GCC 4.9.2 64-bit,
i7 4702 MQ 4 Cores / 8 Threads, 8G DDR3L Mem @ 1600MHz.
When only one CPU is fully loaded, it comes to 3.1GHz. When 4 CPUs are fully loaded, it various from 2.6GHz to 2.8GHz.
1 Thread , 289.33s
2 Threads, 161.56s
4 Threads, 110.43s
6 Threads, 109.89s
8 Threads, 132.00s
It is not totally linear speedup, but nearly linear speedup. I guess maybe the CPU Boost has an effort on it.
Arch Linux Kernel 3.1.7 x64, GCC 4.9.2 , hardware is same as Windows 8.1 :
1 Thread , 226.67s
2 Threads, 208.82s
4 Threads, 237.58s
6 Threads, 248.67s
8 Threads, 247.11s
Very strange results here.
The cluster in our lab :
CentOS 6.3 x64 release Kernel 2.6.32 GCC 4.4.6,
Core2 Q8400 @ 2.66GHz max (usually the CPU is 2.0GHz), 4 Cores / 4 Threads, 4G DDR3 Mem @ 1066MHz
1 Thread , 463.51s
2 Threads, 394.96s
4 Threads, 372.48s
It did speedup, when using more threads, but far from linear speedup. And I also tried to use GCC 4.9.2 to compile and test on the cluster, but almost changed nothing.
When you thought you're running on, say, 4 threads, are you sure you actually were? this can be checked using the OpenMP run-time library. Also, your code is not the proper way to use OpenMP.
Let me guess: You are using clock() to time the program execution?
@HristoIliev The program has a "Stopwatch" module using clock() to time the program execution, but it's not major code so I care little about it. Any suggestion?
That's a very common that mistake people make. clock() works differently on Windows than on most Unix-like systems. On Windows it returns the real (wall-clock) time passed since some point in the past while on Linux (also OS X, FreeBSD, etc.) it returns the total CPU time used by all process threads. One should use instead the portable OpenMP timer routine omp_get_wtime().
@HristoIliev Sorry... I made a mistake... They use struct tms t; m_start = times(&t); and
struct tms t; return (static_cast<double>(stop - m_start)) / (static_cast<double>(sysconf(_SC_CLK_TCK))); to time the program. When compiling it on Windows, I can't find "sys/times.h" so I used omp_get_wtime() instead. And it works well on Linux beacuse sometimes I used my watch on my hand to time the program and got the same result.
How do you set the number of threads? By calling omp_set_num_threads or you use default value?
@NikolayKondratyev I use the default value. When testing, sometimes I set the max_thread manually in my code smaller than the return of omp_get_max_threads() , so it won't use all threads.
This is not really an answer, but too long for a comment. I just wand to point out that omp parallel for should really only be used to parallelise for loops with (typically) many more iterations than threads. In order to have each thread get exactly one chunk, simply use omp parallel as in
#pragma omp parallel
{
int thread_id = omp_get_thread_num();
int work_start_pos = spos[thread_id];
int work_end_pos = spos[thread_id + 1];
for (int pos = work_start_pos; pos < work_end_pos; pos++)
{
//Calculate some parameters
//A loop with constant length to finish a convolve
}
}
I doubt that this will make much of a difference with your timings, though, but you should try.
Thank you. I tried your code this morning again and it made no difference. The original program looks like for (int d_index = 0; d_index < WorkSampleSize; d_index ++ ) { Calculation and other } and I have devided it into different unrelated blocks to give each thread to work.
|
STACK_EXCHANGE
|
#!/usr/bin/env python
"""
Undistort image.
(C) 2016-2022 1024jp
"""
import math
import os
import sys
import cv2
import numpy as np
from modules import argsparser
from modules.datafile import Data
from modules.undistortion import Undistorter
from modules.projection import Projector
# constants
SUFFIX = "_calib"
class ArgsParser(argsparser.Parser):
description = 'Undistort image based on a location file.'
datafile_name = 'image'
def init_arguments(self):
super(ArgsParser, self).init_arguments()
script = self.add_argument_group('script options')
script.add_argument('--save',
action='store_true',
default=False,
help="save result in a file instead displaying it"
" (default: %(default)s)"
)
script.add_argument('--perspective',
action='store_true',
default=False,
help="also remove perspective"
" (default: %(default)s)"
)
script.add_argument('--stats',
action='store_true',
default=False,
help="display stats"
" (default: %(default)s)"
)
def add_suffix_to_path(path, suffix):
"""Append suffix to file name before file extension.
Arguments:
path (str) -- File path.
suffix (str) -- Suffix string to append.
"""
root, extension = os.path.splitext(path)
return root + suffix + extension
def show_image(image, scale=1.0, window_title='Image'):
"""Display given image in a window.
Arguments:
image () -- Image to display.
scale (float) -- Magnification of image.
window_title (str) -- Title of window.
"""
scaled_image = scale_image(image, scale)
cv2.imshow(window_title, scaled_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
def scale_image(image, scale=1.0):
"""Scale up/down given image.
Arguments:
image () -- Image to process.
scale (float) -- Magnification of image.
"""
height, width = [int(scale * length) for length in image.shape[:2]]
return cv2.resize(image, (width, height))
def plot_points(image, points, color=(0, 0, 255)):
"""Draw circles at given locations on image.
Arguments:
image -- Image to draw on.
points -- x,y pairs of points to plot.
"""
# find best radius for image
image_width = image.shape[1]
radius = int(image_width / 400)
# draw
for point in points:
point = tuple(map(int, point))
cv2.circle(image, point, color=color, radius=radius,
thickness=radius/2)
def estimate_clipping_rect(projector, size):
"""
Return:
rect -- NSRect style 2d-tuple.
flipped (bool) -- Whether y-axis is flipped.
"""
# lt -> rt -> lb -> rb
image_corners = [(0, 0), (size[0], 0), (0, size[1]), size]
x_points = []
y_points = []
for corner in image_corners:
x, y = map(int, projector.project_point(*corner))
x_points.append(x)
y_points.append(y)
min_x = min(x_points)
min_y = min(y_points)
max_x = max(x_points)
max_y = max(y_points)
rect = ((min_x, min_y), (max_x - min_x, max_y - min_y))
flipped = y_points[3] < 0
return rect, flipped
def main(data, saves_file=False, removes_perspective=True, shows_stats=False):
imgpath = data.datafile.name
image = cv2.imread(imgpath)
size = image.shape[::-1][1:3]
undistorter = Undistorter.init(data.image_points, data.dest_points, size)
image = undistorter.undistort_image(image)
undistorted_points = undistorter.calibrate_points(data.image_points)
plot_points(image, undistorted_points)
if shows_stats:
print('[stats]')
print('number of points: {}'.format(len(undistorted_points)))
if removes_perspective:
projector = Projector(undistorted_points, data.dest_points)
# show stats if needed
if shows_stats:
diffs = []
for point, (dest_x, dest_y, dest_z) in zip(undistorted_points,
data.dest_points):
x, y = projector.project_point(*point)
diffs.append([x - dest_x, y - dest_y])
abs_diffs = [(abs(x), abs(y)) for x, y in diffs]
print('mean: {:.2f}, {:.2f}'.format(*np.mean(abs_diffs, axis=0)))
print(' std: {:.2f}, {:.2f}'.format(*np.std(abs_diffs, axis=0)))
print(' max: {:.2f}, {:.2f}'.format(*np.max(abs_diffs, axis=0)))
print('diff:')
for x, y in diffs:
print(' {:6.1f},{:6.1f}'.format(x, y))
# transform image by removing perspective
rect, is_flipped = estimate_clipping_rect(projector, size)
image = projector.project_image(image, rect[1], rect[0])
scale = float(size[0]) / image.shape[1]
image = scale_image(image, scale)
for point in data.dest_points:
point = point[0:2]
point = [scale * (l - origin) for l, origin in zip(point, rect[0])]
plot_points(image, [point], color=(255, 128, 0))
# flip image if needed
if is_flipped:
image = cv2.flip(image, 0)
if saves_file:
outpath = add_suffix_to_path(imgpath, SUFFIX)
cv2.imwrite(outpath, image)
else:
show_image(image, scale=1.0/2, window_title='Undistorted Image')
if __name__ == "__main__":
parser = ArgsParser()
args = parser.parse_args()
if args.test:
print("This script doesn't have test.")
sys.exit()
data = Data(args.file, in_cols=args.in_cols)
main(data, saves_file=args.save,
removes_perspective=args.perspective, shows_stats=args.stats)
|
STACK_EDU
|
Windows Update Stuck At 35%, Cannot Reboot In Safe Mode
Rarely, but in specific cases, a wrong date or time zone setting can break Windows Update. PCrisk is a cyber security portal, informing Internet users about the latest digital threats. Our content is provided by security experts and professional malware researchers. Press on the link to the convenience rollup installation file.
- You’ll probably see the right one listed on-screen while you’re restarting.
- This site came up first in a google search for a DLL file.
Click on the setup file to run the installer and click on Yes in theUser Account Control prompt, if any. In the list, make sure the KB number is pending to be downloaded due to conflicting errors. Now, click on Uninstall a program option under the Programs menu as depicted.
Inside Realistic Programs For Dll Files
One of them is when a user tries to run Windows update, an error appears stating Windows Update cannot currently check for updates, because the service is not running. The cause of this error can be a Windows update service failing to start or a corrupt registry entry causing a service to be not found. After SDI scans your system, it offers a list of potential new drivers.
- Most of these sites just want your traffic, and once a DLL is uploaded, they have little incentive to ensure that the file is kept up to date.
- By placing resources in a DLL, it is much easier to create international versions of an application.
- Even parroting others that don’t attempt to provide supporting data.
Google fmodex64.dll Chrome install failed to start, not working – You might be able to solve this problem simply by removing all earlier versions of Chrome from your PC. One possibility of making a left-click strange action mouse is a certain key on the jammed keyboard, such as Ctrl, Spacebar, Shift, etc. Check all the keys on the keyboard to make sure there is no Which keys are stuck. In the worst-case download here scenario, it’s possible that your router is broken.
Deciding Upon Effective Products Of Missing Dll Files
After a pair of updates (one downloaded and wouldn’t install and the other wouldn’t download) I had a machine that would not shut down and showed 100% use of both CPU and disk. One approach you can take is to download and install the latest version from Microsoft. It’ll either update (yay!), or hopefully tell you why it won’t. This fixes things like the master boot record and other components involved in Windows’ initial startup. Not only did this not feel like a startup issue, but startup issues are also unlikely to result from Windows Update problems, as Windows Update rarely impacts the initial startup sequence.
|
OPCFW_CODE
|
Contemporary packages in other languages also typically use very similar procedures, Despite the fact that fewer strict, and only in sure pieces, in order to decrease complexity, normally in conjunction with complementing methodologies which include information structuring, structured programming and object orientation.
Archived from the original on eleven August 2010. we established 3 levels of instruments ... The subsequent stage offers Python and XML assist, letting modders with additional experience manipulate the sport earth and all the things in it.
We be sure that the python assignment help methods of a good quality offer to the students According to their instructions. We perform a great Check out ahead of sending them to the scholar to ensure that python assignment help companies, which we provide to learners, are totally free from issues.
Typically professors and lecturers give challenging project operate in between frequent lessons. These assignments are presented to examine the typical progress of each pupil of their topics.
It contains a dynamic sort method and automated memory management and has a sizable and complete common library Hope, these routines help you to help your Python coding abilities. At the moment, following sections can be found, we have been Operating not easy to add far more workouts .... Pleased Coding! Download Python from and set up with your procedure to execute the Python plans. It is possible to go through our Python Set up on Fedora Linux and Windows seven, For anyone who is unfamiliar to Python installation.
When you are a global scholar along with your English is far from ideal – don’t worry, We'll aid you using your tasks. Our gurus are all native speakers and they're going to make your grammar, punctuation, and sentence composition the way in which it ought to be.
I took aid for my Advertising Strategy assignment and tutor supply a perfectly prepared marketing and advertising approach 10 times before my submission date. I acquired it reviewed from my professor and there were only tiny alterations. Good perform guys.
Python is on the related lines as Ruby. It is usually an Item Oriented Programming language. Most important concentration of Python is on the code readability.Any Python programmer can complete a code in handful of strains blog rather than coding substantial classes. In combination with Item oriented programming paradigm, Python supports procedural design, practical programming, etc. It offers an automatic memory administration element that makes it builders alternative. Python doesn’t deal with every little thing. Emphasis of Python is restricted, but it works effectively On the subject of currently being extensible.
Whatever you need to do in PyCharm, you do this within the context of the project. A project is really an organizational device that signifies a complete application Resolution. It serves like a basis for coding support, bulk refactoring, coding style consistency, etcetera.
Any technique or procedure is often explained by some mathematical equations. Their nature may be arbitrary. Does stability company of the… Examine additional…
Your browser isn't supported. Please improve your browser to at least one of our supported browsers. You are able to attempt viewing the webpage, but assume features to get damaged.
We prepare the assignment for you holding in your mind the many important rules that is definitely established by your higher education or perhaps the College. So you're sure to get Superb grades while in the examination.
version. Which means as soon as you specify language: python in .travis.yml your checks will operate within a virtualenv (with out you having to explicitly develop it).
An empirical study located that scripting languages, which include Python, tend to be more productive than common languages, such as C and Java, for programming difficulties involving string manipulation and look for in the dictionary, and determined that memory intake was frequently "better than Java rather than Significantly worse than C or C++".
|
OPCFW_CODE
|
报告题目1:Data Management in Microservices: State of the Practice, Challenges, and Res-earch Directions
报告题目2:Fast Search-By-Classification for Large-Scale Databases Using Index-Aware Decision Trees and Random Forests
报告人单位:University of Copenhagen
报告人简介:Yongluan Zhou is a professor in the Department of Computer Science (DIKU) at the University of Copenhagen, where he leads the Data Management Systems Lab (DMS Lab). He also heads the MSc in Computer Science at DIKU. Prior to his current position, he worked as an Associate Professor at the University of Southern Denmark (SDU) and as a postdoc at the Ecole Polytechnique Fédérale de Lausanne (EPFL). He earned his Ph.D. in Computer Science from the National University of Singapore (NUS). His research interests span database systems and distributed systems, with his recent focus being on scalable event driven systems. He has authored more than 80 peer-reviewed research articles in internationl journals and conference proceedings. He serves on the EDBT Executive Board and the SSDBM Steering Committee and has chaired various international conferences, including DEBS 2022, SSDBM 2022, and EDBT 2020. He has also served on the Program Committees of many other international conferences, including SIGMOD, VLDB, ICDE, EDBT, CIKM, and SSDBM.
报告摘要1:Microservices have become a popular architectural style for data-driven applications, given their ability to functionally decompose an application into small and autonomous services to achieve scalability, strong isolation, and specialization of database systems to the workloads and data formats of each service. Despite the accelerating industrial adoption of this architectural style, an investigation of the state of the practice and challenges practitioners face regarding data management in microservices is lacking. To bridge this gap, this talk conducted a systematic literature review of representative articles reporting the adoption of microservices, analyzed a set of popular open-source microservice applications, and conducted an online survey to cross-validate the findings of the previous steps with the perceptions and experiences of over 120 experienced practitioners and researchers.
Through this process, researchers are able to categorize the state of practice of data management in microservices and observe several foundational challenges that cannot be solved by software engineering practices alone, but rather require system-level support to alleviate the burden imposed on practitioners. This talk discusses the shortcomings of state-of-the-art database systems regarding microservices and we conclude by devising a set of features for microservice-oriented database systems.
报告摘要2:The vast amounts of data collected in various domains pose great challenges to modern data exploration and analysis. To find “interesting” objects in large databases, users typically define a query using positive and negative example objects and train a classification model to identify the objects of interest in the entire data catalog. However, this approach requires a scan of all the data to apply the classification model to each instance in the data catalog, making this method prohibitively expensive to be employed in large-scale databases serving many users and queries interactively. This talk proposes a novel framework for such search-by-classification scenarios that allow users to interactively search for target objects by specifying queries through a small set of positive and negative examples. Unlike previous approaches, the proposed framework can rapidly answer such queries at low cost without scanning the entire database. The proposed framework is based on an index-aware construction scheme for decision trees and random forests that transforms the inference phase of these classification models into a set of range queries, which in turn can be efficiently executed by leveraging multidimensional indexing structures. The experiments show that queries over large data catalogs with hundreds of millions of objects can be processed in a few seconds using a single server, compared to hours needed by classical scanning-based approaches.
|
OPCFW_CODE
|
Django field implementation for PostgreSQL tsvector.
django-tsvector-field is a drop-in replacement for Django’s django.contrib.postgres.search.SearchVectorField and manages the database triggers to keep your tsvector columns updated automatically.
Python 3+, Django 1.11+ and psycopg2 are the only requirements:
Install django-tsvector-field with your favorite python tool, e.g. pip install django-tsvector-field.
Add tsvector to your INSTALLED_APPS setting.
tsvector.SearchVectorField works like any other Django field: you add it to your model, run makemigrations to add the AddField operation to your migrations and when you migrate tsvector will take care to create the necessary postgres trigger and stored procedure.
Let’s create a TextDocument model with a search field holding our tsvector and having postgres automatically update it with title and body as inputs.
from django.db import models import tsvector class TextDocument(models.Model): title = models.CharField(max_length=128) body = models.TextField() search = tsvector.SearchVectorField([ tsvector.WeightedColumn('title', 'A'), tsvector.WeightedColumn('body', 'D'), ], 'english')
After you’ve migrated you can create some TextDocument records and see that postgres keeps it synchronized in the background. Specifically, because the search column is set at the database level, you need to call refresh_from_db() to get the updated search vector.
>>> doc = TextDocument.objects.create( ... title="My hovercraft is full of spam.", ... body="It's what eels love!" ... ) >>> doc.search >>> doc.refresh_from_db() >>> doc.search "'eel':10 'full':4A 'hovercraft':2A 'love':11 'spam':6A"
Note that spam is recorded with 6A, this will be important later. Let’s continue with the previous session and create another document.
>>> doc = TextDocument.objects.create( ... title="What do eels eat?", ... body="Spam, spam, spam, they love spam!" ... ) >>> doc.refresh_from_db() >>> doc.search "'eat':4A 'eel':3A 'love':9 'spam':5,6,7,10"
No we have two documents: first document has just one spam with weight A and the second document has 4 spam with lower weight. If we search for spam and apply a search rank then the A weight on the first document will cause that document to appear higher in the results.
>>> matches = TextDocument.objects\ ... .annotate(rank=SearchRank(F('search'), SearchQuery('spam')))\ ... .order_by('-rank')\ ... .values_list('rank', 'title', 'body') >>> for match in matches: ... print(match) ... (0.607927, 'My hovercraft is full of spam.', "It's what eels love!") (0.0865452, 'What do eels eat?', 'Spam, spam, spam, they love spam!')
If you are only interested in getting a list of possible matches without ranking you can filter directly on the search column like so:
>>> TextDocument.objects.filter(search='spam') <QuerySet [<TextDocument: TextDocument object>, <TextDocument: TextDocument object>]>
For more information, see the Django documentation on Full Text Search:
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Hashes for django-tsvector-field-0.9.0.tar.gz
Hashes for django_tsvector_field-0.9.0-py3-none-any.whl
|
OPCFW_CODE
|